What is security awareness training?
Security awareness training is an ongoing program that teaches employees to recognize, report, and respond to everyday cyber risks (e.g., phishing, social engineering, data handling). Modern programs blend short, role-relevant lessons with realistic simulations and instant feedback, and track outcomes like report rate and time-to-report.
Why security awareness training matters
Security awareness training works when it changes day-to-day behavior, not when it merely checks a box. Measure outcomes like reporting rate and time-to-report, keep learning small and frequent, and bake compliance in without losing empathy or relevance. That’s how programs reduce human risk and support business goals.
What actually moves the needle:
- Behavior > box-ticking. Training is information; behavior change is the goal. Shift success from completions to actions people take under pressure.
- Ditch vanity metrics. Completion is an input and click rate is easily gamed. Prioritize reporting rate, real-threat reports, and dwell/time-to-report instead.
- Faster signal, faster response. Track how quickly users report phishing simulations and real incidents; shrinking dwell time is tangible risk reduction.
- Small, frequent, respectful. Replace annual marathons with micro-learning in-flow; cadence + timing beat volume.
- Empathy & personalization. Speak the user’s language (e.g., Finance needs more than a generic “watch your invoices” warning). Role-based, context-rich nudges sustain engagement.
- UX that lowers friction. One report phishing button + instant feedback creates a repeatable habit and visible progress (dashboards, light gamification).
- Compliance is the floor. Meet requirements, but design for behavior and culture so the program actually reduces cyber risk.
What should security awareness training cover in 2025?
Prioritize behaviors, not topics. Build security awareness training around employees’ reality (their tools, time, workflows), then reinforce with short, role-relevant lessons and phishing simulations. Focus first on recognizing/reporting suspicious messages (email/QR/SMS/voice), identity hygiene, and safe data handling - measured by reporting rate and time-to-report, not completion alone.
Curriculum (behaviors first)
- Email & social engineering (multi-channel): Train people to spot phishing attempts across email, QR (“quishing”), SMS, and voice and to report quickly via one clear button. Measure median time-to-report and reporting rate; avoid “too many buttons” confusion.
- Identity & access (in the flow of work): Cover SSO resets, MFA fatigue, and credential harvesters using short, contextual nudges tied to everyday tools; keep it small, frequent, respectful.
- Data handling & collaboration hygiene: Teach safe sharing in mail/Teams/Slack, link previews, and attachment handling as part of daily workflows - embed learning so it helps people do their jobs, not pause them.
- Reporting & incident response habits: Make “see it → report it” the win condition. Provide instant feedback on reports and show progress in dashboards (trends, high-risk patterns).
- Culture guardrails (avoid backlash): Keep lures professional; don’t punish mistakes. Empathy and relevance drive engagement; punitive, school-style training backfires with busy professionals.
Program principle: Design for people, not policies - adapt by role and evolve with the threat landscape so users build real-world habits, not just check the box. You can read our full guide to security awareness training topics here.
How should we deliver security awareness training (models that actually work)?
Ditch marathon courses. Deliver security awareness training as short, in-flow moments reinforced by phishing simulations and instant feedback. Use one report button, integrate with the SOC, and trigger just-in-time nudges from signals (e.g., Microsoft Defender). Personalize by role and adjust cadence when engagement plateaus.
Delivery models (pick 2–3 to combine)
In-flow microlearning + signal-triggered nudges
Deliver 30–90-second tips where work happens (mail, chat, SSO). Fire nudges from detections/telemetry (e.g., risky sign-ins, policy hits) so lessons are timely and contextual. This replaces “annual noise” with habit formation.
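To make the pattern concrete, here is a minimal sketch of a signal-triggered nudge. It is illustrative only: the alert fields (user, app, riskLevel) and the chat webhook URL are hypothetical placeholders, not any specific vendor’s API.

```python
import requests  # third-party HTTP client, assumed available

# Hypothetical endpoint - swap in your own Teams/Slack webhook.
CHAT_WEBHOOK = "https://example.com/chat/webhook"

def handle_risky_signin(alert: dict) -> None:
    """Turn a risky sign-in signal into a short, contextual nudge."""
    if alert.get("riskLevel") not in ("medium", "high"):
        return  # nudge only on meaningful signals to avoid alert fatigue
    message = (
        f"Heads up {alert['user']}: we noticed an unusual sign-in to {alert['app']}. "
        "If that wasn't you, use the 'Report phishing' button or contact IT. "
        "30-second refresher: <link to micro-lesson>."
    )
    requests.post(CHAT_WEBHOOK, json={"text": message}, timeout=10)

# Example call with a made-up alert payload (requires network access):
handle_risky_signin({"user": "alex", "app": "Payroll", "riskLevel": "high"})
```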
Adaptive simulations + teachable landing pages
Treat simulations as practice and SAT as theory; they complement each other. Keep lures professional; pair each simulated phish with a 60-sec landing page and instant feedback to reinforce the report habit.
Role-based paths (finance, IT, exec support)
Swap generic content for job-specific drills (e.g., BEC for AP, SSO hygiene for Devs). Personalize cadence; refresh topics when engagement dips.
SOC-integrated reporting loop
Standardize on one report phishing button and pipe signals to triage. Give immediate feedback on real-threat reports to encourage repeat reporting and reduce false-positive noise.
Compliance overlay, not the program
Meet requirements, but design for behavior change - short, dynamic lessons can satisfy compliance without derailing relevance.
What to measure to tune delivery
- Reporting rate & time-to-report (sim + real) to prove faster detection.
- Dwell time by department to target coaching where it lags.
- Engagement plateaus → refresh content or cadence, not punishment.
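A simple way to operationalize the plateau check above - a minimal sketch that assumes you already export per-wave report rates (the numbers and threshold here are invented):

```python
from statistics import mean

# Hypothetical report rates (%) for the last six simulation waves.
report_rates = [58, 60, 61, 61, 62, 61]

def is_plateauing(rates, window=3, threshold=2.0):
    """Flag a plateau when the latest `window` waves improve by less than
    `threshold` percentage points over the preceding window's average."""
    if len(rates) < 2 * window:
        return False
    previous, recent = rates[-2 * window:-window], rates[-window:]
    return (mean(recent) - mean(previous)) < threshold

if is_plateauing(report_rates):
    print("Engagement is plateauing - refresh templates or cadence, not volume.")
```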
How do you measure security awareness training effectiveness (beyond completion rates)?
The most reliable security awareness metrics go beyond completion: track report rate, time-to-report, real-threat reporting coverage, and a resilience ratio (reports to fails). Prioritize psychological safety and behavior change: reporting should rise over time - even as click rates plateau - and insights should drive targeted coaching and faster incident response.
The KPI shortlist (what to track and why)
- Simulated dwell time: How long it takes users to report a simulated phish once it lands. Lower is better - speed turns instincts into a habit during practice. Track median.
- Simulated threat reporting (report rate): % of people who report a training phish. This is the primary engagement/behavior signal in training - optimize it before worrying about failure rates.
- Real dwell time: Minutes from a real phishing email reaching inbox to the first user report. Shrinking this window reduces attacker dwell time and accelerates containment.
- Real threat detection: Volume of real phishing reports from employees. Tie this to triage to show that training translates into live-fire detection. (Many teams normalize as “coverage”: % of active users who reported at least one real phish this quarter.)
Support metrics (use alongside the essentials)
- Resilience ratio (reports : fails, in sims): A derived view of practice performance. Use it with report rate - not as a substitute - and don’t over-index on failure rate; difficulty and timing skew it.
- Repeat-clicker recovery: Time/attempts for repeat clickers to achieve consecutive correct reports after remedial coaching. Focus on positive, adaptive interventions over punishment.
- Departmental deltas: Compare report rate/TTR across high-exposure roles (Finance/AP, IT, exec support) to target coaching where risk is concentrated.
Why this mix: Hoxhunt emphasizes dwell time + reporting in both simulated and real contexts to prove behavior change and risk reduction - traditional “pass/fail” alone can mislead or be gamed.
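To make these definitions concrete, here is a minimal sketch that computes report rate, median time-to-report (simulated dwell time), and a resilience ratio from one wave of results. The data shape and field names are invented for illustration:

```python
from statistics import median

# Hypothetical per-user results for one simulation wave.
# reported_after_min = minutes from delivery to report (None = never reported).
results = [
    {"user": "a", "reported_after_min": 4,    "clicked": False},
    {"user": "b", "reported_after_min": 11,   "clicked": False},
    {"user": "c", "reported_after_min": None, "clicked": True},
    {"user": "d", "reported_after_min": 27,   "clicked": False},
    {"user": "e", "reported_after_min": None, "clicked": False},  # no action at all
]

reports = [r for r in results if r["reported_after_min"] is not None]
fails = [r for r in results if r["clicked"]]

report_rate = len(reports) / len(results) * 100                 # primary behavior signal
median_ttr = median(r["reported_after_min"] for r in reports)   # simulated dwell time
resilience_ratio = len(reports) / max(len(fails), 1)            # reports : fails

print(f"Report rate: {report_rate:.0f}% | Median time-to-report: {median_ttr} min "
      f"| Resilience ratio: {resilience_ratio:.1f}")
```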
Buyer’s checklist: How do you choose security awareness training software?
Pick security awareness training that changes behavior, not just completion. Prioritize one report phishing path with instant feedback, adaptive simulations + micro-lessons, strong Microsoft/Google integrations, reliable deliverability (no auto-click noise), and human risk-based analytics. Avoid punitive models; design for psychological safety and SOC workflows.
Buyer’s checklist (vendor-neutral, outcome-first)
- One reporting path + instant feedback: Standardize the report button and give users immediate, specific feedback on both simulated and real reports - this builds the habit you actually want. Bonus: it reduces SOC ping-pong.
- Adaptive practice + micro-learning, not annual marathons: Look for personalized difficulty, role-relevant content, and short lessons that trigger from events (e.g., risky sign-ins). This beats “set-and-forget” campaigns.
- Deliverability that reflects humans, not machines: Require clean allowlisting and proof that tools won’t inflate clicks - e.g., QR and URL rewriting can corrupt results if not handled. Ask vendors to show how they de-noise metrics.
- Microsoft/Google ecosystem fit: Verify native paths for Microsoft Defender / M365 and Gmail, with signals flowing to your SOC dashboards for triage and coaching triggers.
- Meaningful metrics: Beyond click rate, insist on report rate, time-to-report and real-threat reporting coverage. Dashboards should segment by role/region to target coaching.
- Positive reinforcement > punishment: Gamification and recognition sustain engagement; punitive approaches erode trust and suppress reporting. Check the vendor’s stance and defaults.
- Ethics, privacy, and regional compliance: Ensure responsible lures (no humiliation), data-minimised analytics, and guidance on smishing/vishing legality by country.
- Fresh, real-world content: Prefer libraries updated from real threats and the ability to import lures from your own intel.
- Admin efficiency & scale: Low-ops campaign automation, role scoping, and clear governance. Trial for usability with your team.
- Proof you can show the business: Ask for before/after examples tied to incident response outcomes - e.g., rising real-threat reports and shrinking dwell/TTR - rather than vanity completion stats.
Free & official resources by region (USA, EU, UK)
For the USA, anchor your program to NIST SP 800-50 Rev.1 and CISA phishing guidance. In the EU, use ENISA awareness/cyber-hygiene resources and CERT-EU security guidance. In the UK, rely on NCSC phishing playbooks and micro-exercises. If you’re on Microsoft 365, add Defender Attack simulation training docs.
United States (USA)
- NIST SP 800-50 Rev.1 (Final): end-to-end framework for building a cybersecurity & privacy learning program (governance, roles, lifecycle, measurement).
- CISA - Recognize & Report Phishing: learner-friendly signs to spot and report phish; useful inside teachable landing pages and onboarding.
- CISA/NSA/FBI/MS-ISAC - Phishing Guidance: Stopping the Attack Cycle at Phase One: practical mitigations for defenders and SMBs.
European Union (EU)
- ENISA - Awareness & Cyber Hygiene: hub for awareness topics and good-practice cyber hygiene guidance you can localise.
- ENISA - Raising Awareness Campaigns: ready-to-reuse campaign ideas and materials for EU audiences.
- CERT-EU - Security Guidance 22-001: organisation-level mitigations that explicitly include phishing exercises and tailored training for exec support/IT admins.
United Kingdom (UK)
- NCSC - Phishing attacks: defending your organisation (PDF): layered mitigations; clear advice on simulations, reporting, and response.
- NCSC - Micro exercise: Identifying & reporting a suspected phishing email: short, instructor-led activity you can run internally.
- NCSC - Phishing (Spot & report): public guidance page you can link from training comms and intranet.
Platform (Microsoft 365) - global add-ons
- Defender for Office 365 - Get started with Attack simulation training: licenses, setup, and first-run checklist.
- Defender - Simulations: payload types, scheduling, and campaign design.
- Defender - Deployment considerations & FAQ: allowlisting, metrics, and operational tips.
How do you map security awareness & training to major frameworks?
Anchor Security Awareness and Training to four pillars: NIST CSF 2.0 (PR.AT) for outcomes, NIST SP 800-53 (AT-1/AT-2/AT-3/AT-4) for controls, ISO/IEC 27001:2022 (Clauses 7.2/7.3 + Annex A 6.3) for governance, and CIS Controls v8 (Control 14) for practical tasks. Keep role-based training, auditable LMS reports, and records of course completions as evidence.
Quick mapping (what each framework expects and what you show)
NIST CSF 2.0 - PR.AT (Awareness & Training)
What it asks: People know risks and can do their tasks securely (PR.AT-01/-02).
Show as evidence: Training modules by audience, real phishing campaign drills, user awareness KPIs.
NIST SP 800-53 Rev.5 - AT family
What it asks: AT-1 policy/procedures; AT-2 literacy with practical exercises (social engineering, suspicious comms); AT-3 role-based training; AT-4 training records.
Show as evidence: SAT policy; role catalogs; SCORM-compliant lessons; records of course completions; drill logs for simulated phishing attacks.
ISO/IEC 27001:2022 + 27002:2022
What it asks: Clause 7.2 (Competence), 7.3 (Awareness); Annex A 6.3 “Information security awareness, education & training.”
Show as evidence: skills matrix, course exam (where needed), certificate of completion, LMS reports, periodic refresh micro-modules.
CIS Controls v8 - Control 14
What it asks: Ongoing awareness program with role-specific training and social-engineering practice.
Show as evidence: schedule of training campaigns, roster of high-risk roles, scenario library (BEC, spear phishing), outcome dashboards.
Make auditors (and learners) happy
- Keep one report phishing path and log everything (reports, landing page completions, LMS reports).
- Run role-based training (finance, IT admins, exec support) tied to phishing templates that reflect real security threats.
- Store evidence centrally: policy, mappings, records of course completions, exports of report-rate/TTR, and a quarterly executive summary.
Role-based training: practical tracks for high-risk teams
Effective security awareness and training pairs short training modules with role-specific drills. Start with universal skills (spot & report) and add personalized training for Finance/AP, IT admins, executive support, and developers. Use realistic phishing templates (including spear phishing) and track results by department.
What each audience needs (practice ▸ drill)
Finance / Accounts Payable
- Practice: verify payment changes out-of-band; spot BEC and fake invoices.
- Drill: BEC-style phishing campaign + “verify-and-callback” checklist (email → call-back SOP).
Executive support / assistants
- Practice: gatekeeping for VIPs; verify urgent requests; handle spear phishing and vishing calmly.
- Drill: exec-impersonation email → short voice callback script; optional SMS variant.
IT Administrators
- Practice: challenge unexpected SSO resets, MFA prompts, and admin-panel notices.
- Drill: credential-harvester sim (SSO reset) + micro-lesson on device posture and privileged access.
Developers / engineering
- Practice: spot token theft, package-manager impostors, and pastebin “fixes.”
- Drill: repo-notification lure + secure code training video on dependency trust.
HR / recruiting
- Practice: protect candidate and employee data; handle attachments safely.
- Drill: CV attachment with link-in-document (common drive-by download path) + landing-page teach-back.
All employees (foundation)
- Practice: one-tap report habit across email/QR/SMS; beware urgency and curiosity hooks.
- Drill: rotating phishing templates with instant feedback (micro modules or short videos).
Rollout plan: your first 90 days (from baseline to behavior change)
Set up a small rollout team, announce clearly before launch, deploy one report phishing path, and run an easy baseline phishing campaign with instant feedback. In weeks 3–10, keep comms flowing and add simple role tracks; by week 12, show trends and lock an ongoing cadence.
Week 0–2 - Prep & plumbing
- Form the team: program owner + IT/Sec + Comms + HR. Agree success metrics and channels (email, Teams/Slack, intranet).
- Pre-launch comms: tell people what’s coming, why it helps, and how to report; use short, friendly messages and a simple “what to expect” page.
- One reporting path: deploy a single report button (e.g., via M365/Exchange) and verify the route to triage.
- Deliverability check: finish allowlisting so scanners don’t inflate opens/clicks; sanity-check QR/URL rewriting.
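One pragmatic way to sanity-check deliverability noise is to flag clicks that look automated before they reach your dashboards. The sketch below is a rough heuristic under stated assumptions - the user-agent hints and timing threshold are illustrative, not vendor guidance:

```python
# Heuristic for separating human clicks from scanner/link-rewriter noise.
SCANNER_UA_HINTS = ("bot", "scanner", "preview", "safelinks")
MIN_SECONDS_AFTER_DELIVERY = 15  # near-instant clicks are usually automated

def looks_automated(click: dict) -> bool:
    ua = click.get("user_agent", "").lower()
    too_fast = click["seconds_after_delivery"] < MIN_SECONDS_AFTER_DELIVERY
    return too_fast or any(hint in ua for hint in SCANNER_UA_HINTS)

clicks = [  # invented events
    {"user": "a", "seconds_after_delivery": 2,   "user_agent": "SafeLinks-Scanner/1.0"},
    {"user": "b", "seconds_after_delivery": 340, "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
]

human_clicks = [c for c in clicks if not looks_automated(c)]
print(f"{len(human_clicks)} of {len(clicks)} clicks look human")
```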
Week 3–4 - Baseline & trust
- Launch with a gentle baseline (easy credential harvester), plus a 60-sec landing-page lesson and a “this is practice, not punishment” reminder.
- Keep talking: quick nudges in the first fortnight (newsletters/Slack posts) and a clear help path for questions.
Week 5–8 - Calibrate by role (keep momentum)
- Light role tracks: Finance/AP (vendor change), IT (SSO/MFA), exec support (verification/callback). Increase difficulty gradually - never “gotcha.”
- Recognition over reprimand: highlight good reports, use small rewards/leader shout-outs to build a reporting habit.
- Communication cadence: follow a simple plan - announce → encourage → reinforce with short, repeatable templates.
Week 9–10 - Add realism, responsibly
- Broaden carefully: introduce QR/SMS only where appropriate; keep lures professional and supportive. Pair each drill with instant feedback.
Week 11–12 - Prove impact & operationalise
- Show outcomes: early trends for report rate and time-to-report by department; share quick wins and next steps.
- Lock the loop: publish an ongoing cadence (monthly/quarterly), keep the comms rhythm, and store artifacts in one place for easy reuse.
Quick gut-check: if engagement dips, refresh the message and templates before increasing volume. Consistent communication and positive reinforcement in the first 8 weeks drive long-term results.
Integrations: connect training to security operations (M365, SOC, LMS)
Wire security awareness and training into your stack so practice turns into protection. Standardize one report phishing button that submits to Microsoft Defender, run campaigns from the Microsoft Defender portal, and feed outcomes to your SIEM/SOAR. Pair LMS SCORM packages with behavior metrics.
M365 & email security (make practice → protection)
- Single route for reports. Use one add-in/report button and submit user-reported messages to Defender; reported mail can be auto-forwarded to Microsoft for analysis.
- Campaigns where you work. Schedule and review phishing simulations in the Microsoft Defender portal (licensing: M365 E5 or Defender for O365 Plan 2).
- Noise control. Finish allowlisting (proxies, VPNs, link-scanners) and account for URL/QR rewriting so opens/clicks reflect people - not machines.
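If you run campaigns in the Defender portal, results can also be pulled programmatically for your own dashboards. A minimal sketch, assuming an Entra ID app registration with the AttackSimulation.Read.All permission; verify the endpoint and fields against current Microsoft Graph documentation before relying on it:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<bearer token from your auth flow>"  # e.g., obtained via MSAL client credentials

resp = requests.get(
    f"{GRAPH}/security/attackSimulation/simulations",  # lists simulation campaigns
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
for sim in resp.json().get("value", []):
    print(sim.get("displayName"), sim.get("status"), sim.get("completionDateTime"))
```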
SOC/IR workflows (close the loop)
- From report to response. Push user-reported events and training signals to your security operations center via the Microsoft APIs/SIEM connectors, then trigger playbooks (coach repeat clickers, escalate real threats).
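As an illustration of that loop (not a specific product’s schema), this sketch routes a user-reported message: real threats escalate to a SOAR playbook, repeat clickers get a coaching assignment, and every reporter gets acknowledged. Event fields and webhook URLs are hypothetical:

```python
import requests

SOAR_WEBHOOK = "https://example.com/soar/escalate"             # placeholder
COACH_WEBHOOK = "https://example.com/lms/assign-micro-lesson"  # placeholder

def route_reported_email(event: dict) -> str:
    if event["verdict"] == "malicious":
        requests.post(SOAR_WEBHOOK, json=event, timeout=10)  # real threat -> IR playbook
        return "escalated"
    if event["verdict"] == "simulation" and event.get("previous_fails", 0) >= 2:
        requests.post(COACH_WEBHOOK, json={"user": event["user"]}, timeout=10)
        return "coaching assigned"
    return "acknowledged"  # thank the reporter either way to reinforce the habit

# Example with an invented event (requires network access):
print(route_reported_email({"user": "alex", "verdict": "simulation", "previous_fails": 3}))
```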
LMS & HR systems (prove learning happened)
- Publish cleanly. Export SCORM-compliant lessons (1.2 or v2004) to your LMS for LMS reports, records of course completions, and a certificate of completion - keep behavior metrics alongside.
- Provisioning at scale. Use SSO/SCIM from your identity provider to keep learner rosters in sync (Smart Groups/OU mapping for cohorts).
What “good” looks like (quick checklist)
- One report path → Defender User reported messages → SIEM/SOAR.
- Simulations scheduled in M365; allowlisting done; QR/URL rewriting tested.
- SCORM course for onboarding + short refreshers; behavior KPIs (report rate, time-to-report) visible to managers.
Common mistakes to avoid (so your SAT program doesn’t stall)
Most security awareness and training failures trace to avoidable basics: chasing completion over behavior, running “gotcha” phishing campaigns, splitting reports across multiple buttons, skipping allowlisting (metrics noise), one-size-fits-all content, and no SOC/LMS loop.
Ethical guardrails & user trust
Fair phishing simulations build skills; unfair ones break trust. Keep lures professional, explain the “why,” and use positive reinforcement (not penalties). Standardize one report phishing path, give instant landing page feedback, and collect the minimum data needed. Avoid sensitive topics and humiliation. Transparent comms + quick coaching sustain employee engagement and a security-first culture.
Plain rules (easy to follow)
- State the purpose upfront. Training = practice. Success = fast reporting, not “never clicking.”
- Keep lures professional. Use work-relevant phishing templates (e.g., delivery, meeting, BEC) and avoid shock tactics.
- Coach, don’t punish. Use end-user notifications with a helpful tone; enable notification options for managers to send kudos.
- Match difficulty to risk. Calibrate by role and past behavior; escalate gently.
- Protect privacy. Minimise data, limit access, and summarise results at team level when possible.
- Close the loop fast. Instant, specific landing page feedback after simulated or real reports reinforces the habit.
- Respect local rules. For SMS/voice (smishing/vishing), check regional policies and obtain approvals.
Topics to avoid (or handle with care)
- Personal/medical details, pay/benefits changes, layoffs/reorgs, emergency alerts, political/religious content.
- Anything likely to induce panic or real-world harm (e.g., urging users to cancel credit cards).
- If in doubt, run a quick “no-harm test” with Comms/HR before sending.
How Hoxhunt enables behavior-first security awareness
Hoxhunt pairs adaptive phishing simulations with bite-size micro-learning and instant, positive feedback. It plugs into the Microsoft Defender portal, your SIEM/SOAR, and LMS - so security awareness training proves behavior change, not just completions.
What you get out of the box
- Adaptive practice + micro-modules. Individual learning paths automatically tune simulated phishing difficulty by role, skill, and location; users receive short, interactive micro-training after each simulation to reinforce behaviors. Training typically lands roughly every 10 days.
- Instant, human feedback. When people report suspected phishing attacks, they get real-time, plain-English guidance (what was risky and why), reducing SOC back-and-forth while accelerating learning.
- One reporting route. A single report phishing button in Outlook/Gmail (desktop, web, and mobile) standardizes the habit; reports can flow into Defender and downstream tooling for IR.
- Multi-channel realism - responsibly: Email first, then optional QR and more advanced multi-channel simulations (including deepfakes) where policy allows.
- Metrics that matter: Built-in dashboards emphasize report rate, time-to-report, resilience ratio, and real-threat reporting coverage - so you can show behavior change, not just completions.
- Admin time-savers. Centralized Outlook add-in deployment, SSO/SCIM user provisioning, attribute-based targeting, and a clean admin portal cut ops overhead.
Proof (real-world outcomes)
Legacy security awareness training fell short of engaging the AES workforce to reduce human risk. AES needed a solution that fixed this while scaling training effectively across multiple languages, building a positive security culture and enthusiasm for cybersecurity, and automating analysis of reported threats.
Hoxhunt performance vs. AES’s previous security awareness software tools:
- Reporting rate increased by 526%, from the 3-tool aggregate of 11.5% to 60.5% (measured among the AES employees whose work is computer-based)
- Failure rate decreased by 79%, from the 3-tool aggregate of 7.6% to 1.6%
- Miss rate decreased by 58%, from the 3-tool aggregate of 80.9% to 34%
- Resilience ratio increased by 2533%, from 1.5 to 38
- Full case study here

Security awareness training FAQs
How often should we train without causing fatigue?
Run a light course for onboarding, then refreshers of 5–10 minutes. Layer practice with monthly or quarterly simulated emails; adjust cadence when report rate plateaus rather than turning up volume.
What metrics matter beyond completion?
Prioritize report rate and time-to-report (TTR) for both real and simulated threats. Use a resilience view (reports : fails) for context. Keep LMS reports for audits, but brief leaders on behavior deltas and reduced cybersecurity risks.
Do we really need SCORM?
If you have an LMS, yes - publish SCORM-compliant lessons (ideally SCORM v2004) to capture user progress, records of course completions, and a certificate of completion. Then pair those artifacts with operational KPIs from your reporting pipeline.
Is spear-phishing simulation ethical?
Yes - when professional and transparent. Avoid humiliation or highly personal lures, keep scenarios work-relevant, and focus on coaching, not penalties.
We saw a spike in false positives after a campaign - bad sign?
It’s a good, temporary sign of user awareness. Reinforce what to report, keep a single button, refresh phishing templates, and add 60-second “why this was risky” landing pages to tune signal quality.
Glossary: quick definitions
- Adversary-in-the-middle (AiTM): Intercepts sessions (often MFA-backed) to steal tokens and replay sign-ins. Why it matters: raises the bar for “verify links only” coaching; train for out-of-band verification and device posture checks.
- Adversary-on-the-side (AotS): Observes traffic but can inject packets before the real server replies. Why it matters: explains “sudden reply” risks in thread workflows.
- Browser-in-the-browser (BitB): Fake OAuth/SSO pop-up framed inside a page to capture credentials. Why it matters: teach users to check the real browser address bar (not the modal).
- Email thread hijacking: Actor joins an existing chain (often via EAC) and drops malicious links/attachments mid-conversation. Why it matters: reporting habit must include suspicious replies, not just first-touch emails.
- Right-to-left override (RTLO): A Unicode control character that reverses how the end of a filename is displayed, hiding the real extension (e.g., an .exe that appears to end in .jpg). Why it matters: update attachment drills and mail-gateway rules. A short demo follows this glossary.
- Authority impersonation: Impersonates a “can’t-ignore” sender (CXO, government). Why it matters: build verify-and-callback SOPs; great scenario for simulated phishing.
- Business email compromise (BEC): Payment or data change via trusted-party impersonation; often paired with invoice manipulation. Why it matters: Finance/AP role training and vendor verification are non-negotiable.
- Email account compromise (EAC): Real mailbox taken over to send authentic-looking mail. Why it matters: detection shifts from domain checks to behavior anomalies and rapid report rate.
- Phishing kit / template: Prebuilt lures and pages sold as phishing-as-a-service, enabling at-scale campaigns. Why it matters: expect rapid reuse; rotate phishing templates in training to mirror kits.
- Secure email gateway (SEG): Mail filtering layer (spam/malware/auth checks). Why it matters: align SEG policies with training so gateway behavior doesn’t contradict user guidance.
- DMARC / DKIM / SPF: Email-auth trio that reduces spoofing, not social-engineering risk. Why it matters: keep coaching users even with DMARC “reject” - BEC/EAC still land.
- QR-code phishing (“quishing”): QR image routes to fake login or payment page; often used in delivery or “policy update” lures. Why it matters: add mobile URL-checking drills and scanner-bypass tests.
- Postal/courier impersonation: Delivery updates used as pretext (increasingly with QR). Why it matters: safe, relatable scenario for baseline simulations and micro-lessons.
- Vendor impersonation: Spoofs a supplier to change banking details or invoices. Why it matters: connect SAT with procurement controls and out-of-band verification.
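To make the RTLO entry above concrete, here is a tiny demonstration (the filename is invented):

```python
# Right-to-left override (RTLO) demo: the real extension is .exe, but bidi-aware
# displays render the text after the override reversed, so it can look like .pdf.
RLO = "\u202e"  # RIGHT-TO-LEFT OVERRIDE control character

actual_filename = "invoice_" + RLO + "fdp.exe"   # on disk this ends in .exe
print(repr(actual_filename))  # repr() exposes the hidden control character
print(actual_filename)        # may display as something like "invoice_exe.pdf"
```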
Report, coach, adapt - quick demo
See how Hoxhunt turns phishing simulations into daily habits: one report phishing path in Outlook/Gmail, instant teachable landing pages, adaptive micro-learning, and metrics that matter.
What you’ll do in 60 seconds
- Open a realistic simulated phishing email.
- Tap the Report phishing button (Outlook toolbar).
- See instant, friendly feedback on a teachable landing page.
- Watch how Hoxhunt's gamification keeps employees engaged.
Find out more about Hoxhunt's security awareness training here.
Sources
Building a Cybersecurity and Privacy Learning Program (NIST SP 800-50 Rev.1) - NIST, 2024
NIST Cybersecurity Framework 2.0 - NIST, 2024
Security and Privacy Controls (NIST SP 800-53 Rev.5, AT family) - NIST, 2020
CIS Control 14: Security Awareness & Skills Training - Center for Internet Security, 2025
Recognize and Report Phishing - CISA, 2025
Phishing attacks: defending your organisation - UK NCSC, 2025
Awareness & Cyber Hygiene (campaign hub) - ENISA, 2025
Security Guidance 22-001: Cybersecurity Mitigation Measures - CERT-EU, 2022
Get started with Attack simulation training - Microsoft Learn, 2025