8 Social Engineering Defense Strategies for 2025 (with Dr. Jessica Barker)

Stop deepfake, smishing & vishing scams with 8 proven tactics: Feel→Slow→Verify→Act, no approvals in live calls, out-of-band callbacks, and a reporting culture.

Updated September 19, 2025

This article is based on insights from Dr. Jessica Barker, MBE, recorded live on the All Things Human Risk Management Podcast, where you can hear the full conversation.

What is social engineering?

Social engineering is the manipulation of human trust to bypass technical defenses and gain access to sensitive data, systems, or money. Unlike malware or brute-force hacking, social engineering exploits psychology (fear, urgency, curiosity, or empathy) to trick people into taking unsafe actions. In 2025, social engineering has evolved beyond phishing emails into deepfake calls, AI-crafted scams, and multi-channel deception.

Why it matters in 2025

Dr. Jessica Barker emphasizes that “social engineering isn’t new, but it is more effective than ever because of how it plays on human emotions”. CISOs echo this frustration: even with strong technical safeguards, business email compromise and vishing bypass filters, leaving employees as the last line of defense.

Core tactics attackers use

  • Phishing emails & malicious links: still the most common breach entry point.
  • Pretexting & spoofed identities: attackers impersonate colleagues, suppliers, or executives.
  • Deepfake social engineering: cloned voices or video convincingly pressure targets.
  • Urgency & authority cues: “your account will be closed” or “the CEO needs this payment now.”

Why organizations struggle

Most employees already know what phishing is, but that doesn’t mean they act securely under pressure. Awareness training alone doesn’t translate into behavior change.

The 8 social engineering defense strategies (overview)

  1. Teach the universal reflex: Feel → Slow → Verify → Act: Coach people to notice emotional spikes, pause, then confirm via another channel before doing anything high-impact.
  2. Run empathy-first comms (not fear): Pair every risk story with what to do next; fear alone backfires and kills reporting.
  3. Ban approvals in live calls: No payments, banking changes, or access approvals inside Teams/Zoom; step out and follow the documented workflow.
  4. Require callbacks for sensitive requests: Never reply in-thread; call back on a known number or validate in a trusted system.
  5. Enforce two-person verification for money moves & bank changes: Separation of duties for finance changes and larger transfers, every time.
  6. Instrument one-click reporting everywhere: Make reporting obvious in email, chat, and mobile; cut the “too many buttons” confusion.
  7. Measure behavior that reduces risk: Prioritize report rate (and follow-up quality) over vanity completion metrics.
  8. Simulate beyond email (SMS, voice, video/deepfake): Close the gap where most orgs aren’t testing: smishing, vishing, and live-call imposters.

Strategy #1: Teach the universal reflex: Feel → Slow → Verify → Act

Make one habit universal across channels. When a message is unexpected, triggers emotion (fear, urgency, flattery), and asks for a high-impact action, users should notice the feeling, pause, verify via another channel, then act through policy.

Why this works (Dr. Barker's guidance)

  • “What we need to look out for is really how we feel… gone are the days [of] check[ing] emails for spelling or grammatical errors.”
  • “If you have a communication like that, [then] slow down and verify with the supposed sender via another means.”
  • “In the age of artificial intelligence, it’s our human intelligence - our emotional intelligence - that becomes so much more important.”

The reflex in four teachable steps

  1. Feel: Notice if you’re rushed, scared, panicked, or flattered; emotion = risk.
  2. Slow: Create space before clicking, paying, or sharing access.
  3. Verify: Call/text on a known number or use a trusted system; don’t reply in-thread.
  4. Act: Only after verification, follow the approved workflow (least-privilege, maker-checker, then report).

Rollout checklist

  • 1-page poster for email, chat, phone, and video - same rule everywhere.
  • No approvals inside live calls; require stepping out to the documented payment/access workflow.
  • One-click reporting across channels so SOC sees attempts quickly (optimize for report rate & time-to-report).

Strategy #2: Run empathy-first comms (not fear)

Fear backfires; empathy changes behavior. Dr. Jessica Barker argues that scaring people shuts them down, while pairing risks with clear, doable next steps drives action and faster reporting. Move beyond outdated practices (“check for typos”) to human, channel-agnostic habits that work under pressure.

Why this works (Dr. Barker's guidance)

  • “If we simply scare people, it doesn’t change their behavior… it’s more likely to backfire.” Tie every risk to an immediate action people can take.
  • “Gone are the days [of] check[ing] emails for spelling… hover over the link.” Teach strategic behaviors users can apply across email, chat, voice, and video.
  • Empathy and strong relationships speed up incident response. People report sooner when they feel safe and supported.

What to change in your comms

  • Tone: Replace blame with coaching. Thank reporters - even when they clicked - then guide the next step.
  • Message pattern: Story → risk → exact behavior to do now (e.g., “call back on a known number”).
  • Focus: Teach the Feel → Slow → Verify → Act reflex instead of traditional checklists around typos etc.

Playbook you can publish now

  • One-page guidance: “If it’s unexpected and emotional, pause and verify.”
  • Leader scripts for praise-on-report and “no approvals in live calls” reminders.
  • Micromodules (60–90s) that end with a single action (e.g., “Use known numbers for callbacks”).

How to measure success

  • Reporting culture KPIs: report rate and time-to-report trending up; punitive language trending down.
  • Quality of reports: more verifications logged; fewer approvals attempted inside live calls.

“In the age of artificial intelligence, it’s our human intelligence - our emotional intelligence - that becomes so much more important.”

Strategy #3: Ban approvals in live calls (Teams/Zoom)

Rule it out completely: Don’t approve payments, banking changes, access grants, or password/MFA resets inside a live video/voice call. Deepfaked executives and vendors can look and sound real; step out of the call and verify via another channel.

Why this is non-negotiable

  • Dr. Jessica Barker highlights a real case where a worker joined a Teams call and “was the only real person” - the CFO and others were deepfakes, leading to a ~$25M transfer.
  • Her guidance: when a request feels urgent or emotional, “slow down and verify with the supposed sender via another means.”

The policy (copy/paste)

  • No approvals in live calls. Money moves, banking updates, access grants, and resets must exit the call.
  • Out-of-band callback. Use a known number or trusted system to validate the request; never reply in-thread.
  • Record in system of record. Execute in ERP/ITSM with two-person verification; store artifacts outside chat.

Team playbook

  • Execs & finance: Agree a codeword/callback routine; rehearse it quarterly.
  • IT/helpdesk: Refuse live resets; require caller verification + ticketed workflow.

“What we need to look out for is really how we feel… then slow down and verify via another means.”

Strategy #4: Require out-of-band callbacks for any high-risk request

This means you confirm sensitive requests via a separate, trusted channel before acting - e.g., hang up a live call/Teams, then phone the requester using a number from your directory. This stops deepfake voice/video scams by forcing pause → verify → act and removes attacker control of the conversation.

Why this matters now

Attackers can clone a voice from a few seconds of audio using off-the-shelf tools - so “it sounds like the CFO” is not evidence. And because these scams intentionally create panic, people rush decisions - exactly when a callback rule saves you.

When to trigger a mandatory callback (examples)

  • Any payment change: bank details, new vendor, unusual amount, urgent wire.
  • Account/MFA resets or privilege escalations.
  • Sensitive data requests (customer, HR, legal).
  • Live-call approvals (Zoom/Teams) for money or access - never approve in-call.

Playbook

  • If request arrives via email/chat/voice: Stop → Create ticket → Callback via directory → Ask 3 specifics → Require the agreed codeword → Approve/deny → Log & report.
  • If on a video/voice call: Pause the meeting; state “policy requires an out-of-band callback”; end call; perform steps above.
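
To make the playbook above mechanical rather than negotiable, here is a minimal sketch of the callback gate in Python. The Request shape, DIRECTORY lookup, and function names are invented for illustration - the real control lives in your ticketing and directory systems, not a script.

```python
from dataclasses import dataclass

@dataclass
class Request:
    requester: str   # claimed identity, e.g. "cfo@example.com"
    channel: str     # "email", "chat", "voice", or "video"
    high_risk: bool  # payment change, MFA reset, sensitive data, ...

# Known-good numbers come from your directory - never from the message itself.
DIRECTORY = {"cfo@example.com": "+1-555-0100"}

def callback_number(req: Request) -> str:
    """Return the directory number to dial; escalate if it isn't listed."""
    number = DIRECTORY.get(req.requester)
    if number is None:
        raise LookupError("Requester not in directory - escalate, do not act")
    return number

def may_proceed(req: Request, callback_done: bool, codeword_ok: bool) -> bool:
    """High-risk requests proceed only after an out-of-band callback and the
    agreed codeword - never inside the original channel or thread."""
    if not req.high_risk:
        return True
    return callback_done and codeword_ok
```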

Design for emotion

Social engineers play a numbers game and manufacture urgency; your rule must be automatic so people aren’t negotiating while stressed.

“It’s an AI clone, but it sounds just like them… [Hearing that] is just horrific... to clone a voice, it’s a few seconds of somebody’s voice.”

Strategy #5: Enforce two-person checks for payments & bank changes

Require two independent approvers - and a callback to a verified number - before releasing funds or updating banking details. Deepfakes and urgent pretexts thrive on speed; a second human and a fresh channel add the pause and proof attackers can’t easily fake. A minimal sketch of this gate follows the list below.

What falls under “two-person check”

  • New beneficiaries or bank-detail changes (vendors, payroll, rebates).
  • Out-of-cycle or urgent wires (especially initiated over chat/voice/video).
  • High-value thresholds (set tiered limits by entity/currency).
  • Any request originating in live calls (Teams/Zoom/phone) → no approvals in-call; move to ticket + callback.
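
As a concrete illustration of the rule above, here is a minimal maker-checker sketch. The threshold values, the choice to require a third approver above the tier limit, and the function names are all invented for this example; in practice the gate belongs in your ERP/payment platform.

```python
TIER_LIMITS = {"USD": 10_000, "EUR": 10_000}  # invented values - tune per entity/currency

def release_allowed(amount: float, currency: str, approvers: list[str],
                    callback_verified: bool) -> bool:
    """Funds move or bank details change only after a verified callback,
    with two distinct approvers every time - three above the tier limit."""
    if not callback_verified:
        return False                               # no callback, no release
    required = 3 if amount > TIER_LIMITS.get(currency, 0) else 2
    return len(set(approvers)) >= required         # separation of duties

# A "CEO says do it now" DM enters the same path - no exceptions:
assert not release_allowed(50_000, "USD", ["cfo@example.com"],
                           callback_verified=False)
```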

Why this works now

Attackers can convincingly impersonate executives in voice/video with seconds of audio, making single-person approvals brittle. As Dr. Jessica Barker notes, “it’s a very… small amount of data that attackers need” to clone a voice, and sites make it “pretty realistic” fast.

Anti-bypass guardrails

  • No exceptions via DM: any “CEO says do it now” must enter the same ticket + callback path.
  • Pre-approved fallback contacts: if the requester is unreachable, use the alt approver list - never the number provided in the message.
  • Log & review: monthly audit of vendor-change callbacks and dual-approval outliers.

Dr. Barker's lens (why empathy matters): Many scams are ultimately about money and play a numbers game - your process should remove speed as the attacker’s advantage without shaming staff who get fooled by convincing voices.

Strategy #6: Instrument one-click reporting everywhere

Make reporting the easiest action in your security program. Add a single, consistent “Report” control; route every submission to the SOC with auto-feedback and ticketing. Faster, shame-free reporting stops active attacks - unreported incidents do the most harm, especially after a mistaken click.

Why this matters

  • Speed > perfection. If someone clicks, your next best outcome is immediate reporting so the SOC can contain damage.
  • Reduce choice paralysis. Leaders say users are confused by “too many buttons” and unclear paths; unify to one obvious control.
  • Culture accelerates response. Jessica Barker emphasizes that empathy and strong relationships make incidents easier to handle - people report sooner when they feel safe.

Design patterns to implement

  • One button, all channels. Outlook/Gmail add-in + Slack/Teams message action + mobile share-target → single SOC inbox/workflow.
  • Auto-feedback. Immediate “thanks, we’re on it” with lightweight coaching; avoids silence that discourages future reports.
  • Ticketing by default. Every report creates a case with artifacts (headers, URLs, voicemail). SOC can label real vs. benign and reply with guidance.
  • Everywhere training. In onboarding and micro-modules, show exactly where the button lives across devices; remove duplicate pathways.
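
As a sketch of the “one button, all channels” pattern above, the snippet below funnels every report into a single case with artifacts and an instant thank-you. The queue and ack function are stand-ins for your real ticketing and messaging systems, not Hoxhunt’s actual API.

```python
import uuid
from datetime import datetime, timezone

soc_queue: list[dict] = []  # stand-in for a real ticketing system

def send_ack(reporter: str, message: str) -> None:
    print(f"to {reporter}: {message}")  # stand-in for email/chat auto-feedback

def handle_report(channel: str, reporter: str, artifacts: dict) -> dict:
    """One intake for every channel: open a case, keep the evidence,
    and thank the reporter immediately so they report again next time."""
    case = {
        "id": str(uuid.uuid4()),
        "opened": datetime.now(timezone.utc).isoformat(),
        "channel": channel,      # "email" | "chat" | "sms" | "voice"
        "reporter": reporter,
        "artifacts": artifacts,  # headers, URLs, voicemail recording, ...
        "status": "triage",      # SOC labels real vs. benign later
    }
    soc_queue.append(case)
    send_ack(reporter, "Thanks - we're on it. We'll follow up with guidance.")
    return case
```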

What to measure (behavior, not vanity)

  • Report rate (% of users who report a suspect item in a period).
  • Time-to-Report (TTR) from first receipt to first report - aim for minutes, not hours. (Aligns with Jessica’s culture-speed theme.)
  • Quality of reports (percent “true positive” or actionable intel).
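
Computing these from a report log is simple enough to automate from day one. Here is a minimal sketch with an invented event shape (timestamps reduced to minutes for brevity):

```python
from statistics import median

# Invented event log: one entry per recipient of a suspect item.
events = [
    {"user": "a", "received_min": 0, "reported_min": 4},
    {"user": "b", "received_min": 0, "reported_min": None},  # never reported
    {"user": "c", "received_min": 0, "reported_min": 12},
]

def report_rate(events: list[dict]) -> float:
    """Percentage of recipients who reported the item."""
    reported = sum(e["reported_min"] is not None for e in events)
    return 100 * reported / len(events)

def median_ttr(events: list[dict]) -> float:
    """Median minutes from first receipt to first report (reporters only)."""
    return median(e["reported_min"] - e["received_min"]
                  for e in events if e["reported_min"] is not None)

print(f"report rate {report_rate(events):.0f}%, median TTR {median_ttr(events)} min")
```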

Policy & comms you can ship this week

  • Standardize the message: “Clicked? Report immediately - we can still help.” (Reinforces that early reporting limits harm.)
  • Celebratory tone: Publicly thank reporters (even after mistakes) to normalize help-seeking and speed.

Bottom line: Eliminate friction and people will report more, faster - and that’s the behavior that actually reduces risk.

Below you can see what Hoxhunt's single, unified reporting button looks like.

Social engineering defense - single reporting button

Strategy #7: Measure behavior, not vanity

Ditch vanity metrics like completion rate and standalone click rate. Track behaviors that cut risk: report rate, time-to-report (TTR), and first-report dwell time on real threats. Pair these with small experiments and show outcome deltas to your board.

What to stop over-weighting (and why)

  • Completion rate → measures input, not outcome; creates a false sense of security if reported alone.
  • Click rate in isolation → easy to game with template difficulty; focuses on failure, not protective action.

The behavior metrics that matter

📊 Metrics That Matter for Social Engineering Defense

Use these four metrics to track real behavior change and reduce human cyber risk.

  • Simulated dwell time: Minutes from opening a simulated phishing email to reporting it. Builds the “see it → report it” reflex in training. Target: trend down over time (median).
  • Simulated threat reporting: % of users who report a training phish (per campaign). Measures the habit you want in the real world. Target: trend up month-over-month.
  • Real dwell time: Minutes from a real phishing email reaching the inbox to the first user report. A direct signal for faster detection/containment. Target: trend down (median; break out by role/region).
  • Real threat detection: Number of real phishing emails reported by employees (per month). Proves training translates to real protection. Target: trend up and diversify by channel (email/SMS).

Prove impact with small experiments

Run an opt-in pilot, then A/B a standardized phish across pilot vs. control. Hoxhunt programs see users 6× less likely to click and 7× more likely to report post-intervention - proof that moves execs.

Executive storytelling (so the board leans in)

Present what they care about in business language: pair a few crisp metrics with one concrete story (e.g., “callbacks stopped a fraudulent vendor change last quarter”), not a wall of stats.

Bottom line: If a metric doesn’t show faster detection or fewer irreversible actions, it’s vanity. Shift scorecards to behaviors users perform and controls they follow - then celebrate those gains loudly.

Social engineering defense metrics

Strategy #8: Simulate beyond email (SMS, voice, deepfake live calls)

Most orgs still test only email - but real breaches now start in SMS, voice, and live video. Close the gap with smishing, vishing, and deepfake simulations so your “Feel → Slow → Verify → Act” reflex works everywhere. CISOs we speak to at Hoxhunt report high concern but low testing rates in these channels - fix that disparity.

Why go beyond email

  • 71% of CISOs are concerned about smishing and 59% about vishing, yet only 27% simulate SMS and 15% simulate voice - a clear action gap.
  • Many orgs don’t practice for encrypted-messenger attacks even though 64% have been hit via those channels.
  • Attackers exploit phones, messaging apps, and deepfakes - not just email - so training must match the multi-channel reality.

What to simulate (and how)

  • Smishing: delivery or payroll texts, MFA reset prompts, encrypted-chat “urgent” DMs. See Hoxhunt's smishing training covered here.
  • Vishing: scripted IT/vendor pretexts that request password/MFA help; require caller verification and ticketed workflow to pass.
  • Deepfakes: rehearse a live “CFO on Teams” wire request so teams practice pause → verify under stress.

Cadence & realism

  • Run monthly micro-scenarios rotating channels.
  • Localize pretexts (suppliers, payroll cycles) so drills feel like real workflows, not “gotchas.”
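
A fixed rotation keeps the cadence above honest. The sketch below cycles channels month by month; the channel names and dates are placeholders.

```python
from itertools import cycle

channels = cycle(["smishing", "vishing", "deepfake live call", "email"])
plan = {f"2025-{month:02d}": next(channels) for month in range(1, 13)}
# {'2025-01': 'smishing', '2025-02': 'vishing', '2025-03': 'deepfake live call', ...}
```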

Guardrails to avoid blowback

  • Keep drills non-punitive; reward fast reporting. (Barker’s empathy-first framing improves response speed.)
  • Debrief with one actionable step (“verify via another channel”) instead of simple checklists.

Below you can see a preview of Hoxhunt's deepfake simulation training. Whether it’s a fake CEO video call or a voice clone demanding action, Hoxhunt trains employees to spot AI-powered attacks and stop multimillion-dollar scams. More on this further down 👇.

Common types of social engineering attacks in 2025

Social engineering now spans far beyond email. In 2025, attackers blend AI-crafted messages, deepfake voice/video, and multi-channel pretexts to push instant, high-risk actions (payments, credentials, MFA approvals). Expect convincing outreach over email, SMS, chat, and live calls - often escalating pressure to bypass checks.

Dr. Jessica Barker’s lens: attackers target emotion, not tools - fear, urgency, and flattery cloud judgment, which is why “check for typos” advice isn’t enough in the deepfake era.
[Table: Common Social Engineering Attacks (2025) - what they look like and how to respond, by attack type, channels, typical pretext & tactics, and immediate response/policy.]

Tip: Teach a channel-agnostic rule: Feel → Slow → Verify → Act.

Why does social engineering work?

Because it hijacks emotion and timing. Attackers manufacture urgency, fear, or flattery to cloud judgment and push instant actions (click, pay, share). AI has made lures more polished, but the core defense is human: notice how a message makes you feel, slow down, and verify via another channel.

What actually trips people up

  • Emotion over analysis: “What we need to look out for is really how we feel.”
  • Old tips aren’t enough: “Gone are the days where we tell people to… check emails for spelling or grammatical errors.”
  • AI raises the stakes: Deepfakes and GenAI make messages look and sound real; e.g., high-profile cases of deepfaked executives on video calls coercing wire transfers.

Practical detection you can teach (and measure)

  • Emotional trigger test: Unexpected + emotional + action request = high risk. Pause and verify (known number/chat).
  • Gut-check is a feature, not a bug: “In the age of artificial intelligence, it’s our human intelligence - our emotional intelligence - that becomes so much more important.”
  • Proactive defense cues: Coach for slowdown & callback, add multi-person approvals for payments, and track time-to-report as a leading indicator.

Hoxhunt: deepfake defense + full-stack human risk platform

Hoxhunt is a Human Risk Management platform that goes beyond security awareness to drive behavior change and measurably lower risk. Data breaches start with people, so Hoxhunt does too. It combines AI and behavioral science to create individualized micro-training experiences people love.

Deepfake simulations you can run today

  • Exec-imposter on Teams/Meet/Zoom: A fully customized scenario that can (with consent) clone an executive’s likeness and voice and play out inside a realistic video meeting UI your people already use.
  • Why it matters: We built this because deepfake scams are moving from headlines to everyday risk; our live-call training conditions teams to pause, verify out-of-band, and refuse in-call approvals.

Broader coverage beyond email

  • Vishing & smishing drills: Phone and SMS simulations that mirror multi-channel pretexts (IT reset, vendor change) so your reflex works outside the inbox.
  • Adaptive phishing training: AI-powered, role-aware simulations with micro-coaching to drive reporting behavior over time.

SOC outcomes (not vanity metrics)

  • Threat Analyst Agent: Converts every user-reported phish into enriched intel and returns SOC-level feedback to the reporter in seconds - closing the loop and encouraging more/faster reporting.
  • Search & Destroy: Zero-click removal sweeps matching threats after the first credible report - shrinking dwell time across the tenant.

Social engineering defense FAQ

How do we reduce the attack surface for social engineering?

  • Tighten security policies around payments, account resets, and data sharing (no approvals in live calls; mandatory callbacks).
  • Limit public exposure of private and privileged information (job titles, org charts, vendor contacts) - the less that’s public, the fewer believable pretexts.
  • Lock down Voice over Internet Protocol (VoIP) and voice communication routes (caller verification, recorded lines for resets).
  • Remove malicious websites/domains with takedown workflows; block newly registered look-alikes (see the sketch after this list).
  • Standardize defense across domains (email, chat, SMS, voice, video) so the same rules apply everywhere.
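
For the look-alike point above, even a basic edit-distance screen catches digit-for-letter swaps. This is a sketch only - production monitoring should use dedicated domain-intelligence feeds - and the domain names below are placeholders.

```python
BRAND_DOMAINS = {"example.com", "example-pay.com"}  # your real domains here

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_alike(candidate: str, max_distance: int = 2) -> bool:
    """Flag newly registered domains within a small edit distance of ours."""
    return any(edit_distance(candidate, brand) <= max_distance
               for brand in BRAND_DOMAINS)

print(looks_alike("examp1e.com"))  # True - digit-for-letter swap
```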

How do we spot deepfake social engineering over voice/video?

Assume voices and faces can be cloned. Use defense mechanisms that don’t rely on “recognition”:

  • Verify out-of-band on a known number; never approve inside the call.
  • Use challenge/response (codewords, ticket IDs) and require a second approver.
  • Pair process with tech: caller verification for helpdesk, MFA number-matching, and meeting-join controls.
  • Treat anything urgent over voice/VoIP as high-risk until verified.

What vectors matter beyond email-based attacks?

  • SMS/IM (smishing), collaboration chat, and voice communication (vishing).
  • Live video calls with executive imposters.
  • Malicious websites delivered via QR codes or “callback” scams.
    A solid attack taxonomy for execs: Phishing (email), Smishing (SMS/IM), Vishing (voice), Deepfake Impersonation (voice/video), MFA-fatigue, and Callback/QR scams.

How should we balance human-based vs technology-based mitigation?

  • Human-based mitigation: train one universal reflex (Feel → Slow → Verify → Act), reward fast reporting, run tabletops.
  • Technology-based mitigation: one-click report in every channel, enforced maker-checker in ERP/ITSM, rate-limited MFA, domain/URL scanning, caller verification for helpdesk.

Both halves support proactive defense when you measure behavior (report rate, time-to-report) and fix bottlenecks.

Sources

CISOs Face Widening Gaps in Defending Multi-Channel Social Engineering Threats - IssueWire, 2025
NSA, FBI & CISA Release Cybersecurity Info Sheet: Deepfake Threats - CISA, September 12, 2023
Hong Kong Deepfake CFO Scam (video call wire fraud) - CNN, February 4, 2024
Business Email Compromise (BEC) - FBI
Avoiding Social Engineering and Phishing Attacks - CISA
