In this episode, we break down how to use real cyber incidents ethically, without fear-mongering or victim-blaming.
Using real breach stories in security training works... but only if you do it ethically.
In this episode of All Things Human Risk Management, host Noora Ahmed-Moshe is joined by David Badanes (Human Risk Management advisor) to explore how security teams can use real cyber incidents to drive behavior change without crossing into fear-mongering, victim-blaming, or loss of trust.
They break down why real incidents outperform generic examples, where awareness programs most often go wrong, how to decide whether a public breach is appropriate to use, and how to operationalize ethical review even with limited time and resources.
Timestamps:
(00:00) Why use real breach stories in security awareness training at all?
(00:15) How do real incidents change behavior better than generic warnings?
(01:18) Why ethical security storytelling matters now
(02:21) Why do real breach stories work better than fictional examples?
(03:40) What are the ethical risks of using real cyber incidents in training?
(05:03) What does ethical security storytelling actually look like?
(08:27) How should awareness managers choose what parts of a breach to include?
(09:24) How do you operationalize ethical review with limited time and resources?
(27:10) How does culture change what’s considered ethical security storytelling?
(31:36) What good ethical storytelling achieves — and what it avoids
Links:
Noora Ahmed-Moshe: https://www.linkedin.com/in/nooraahmedmoshe
David Badanes: https://www.linkedin.com/in/dbadanes
In this episode of All Things Human Risk Management, host Noora Ahmed-Moshe is joined by David Badanes to tackle one of the most sensitive tools in security awareness: using real breach stories in training.
Real incidents make risk feel real. They cut through complacency and drive behavior change. But handled poorly, they can just as easily erode trust, shame employees, or turn learning into fear. This conversation breaks down where that line is, why so many programs cross it unintentionally, and how awareness teams can use real breaches responsibly and effectively.
Generic warnings tell people to “be careful.” Real stories show them what careful actually looks like. Concrete incidents create empathy, context, and memorability. But without guardrails, they also risk victim-blaming, fear-mongering, and reputational harm.
“Real stories create empathy - unless you turn them into judgment.”
Being ethical doesn’t mean watering stories down or avoiding reality. It means choosing examples that teach behavior, not punishment. The focus should be on decision points, not personal failure, and on learning, not spectacle.
“Public doesn’t mean appropriate.”
Many programs unintentionally cross ethical lines by naming individuals, overemphasizing consequences, or drowning people in technical detail. These approaches reduce trust and discourage reporting - the opposite of what awareness programs are meant to do.
“If people feel judged, they stop raising their hand.”
Effective stories work backward from the incident to the moment where a different choice could have changed the outcome. That’s where learning happens — not in CVE numbers, malware names, or blame.
“Behavior changes at forks in the road, not in postmortems.”
Just because a breach is widely reported doesn’t mean it’s fair to use internally. Ongoing investigations, named individuals, and unresolved impact all matter. Publicity is where ethical review begins, not where it ends.
“Availability is the start of the ethical analysis, not the end.”
Fear-based messaging may get attention, but it doesn’t build lasting security habits. Empathy helps employees see themselves in the story and recognize how easily anyone could be targeted.
“You want people thinking, ‘That could be me,’ not ‘I’d never be that stupid.’”
When handled carefully and with alignment from legal and communications teams, internal near-misses can be more effective than headline breaches. They’re relevant, credible, and directly connected to employees’ daily work.
“Nothing resonates like something that almost happened here.”
Long lectures and one-off campaigns don’t change behavior. Micro-learning, scenarios, and decision-based exercises help people practice what to do when it matters - especially in moments of uncertainty.
“People don’t remember slides. They remember choices.”
Ethical storytelling doesn’t require heavy bureaucracy. A simple pause moment, a second perspective, and a short checklist can prevent harm while preserving speed and relevance.
“This isn’t red tape - it’s ethics by design.”
What feels educational in one culture can feel humiliating in another. Global awareness programs need sensitivity to tone, humor, and social norms and must test stories before scaling them.
“The same story can land very differently depending on where you tell it.”
The real signal isn’t whether people enjoyed the story - it’s whether behavior changes. Faster reporting, better decisions, and reduced impact are the outcomes that matter.
“Trust shows up in what people do next.”
Noora:
Hello, my name is Noora Ahmed-Moshe. I am the VP of Strategy and Operations at Hoxhunt. Welcome back to our All Things Human Risk Management podcast.
Today we’re talking about something that is both fascinating and, for many awareness teams, a little uncomfortable: using real breach stories in security training.
Why would you do that? Because real incidents work. They make threats feel concrete, and they cut through the “this would never happen to me” mentality. But there’s also a very thin line between being memorable and being harmful.
We never want fear-mongering, victim-blaming, or humiliating people who make mistakes. And we never want to use another company’s worst day for entertainment. As we know, breaches can have serious consequences for employees and organizations. If we get this wrong, we don’t just miss the learning opportunity — we lose trust.
To help us navigate this topic, I’m joined today by David Badanes. David brings years of hands-on cybersecurity experience and deep expertise in human risk management. Today, we’ll be turning that experience into a practical playbook for ethical security storytelling — how to choose examples responsibly, how to tell the story with context and respect, and how to turn incidents into training that actually sticks.
David, a very warm welcome to the show.
David:
Thank you, Noora. I’m really glad to be here. This is a challenging topic for anyone working in cyber awareness and human risk management. Those of us in this field are constantly trying to figure out where that line is — how to share stories that truly make an impact without crossing an ethical boundary.
It’s a fascinating and important issue, and I’m excited to talk about it with your audience.
Noora:
Let’s start with the basics. When we think about telling real breach stories, why do you think they’re so effective? Why do they work better than generic or made-up examples?
David:
It really comes down to how we’re wired as humans. We understand and remember concrete experiences far better than abstract concepts.
If you say, “Social engineering attacks are increasing,” that’s abstract. But if you say, “An attacker called pretending to be IT and asked for a password reset,” suddenly people can picture it. They can imagine themselves in that situation.
In the book Made to Stick by Chip and Dan Heath, one of the core ideas is that concrete, sensory details are what make ideas memorable. Real stories give us those details.
I often think about an episode of The Office called “Stress Relief.” Michael Scott gives a boring safety presentation with bullet points, and nobody pays attention. Then Dwight stages a fire drill by setting an actual fire. It’s chaotic and goes terribly wrong, but everyone remembers how to evacuate the building. That’s the difference between passive information and a memorable experience that actually teaches a lesson.
Noora:
That makes a lot of sense. But clearly, there are risks. Where do you see things going wrong when teams use real breach stories?
David:
There are several risks, and some of them are serious.
One is crossing into victim-shaming or fear-based messaging. Another is using examples that are ethically questionable or emotionally harmful.
During COVID-19, for example, many organizations — including one I supported — chose not to run phishing simulations around contact tracing or vaccine access. Those lures would have been realistic, but we decided that employee safety and trust were more important than measuring susceptibility.
There’s also the “ambulance chaser” problem. An attack happens somewhere else, and suddenly vendors or teams use it as a scare tactic: “Have you seen this breach? It could happen to you.” That kind of approach exploits someone else’s crisis instead of helping people learn.
Noora:
Most awareness managers would say they want to do this ethically — but that’s easier said than done. If you had to describe ethical security storytelling in one sentence, how would you define it?
David:
One sentence is tough, but I’ll try.
Ethical security storytelling means using real stories to show people what being careful actually looks like, while making sure you don’t become just like the adversary by crossing ethical boundaries yourself.
More practically, it means avoiding victim-blaming, avoiding humiliation, and building empathy. You want employees to think, “That could have been me,” not, “I would never do something that stupid.”
You also want to avoid overwhelming people with technical details they can’t act on. This isn’t about CVE numbers or indicators of compromise. It’s about behavior change.
The focus should be on the decision point: what did the person see, what choice did they make, and what could someone do differently next time? And finally, this can’t be one-and-done. Ethical storytelling reinforces lessons over time.
Noora:
If a breach is public and widely reported, does that automatically make it fair to use in training?
David:
Not necessarily. Public just means accessible — it doesn’t mean appropriate.
You need to ask additional questions. Is there an ongoing investigation? Could individuals still be identified or harassed? Are you adding learning value, or just amplifying someone’s worst day?
For example, there might be a lawsuit that publicly names an IT administrator whose credentials were stolen. That information might be public, but using it in training could cause real harm to that individual.
On the other hand, a journalist-written case study that anonymizes individuals and focuses on attacker tactics may be appropriate. Journalists have already made ethical editorial decisions.
A good rule of thumb is this: if the story were about your own organization, would you be comfortable with it being used this way?
Noora:
Let’s get practical. For an awareness manager planning to use a real breach story, how do they decide what parts of the story are actually useful?
David:
You start by defining the behavior you want to change. If you can’t describe that behavior in one clear sentence, you’re not ready to use the story.
Ask yourself: does this story actually illustrate that behavior? Is the timing appropriate? Is anyone likely to be harmed by telling it?
A great example is Barbara Corcoran, who publicly shared how her company lost $400,000 to a business email compromise scam. She chose to tell the story herself. She’s a sophisticated business leader, and her story challenges the idea that only naïve people fall for these attacks. She models the right response without shame.
Internal near-misses can be even more powerful, if your legal and communications teams are aligned. They’re real, relevant, and deeply resonant for employees.
Noora:
How do we give enough context to explain how attacks work without turning it into a technical lecture or blaming the victim?
David:
You focus on facts and decision points.
What type of attack was it? How did it arrive? What did the target see at that moment? Were there red flags that were easy to miss?
You focus on organizational impact, not personal punishment. You omit names. You avoid humiliation and speculation about motives.
Work backward from the breach to that fork-in-the-road moment and translate it into simple “if-then” guidance people can actually use in their own work.
Noora:
What formats work best for delivering this kind of learning?
David:
Research consistently shows that micro-learning works better than long, compliance-style training.
Small, frequent learning moments are more effective than quarterly lectures. And active learning — where people have to make decisions — is far more effective than passive learning.
That can mean phishing simulations, short scenarios, discussions, podcasts, or quick refreshers. The key is meeting people where they already are and letting them practice decision-making.
Noora:
How would you apply this approach to something like business email compromise training?
David:
I like a choose-your-own-adventure approach.
You show the email and ask people what they would do. Do they report it? Do they reply? Do they verify out-of-band?
Then you walk through the scenario together. You explain what the attacker was trying to do, what signals were present, and what a safer response would have been. That kind of active engagement builds real understanding.
Noora:
Many awareness teams are under-resourced. How can they operationalize ethical storytelling without heavy process?
David:
You need a lightweight pause moment.
Ideally, you involve at least one other perspective — HR, communications, or legal. Not to create bureaucracy, but to catch blind spots.
A simple checklist helps: What behavior are we targeting? Who could be harmed? Would we be comfortable if this story were about us?
This isn’t bureaucracy for its own sake. It’s ethics by design.
Noora:
How does culture affect what’s considered ethical storytelling?
David:
Culture matters a lot. Humor, tone, and examples don’t translate the same way everywhere.
A story that feels educational in one culture can feel humiliating in another. Global programs need sensitivity, testing, and feedback to make sure messages land as intended.
Noora:
Final question. How do teams know if this approach is actually working?
David:
You start with reaction and learning, but you can’t stop there.
The real goal is behavior change and results. Are people reporting faster? Are they making better decisions? Are incidents being caught earlier or causing less damage?
Ethical storytelling works when it builds trust, empathy, and better outcomes — not fear.
Noora:
David, thank you so much for joining us.
David:
Thank you, Noora.