Deepfake Attacks: How to Keep Your Business Safe (+ Examples)

Your ultimate guide to deepfake attacks to keep your organization safe. Includes video examples and case studies.


Updated
October 25, 2024
Written by
Maxime Cartier
What are deepfakes?

Deepfakes are AI-generated media that manipulate an individual's appearance, voice, or actions to create highly realistic but entirely fake videos or audio.

Using machine learning techniques like deep learning and neural networks, deepfakes can mimic real people with startling accuracy, making it difficult to distinguish between what’s real and what’s fabricated.

What are deepfakes used for?

Deepfakes are usually made using AI (particularly deep learning algorithms) to swap faces, mimic voices, or even create entirely new and convincing content that appears to feature real people.

Deepfakes present a significant threat to organizations.

Illicit actors can use deepfakes to impersonate executives, manipulate financial transactions, or create damaging fake news that could harm your company's reputation.

Deepfakes are not inherently bad.

Their impact depends on how they are used.

While deepfakes are often used by threat actors for scams, misinformation, or impersonation, they also have positive and creative applications.

Legitimate uses of deepfakes:

  • Entertainment: Deepfakes are used to create realistic special effects in movies, TV shows, and video games.
  • Training and education: Deepfakes can be used in virtual training simulations in industries like medicine, law enforcement, or military operations.
  • Advertising and marketing: Brands sometimes use deepfake technology to create personalized ads by altering videos to fit the target audience.

Malicious uses of deepfakes:

  • Fraud and scams: Cybercriminals use deepfakes to impersonate business executives or employees in Business Email Compromise (BEC) schemes. One example is when scammers used a deepfake voice to impersonate a CEO and steal $243,000 from a UK energy firm.
  • Misinformation: Deepfakes can spread false information, making it difficult to discern between real news and manipulated content.
  • Identity theft: Deepfakes can be used to create fraudulent identities or bypass biometric security systems by mimicking real people.

How does deepfake technology actually work?

When it comes to protecting your organization, getting your head around how deepfake technology works is critical for developing effective defenses.

Encoder-decoder architecture

The process starts with the encoder, which is a neural network that compresses the input data (e.g., a face image) into a smaller, more manageable representation known as a latent space.

This compressed representation contains the most critical features of the face, such as its shape, structure, and texture, without the original high-dimensional details.

The latent space is a lower-dimensional representation of the input data, capturing essential features like facial expressions, angles, and lighting in a compressed form.

This space is crucial because it allows the model to efficiently manipulate and transform these features to generate new outputs.

After encoding, the decoder takes the latent space representation and reconstructs it into a full image.

For deepfakes, the decoder is trained to take the latent representation of one face and generate the face of another person, often swapping features between the two.
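To make the compress-then-reconstruct flow above concrete, here is a minimal sketch in Python/NumPy. It is a toy linear autoencoder, not a real deepfake model: the 64-dimensional "face" vectors, the 8-dimensional latent space, and the random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" inputs: 64-dimensional vectors standing in for face images
faces = rng.normal(size=(10, 64))

# Encoder: projects the 64-d input down to an 8-d latent space
W_enc = rng.normal(size=(64, 8)) * 0.1
# Decoder: reconstructs a 64-d output from the 8-d latent vector
W_dec = rng.normal(size=(8, 64)) * 0.1

latent = faces @ W_enc          # compressed representation, shape (10, 8)
reconstructed = latent @ W_dec  # reconstruction, shape (10, 64)

print(latent.shape, reconstructed.shape)
```

In a real deepfake pipeline, both weights are learned so that reconstruction error is minimized, and the decoder can be swapped for one trained on a different person's face, which is what makes the face swap possible.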

Generative adversarial networks (GANs)

Deepfake technology primarily relies on artificial intelligence.

More specifically, a type of deep learning called Generative Adversarial Networks (GANs).

GANs consist of two neural networks:

  • Generator: This network generates fake data (such as images or video frames) that resembles real data. Its goal is to create content so realistic that the Discriminator cannot tell it apart from actual data.
  • Discriminator: This network evaluates the data and tries to distinguish between real and fake content. It essentially acts as a critic, judging whether the content produced by the Generator is authentic or fake.

During training, the Generator and Discriminator are essentially pitted against each other.

The Generator attempts to create more convincing fakes...

And the Discriminator gets better at detecting them.

Over time, this adversarial process improves the quality of the generated content, resulting in highly realistic deepfakes.

The generator becomes increasingly adept at producing highly realistic images or videos that the discriminator struggles to distinguish from real content.
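The adversarial loop described above can be sketched as a toy one-dimensional GAN in NumPy. Everything here is a simplifying assumption for illustration: real deepfake systems use deep convolutional networks, not the single-parameter linear Generator and logistic Discriminator below.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data the Generator tries to imitate: scalars drawn from N(4, 0.5)
def sample_real(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: maps noise z to a fake sample, x = w_g * z + b_g
w_g, b_g = 0.1, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d), the probability x is real
w_d, b_d = 0.1, 0.0
lr = 0.01

for step in range(2000):
    z = rng.normal(size=32)
    fake = w_g * z + b_g
    real = sample_real(32)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    b_d += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: push D(fake) toward 1, i.e. fool the critic
    p_fake = sigmoid(w_d * fake + b_d)
    dx = (1 - p_fake) * w_d      # gradient of log D(fake) w.r.t. the fake sample
    w_g += lr * np.mean(dx * z)
    b_g += lr * np.mean(dx)

print(f"generated mean: {np.mean(w_g * rng.normal(size=1000) + b_g):.2f}")
```

Over the training loop the Generator's output distribution drifts toward the real data, which is the same dynamic that, at vastly larger scale, produces convincing fake faces and voices.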

The AI is trained using datasets

The GAN is trained on vast datasets of real images, audio, or video, depending on the type of deepfake being created.

If you wanted to create a deepfake video of a person, for example, the GAN would be trained on thousands of images or videos of that person's face, capturing various angles, expressions, and lighting conditions.

The quality of a deepfake will generally depend on the amount and quality of the training data.

The more diverse and comprehensive the datasets are, the more convincing the deepfake will be.

As the GAN trains, it iterates on the deepfakes being produced over and over.

This process will go on until the Generator produces content that is virtually indistinguishable from real content.

And these datasets are then used to generate deepfakes

Once the GAN is trained, it can generate deepfake content.

In a deepfake video, the trained Generator can manipulate a video by altering the subject’s facial expressions, syncing their lips to different audio, or even replacing their face entirely with someone else's.

The result is a video that looks authentic but has been artificially created or manipulated.

GANs are typically used in a couple of different ways:

  • Face swapping: The Generator replaces the face of a person in a video with the face of another person, maintaining natural expressions and movements.
  • Voice cloning: GANs can also be used to create deepfake audio by cloning a person’s voice. The model is trained on recordings of the person speaking, learning to mimic their tone, pitch, and speech patterns.

How do deepfake scams work?

Deepfake attacks examples

Hong Kong Bank scandal (2024)

What happened?

This was the first widely reported case of a deepfake scam conducted over a live video call.

Criminals used AI to mimic the voice and face of a company's CFO and convinced a Hong Kong bank employee to transfer funds to their account.

The scam was so convincing that it went unnoticed until the money had already been transferred.

Costs

$25 million was lost in the deepfake video call scam.

CEO voice impersonation scam (2019)

What happened?

In 2019, cybercriminals used deepfake technology to impersonate the voice of a German CEO in a phone call.

The attackers mimicked the executive’s voice, instructing the company's UK subsidiary to transfer money to a Hungarian supplier.

The employee, convinced they were speaking to their CEO, complied with the request.

Costs

The company lost $243,000 in the scam.

Although the funds were quickly transferred to a third party, this attack drew attention to the emerging threat of voice deepfakes and their potential to cause significant financial damage.

Social media deepfake fraud (2020)

What happened?

In 2020, a deepfake video of a well-known business executive endorsing a fraudulent investment scheme went viral on social media.

The video was highly convincing and led to numerous individuals investing in the fake scheme, thinking it was a legitimate opportunity backed by the executive.

Costs

The scam caused millions of dollars in losses to investors, and the reputational damage to the company was significant.

The exact quantified cost to the business was not disclosed, but it was a major hit to trust and public perception.

Deepfake audio scam on energy firm (2020)

What happened?

A deepfake audio was used to trick an executive into believing they were talking to their boss.

The scammer, using deepfake audio, requested a transfer to a third-party account, which was initiated but later flagged and halted before completion.

Costs

While the financial loss was mitigated, the attempted transfer amounted to $35 million.

Why are deepfakes becoming a more serious threat?

Deep learning models and AI rely on being fed data.

And over time, the amount of data they've ingested has grown exponentially.

[Chart: global data generated annually]

With more data and more investment, deepfakes are now more convincing than ever before.

Deepfake technology is also faster, cheaper, and easier to get hold of.

This means more deepfake attacks...

As well as more advanced deepfake technology that is now beginning to outpace our own ability to understand it.

Social engineering has - until now - been largely email-based.

However, you should expect to start seeing more deepfakes being used as the technology becomes more readily available.

Deepfake statistics

  • 70% of people say they aren’t confident that they can tell the difference between a real and cloned voice
  • 53% of people share their voices online or via recorded notes at least once a week
  • Searches for “free voice cloning software” rose 120% over the past year
  • Three seconds of audio is sometimes all that’s needed to produce an 85% voice match.
  • CEO fraud targets at least 400 companies per day.

Tactics used in deepfake scams

Voice impersonation for fraudulent transactions

Attackers use deepfake audio technology to mimic the voices of CEOs or senior executives.

This is often used in Business Email Compromise (BEC) scenarios where employees are tricked into authorizing large wire transfers or sharing sensitive information.

Video manipulation for disinformation campaigns

Deepfake videos are created to impersonate business leaders or public figures, spreading false information or discrediting individuals or companies.

These videos can be distributed on social media or directly to stakeholders, causing reputational damage.

Fake endorsements and identity theft

Deepfakes are used to create fake endorsements from celebrities or industry leaders, tricking businesses or customers into believing in the legitimacy of a product or service.

Targeted phishing and social engineering

Deepfakes can be used to enhance spear phishing attacks.

For instance, an attacker might use a deepfake audio message or video that appears to come from a trusted source within the organization, making the phishing attempt more convincing and increasing the likelihood of a successful breach.

How can employees spot deepfake scams?

If bad actors can make convincing fakes of near enough anyone in your organization, you might be wondering how on earth you can actually prevent these kinds of attacks.

Whilst these attacks are far more sophisticated than your average phishing attempt, there are still some giveaways that employees should be aware of.

Check for inconsistencies

Deepfakes, particularly those involving video or audio, may contain subtle inconsistencies.

Look for unnatural facial movements, awkward lip-syncing, or mismatched audio and video quality.

Also, listen for odd intonations or changes in voice pitch that might not align with the speaker’s usual tone.

Example: A deepfake video might show a person’s face that appears slightly detached from the body or doesn’t fully match the usual mannerisms of the individual being impersonated.

Scrutinize unexpected requests

Be skeptical of unexpected or unusual requests, especially those involving financial transactions, sharing sensitive information, or urgent actions.

Deepfake scams often rely on creating a sense of urgency to bypass normal scrutiny.

Example: If a supposed executive sends an urgent video message asking for a wire transfer, verify the request through another trusted communication channel before taking action.

Verify with trusted sources

Always cross-check information or requests received through deepfake media with known, trusted sources.

For instance, if you receive a video or audio message from a colleague, consider calling them directly to confirm its legitimacy.

Example: If a deepfake impersonates a senior leader asking for sensitive details, contact that leader through a previously established method (like their direct phone line) to confirm.

Listen to your gut

If something feels off about a communication, it probably is.

Encourage employees to trust their instincts and report any suspicious content to their IT or security teams immediately.

Example: An employee might notice that a video message doesn’t quite “feel” right, even if they can’t pinpoint why. This should be flagged and investigated.


Which industries are the most at risk?

Financial services

Fintech saw an increase of 700% in deepfake attacks in 2023.

The financial sector is particularly vulnerable to deepfakes because of the high-value transactions involved and the reliance on identity verification processes.

Attackers can use deepfaked audio to mimic the voices of company executives to authorize large transactions.

And this technology can also be used to approve loans, authorize payments, or create false compliance records, whether it's through forging documents, altering video recordings, or adding fake signatures.

Insurance

Fraudulent insurance claims cost more than $308.6 billion annually.

Scammers can create fake claims using falsified images and documents.

They can even fabricate accidents entirely, using AI to create photographs (complete with metadata) and seemingly credible eyewitness testimony.

Deepfakes can also be used to generate fraudulent medical records, test results, and medical bills.

Media

Media organizations rely on trust and credibility.

Large-scale reputational damage can be caused if false content is widely spread through trusted platforms.

Deepfakes allow attackers to manipulate video and audio of prominent public figures, such as politicians, celebrities, or business leaders, to spread disinformation.

And media organizations can unwittingly amplify these fake messages.

The biggest concern is that deepfakes are successfully being used to influence elections.

Tools and tech for combating deepfake scams

AI-powered deepfake detection tools

These tools use advanced machine learning to detect subtle irregularities in video or audio files that indicate manipulation.

They analyze things like facial movements, lip-sync accuracy, and voice patterns to detect anomalies that are often present in deepfakes.
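Real detection products rely on trained deep models, but the underlying idea of flagging deviations from a known profile can be illustrated with a toy NumPy sketch. The two "voice features" (standing in for a pitch and an energy statistic), the baseline numbers, and the z-score comparison are all illustrative assumptions, not how any specific product works.

```python
import numpy as np

rng = np.random.default_rng(1)

# Baseline "voice features" for a known speaker (e.g. pitch in Hz, energy).
# In practice these would come from a trained model, not random numbers.
baseline = rng.normal(loc=[120.0, 0.6], scale=[5.0, 0.05], size=(200, 2))
mean, std = baseline.mean(axis=0), baseline.std(axis=0)

def anomaly_score(sample):
    """Mean absolute z-score of a sample against the speaker's baseline."""
    return float(np.mean(np.abs((sample - mean) / std)))

genuine = np.array([121.0, 0.61])  # close to the speaker's usual profile
cloned = np.array([150.0, 0.30])   # drifts far from the baseline

print(anomaly_score(genuine), anomaly_score(cloned))
```

The cloned sample scores far higher than the genuine one, which is the basic signal a detector thresholds on; production tools apply the same idea to thousands of learned features per frame.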

Blockchain technology for verification

Blockchain can be used to create immutable records of original video or audio files.

This makes it easier to verify the authenticity of content, as any alteration would be evident against the original blockchain entry.

You can also use blockchain to authenticate and timestamp content at the time of creation, ensuring that any future modifications are immediately detectable.
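As a rough illustration of the idea (not any specific blockchain platform), here is a minimal Python sketch: each entry records the SHA-256 hash of a media file and links to the previous entry, so any later copy of the file can be checked against the registered hash. The function names `add_block` and `verify` are made up for the example.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_block(chain, media_bytes):
    """Append a record of the media file's hash, linked to the previous block."""
    prev_hash = chain[-1]["block_hash"] if chain else "0" * 64
    media_hash = sha256(media_bytes)
    block_hash = sha256(json.dumps(
        {"media_hash": media_hash, "prev_hash": prev_hash},
        sort_keys=True).encode())
    chain.append({"media_hash": media_hash,
                  "prev_hash": prev_hash,
                  "block_hash": block_hash})

def verify(chain, index, media_bytes):
    """A file is authentic only if its hash matches the registered record."""
    return chain[index]["media_hash"] == sha256(media_bytes)

chain = []
original = b"original video bytes"
add_block(chain, original)

print(verify(chain, 0, original))                  # True: untouched file
print(verify(chain, 0, b"deepfaked video bytes"))  # False: altered content
```

Because each block's hash also covers the previous block's hash, rewriting an old record would invalidate every block after it, which is what makes the registered original tamper-evident.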

Watermarking and digital fingerprinting

Digital watermarking involves embedding a digital watermark in video or audio files, which can help trace the origin and verify the authenticity of the content.

Watermarks are often invisible to the naked eye but can be detected by specialized software.

Digital fingerprinting is a technique that creates a unique "fingerprint" of the original media file.

If the file is altered, the fingerprint changes, signalling a potential deepfake.

Facial recognition systems

Biometric authentication systems can compare the biometric data (like facial features) in a video with known images of the individual to determine if the video is genuine or a deepfake.

Many organizations use advanced facial recognition technology as an additional layer of verification for sensitive transactions or communications.

How can employee training help protect against deepfakes?

Investing in specific tools isn't the only way to prevent deepfake attacks.

Incorporating deepfakes into your security training will help strengthen your human firewall against these kinds of tactics.

Whatever tech or filters you have in place, some attacks will still make it to your employees...

Which makes them your last line of defense.

Raising awareness of advanced social engineering tactics

Whilst the end goal of any effective security training should be to tangibly change behavior, awareness is also an important step towards this.

One of the challenges deepfakes pose (at least the better ones) is that employees are likely unaware that this is a genuine threat they could fall victim to.

So although awareness alone won't have a dramatic impact on your overall security posture, it's essential that employees are actually made aware that deepfake scams exist and are often pretty convincing.

Simulating deepfake scams

Simulating deepfake scams will give employees a feel for what this faked digital content looks like in the wild.

Without simulating attacks, you won't have any idea whether or not your training is actually working.

You can give employees the know-how to spot deepfakes.

And you can show them real-life examples.

But this doesn't necessarily mean they'll catch convincing real-life attacks.

Most platforms won't offer deepfake attack simulations...

Which is why we decided to build this ourselves here at Hoxhunt (more on how it works below👇)

Building a security-first culture

When it comes to protecting against sophisticated, targeted cyber threats, vigilance is everything.

Most employees might be able to detect a fairly obvious, sloppily crafted phishing email.

But attackers will go to great lengths to trick employees into taking action.

If you receive a suspicious email from a colleague, best practices might suggest that you call them to verify this email...

But what happens when even a video call can be faked too?

This is why it's so important to foster a culture of vigilance and skepticism toward unsolicited or unusual communications.

Your security awareness training should not only educate employees about the potential dangers of deepfake technology and how it can be used in scams, but also empower them to act as a first line of defense.

Here's how Hoxhunt trains employees against deepfakes

Deepfake awareness training

Our deepfake training consists of three modules:

  • A practical guide
  • A real-life case study example
  • An actual deepfake where the person's voice has been cloned

These modules are available to all Hoxhunt customers.

Produce your own deepfakes

Want to take your phishing training a step further?

We can produce custom deepfakes for you and clone the voice of one of your employees/executives.

If you're looking to start testing your employees' ability to spot deepfakes, Hoxhunt can even create a multi-step deepfake attack on a fake Microsoft Teams call.

The target recipient gets a phishing email from a "boss" asking them to jump on a call through a provided link.

In the call, the deepfaked/cloned voice will invite employees to click on a phishing link (a safe and simulated one).

Deepfake scams FAQ

What are deepfake scams?

Deepfake scams involve using AI-generated media to impersonate individuals, typically high-ranking executives, to deceive employees or customers.

These scams often use fake videos or audio clips, where the scammers mimic the voices or faces of company executives to authorize fraudulent transactions or share misleading information.

How do deepfake scams work?

Deepfake scams typically involve the use of deepfake AI and Generative AI tools to create realistic video or audio content.

Scammers may use voice cloning technology to mimic a CEO or CFO's voice, instructing a financial officer to transfer funds to a fraudulent account.

How can businesses protect themselves from deepfake scams?

Businesses can implement several strategies to protect themselves, including:

  • Identity verification procedures for financial transactions.
  • Training employees on social engineering attacks and how to spot deepfake content.
  • Using Generative AI detection tools to identify and mitigate deepfake attempts.

How to spot a deepfake phone call

  • Unnatural speech patterns: Deepfakes may have irregular pauses, monotone delivery, or robotic inflections that differ from the usual way the person speaks.
  • Contextual inconsistencies: The caller might request unusual actions, like a sudden money transfer, or use incorrect information that the real person would normally know.
  • Audio quality issues: Deepfake technology may produce distorted or glitchy audio, especially during longer sentences.
  • Urgency and pressure: Bad actors often use urgency to force quick decisions, so be cautious if the caller insists on immediate action.
