If you’ve ever watched a video and thought “that looks a little off,” you’ve already encountered the edge of a problem that’s been growing fast. Deepfakes, once a novelty reserved for entertainment and internet mischief, have quietly become one of the more serious tools in a cyberattacker’s kit. And the gap between “looks a little off” and “completely indistinguishable from real” is closing faster than most organizations are prepared for.
The timing matters. Remote work, video conferencing, and digital-first communication have all normalized the idea of interacting with colleagues and executives through a screen. That shift created exactly the kind of environment where a convincing fake voice or face can do real damage. Attackers noticed.
Understanding how these attacks work, and what you can actually do about them, is no longer just a concern for security researchers. It’s practical business knowledge.
What Is a Deepfake Attack?
A deepfake attack uses AI-generated audio, video, or imagery to impersonate a real person convincingly enough to deceive someone into doing something they shouldn’t. That might mean wiring money, handing over credentials, approving access, or resetting an account.
The term comes from “deep learning” combined with “fake.” The underlying technology uses neural networks trained on real footage, voice recordings, and images to replicate a person’s appearance and mannerisms with unsettling accuracy. Early deepfakes were easy to spot if you knew what to look for. Today, it’s much harder.
Deepfakes don’t need to be perfect to be effective either. They just need to be convincing enough that a person under time pressure, trusting the authority of the “person” they’re seeing or hearing, doesn’t stop to question it.

How Deepfakes Are Actually Being Used in Attacks
Business Email Compromise and Executive Impersonation
The most financially damaging deepfake attacks follow a familiar pattern: an employee receives a call or video message from someone who appears to be a senior executive, usually urgent, usually requesting a wire transfer or sensitive information. The classic “CEO fraud” email scam, but with a face and a voice attached to it.
The most cited example happened in early 2024, when a finance worker at global design firm Arup was tricked into wiring $25 million after a deepfake video conference call that appeared to include multiple company colleagues, all of whom were fake. No systems were compromised. No malware was deployed. It was pure deception, and it worked.
This type of attack has scaled up significantly. Scammers have already demonstrated the ability to create real-time deepfake video of executives authorizing large payments, and the technology required to do so has become widely accessible. The average deepfake fraud costs businesses nearly $450,000, and those numbers will only climb.
IT Helpdesk Social Engineering
One of the more insidious uses of deepfakes is targeting IT helpdesks. The attack is straightforward: an attacker researches a real employee using social media and other public sources, then contacts the helpdesk impersonating that person, using a cloned voice, to request a password reset or MFA device enrollment.
Once the unsuspecting helpdesk agent resets the victim account’s multi-factor authentication, the attacker can reroute MFA codes to a device they control, then combine that access with credentials leaked in a previous breach to take over the employee’s account.
This exact playbook has been used by groups like Scattered Spider in a string of high-profile breaches at major UK retailers in 2025, costing those organizations hundreds of millions of dollars in losses. No malware or tooling was needed for initial access, putting the activity completely outside the visibility of endpoint detection and response systems. The helpdesk became the front door.
Voice Cloning Fraud
Voice-only attacks deserve their own mention because they’re arguably the most scalable form of deepfake fraud right now. Generating a convincing voice clone doesn’t require much source material, and it can be done in real time. As security researchers have noted, deepfake audio is now convincing enough that the voice of a co-worker you only speak to occasionally can be cloned and used live on a call without raising suspicion.
A 2025 Gartner survey of 302 cybersecurity leaders found that 43% reported at least one deepfake audio call incident and 37% experienced deepfakes in video calls. These aren’t fringe events anymore.
Credential Phishing and MFA Bypass
Deepfakes are also being layered into more traditional phishing campaigns to increase their effectiveness. In the Retool breach, attackers used SMS phishing to lure an employee to a fake login portal, then followed up with a call from someone impersonating an IT staffer using AI-generated speech. The result: MFA was bypassed and 27 accounts were compromised in a multi-stage, deepfake-assisted campaign that relied on timing, social engineering, and synthetic audio.
The pattern is worth noting. Deepfakes aren’t always the entire attack. Often they’re the ingredient that makes an otherwise detectable attack convincing enough to succeed.
How Bad Is the Problem Right Now?
Pretty bad, and getting worse. The Gartner survey we mentioned earlier found that 62% of organizations had experienced a deepfake attack in the preceding 12 months, encompassing social engineering, impersonation during calls, and exploitation of automated verification systems. Gartner also predicts that by 2026, 30% of enterprises will no longer consider identity verification solutions reliable in isolation because of deepfake threats.
Part of what’s driving this is accessibility. Deepfake-as-a-service platforms became widely available in 2025, making the technology accessible to cybercriminals of all skill levels. You no longer need technical expertise to run one of these attacks.

What You Can Actually Do About It
There’s no single fix here, but there are concrete, practical steps that make a real difference.
1. Tighten Your Helpdesk Verification Processes – If helpdesks are a primary attack vector (and they are), the verification process for account changes and credential resets needs to be robust. Security questions and one-time passcodes are not sufficient against deepfake voice attacks.
Security advisors now recommend requiring anyone requesting a credential reset to appear on camera with a physical ID, with the helpdesk holding a photo of that employee on file for visual comparison. In higher-risk scenarios, in-person verification is worth considering. Any reset of MFA credentials in particular should trigger elevated scrutiny, not just the standard process; a minimal sketch of that kind of policy gate follows below.
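As a rough illustration (not a prescription for any particular helpdesk or ticketing system), a reset-policy gate might encode those requirements like this. The field names and the manager-approval helper are hypothetical:

```typescript
// Hypothetical sketch of a helpdesk reset policy gate. Field names and the
// requireManagerApproval() helper are illustrative, not a real product API.

interface ResetRequest {
  employeeId: string;
  resetType: "password" | "mfa-device";
  cameraVerified: boolean;        // requester appeared on live video with a physical ID
  idMatchesPhotoOnFile: boolean;  // agent compared the ID against the photo on file
}

declare function requireManagerApproval(employeeId: string): Promise<boolean>;

async function allowReset(req: ResetRequest): Promise<boolean> {
  // Baseline: no reset without live video and an ID that matches the photo on file.
  if (!req.cameraVerified || !req.idMatchesPhotoOnFile) return false;

  // MFA resets get elevated scrutiny (e.g. manager sign-off), not just the standard flow.
  if (req.resetType === "mfa-device") {
    return requireManagerApproval(req.employeeId);
  }
  return true;
}
```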
2. Use Out-of-Band Verification for High-Stakes Actions – Payment approvals, access grants, and sensitive data requests need a second verification channel that isn’t the same one the request came through. If someone calls to authorize a wire transfer, call them back on a number already on record. If a “colleague” requests access via email, confirm through a different channel before acting.
This sounds simple, and it is. The challenge is building it into the process so that urgency and authority don’t override it in the moment; the sketch below shows the shape of that control.
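Here’s a minimal sketch, assuming a hypothetical directory lookup and confirmation-call helper. The important part is that the callback number comes from a system of record, never from the request itself:

```typescript
// Minimal sketch of out-of-band verification. lookupPhoneOnRecord() and
// placeConfirmationCall() are placeholders for whatever your stack provides.

interface WireRequest {
  requestedBy: string;          // employee ID the caller claims to be
  amount: number;
  callbackNumberGiven?: string; // number offered by the caller: deliberately ignored
}

declare function lookupPhoneOnRecord(employeeId: string): Promise<string>;
declare function placeConfirmationCall(phone: string, summary: string): Promise<boolean>;

async function verifyOutOfBand(req: WireRequest): Promise<boolean> {
  // Always call back on the number already on file, not the one in the request.
  const phoneOnRecord = await lookupPhoneOnRecord(req.requestedBy);
  return placeConfirmationCall(
    phoneOnRecord,
    `Confirm wire transfer of $${req.amount} requested in your name`
  );
}
```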
3. Apply Least Privilege Principles Rigorously – A lot of deepfake attacks succeed not because someone was deceived into handing over credentials, but because those credentials gave the attacker broad access once they had them. Limiting what any single account can do limits the blast radius of a successful impersonation.
This is where Admin By Request’s EPM solution is directly relevant. By removing standing admin rights and replacing them with just-in-time privilege elevation, even if an attacker successfully impersonates someone and gains access to their account, they can’t immediately run wild with admin-level permissions across the organization. Access is granted per request, per application, with a full audit trail. An attacker impersonating a regular user gets regular user access, not the keys to the kingdom.
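As a generic illustration of the just-in-time pattern (this is not Admin By Request’s actual API, and the helpers are placeholders), an elevation request might flow like this:

```typescript
// Generic illustration of just-in-time elevation: no standing admin rights,
// each elevation scoped to one application, approved per request, and logged.

interface ElevationRequest {
  userId: string;
  application: string;   // the single executable the user wants to run elevated
  reason: string;
}

declare function policyAllows(req: ElevationRequest): Promise<boolean>;
declare function auditLog(event: string, detail: object): Promise<void>;
declare function runElevated(application: string): Promise<void>;

async function requestElevation(req: ElevationRequest): Promise<void> {
  await auditLog("elevation.requested", req);

  if (!(await policyAllows(req))) {
    // An attacker who has only impersonated a regular user stops here.
    throw new Error("Elevation denied; request routed for manual approval");
  }

  // Elevation applies to this one application, for this one request; nothing standing.
  await runElevated(req.application);
  await auditLog("elevation.granted", req);
}
```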
4. Require MFA, But Choose It Carefully – MFA is still important, but the Retool case shows it’s not invincible against a well-executed deepfake social engineering attack. Phishing-resistant MFA, like hardware security keys or FIDO2-based authentication, is significantly harder to circumvent than app-based codes.
These methods use cryptographic keys tied to a specific device, so there’s no code to intercept or trick someone into entering on a fake portal.
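For context, here’s roughly what a FIDO2/WebAuthn login looks like in the browser. This is a simplified sketch: the challenge and credential ID would come from your own server, and production code would also handle errors and transport of the assertion:

```typescript
// Minimal sketch of a WebAuthn (FIDO2) assertion in the browser.
// The challenge and credentialId are issued by your server for this login attempt.
async function signInWithSecurityKey(
  challenge: Uint8Array,
  credentialId: Uint8Array
): Promise<PublicKeyCredential> {
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge,                                          // server-issued random challenge
      allowCredentials: [{ id: credentialId, type: "public-key" }],
      userVerification: "required",                       // PIN or biometric on the authenticator
      timeout: 60_000,
    },
  })) as PublicKeyCredential | null;

  if (!assertion) {
    throw new Error("Authentication was cancelled or failed");
  }

  // The signed assertion goes back to the server, which verifies it against the
  // public key registered at enrollment. There is no one-time code to intercept
  // or to trick someone into typing on a fake portal.
  return assertion;
}
```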
5. Train Your People, But Be Realistic About It – Security awareness training around deepfakes is worth doing, and some organizations are already using deepfake simulations of their own executives to train employees to question what they see and hear. That kind of hands-on training is more effective than a slideshow.
That said, human detection rates for high-quality deepfakes are around 24.5%. Training helps, but it can’t carry the entire load. Processes and technical controls need to back it up.
6. Build Approval Workflows That Don’t Rely Solely on Identity – The Gartner recommendation that stood out from their 2025 research: build authorization into the application layer, not just the identity layer. If a CFO calls to request a transfer, the payment should still require the CFO to log into the financial system, authenticate with phishing-resistant MFA, and actually approve the transaction there, not just verbally authorize it over a call. That way, even a perfect deepfake of the CFO can’t complete the fraud on its own.
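A hedged sketch of that idea, with placeholder helpers (loadPayment, hasApprovalAuthority, writeAuditEvent) standing in for whatever your finance system actually provides:

```typescript
// Hypothetical sketch of application-layer approval. The point is that approval
// requires a fresh, phishing-resistant login inside the finance system itself;
// a voice or a face on a call satisfies none of these checks.

interface Session {
  userId: string;
  mfaMethod: "fido2" | "totp" | "sms";
  mfaVerifiedAt: number; // epoch millis of the last successful authentication
}

declare function loadPayment(id: string): Promise<{ id: string; amount: number }>;
declare function hasApprovalAuthority(userId: string, payment: { amount: number }): boolean;
declare function writeAuditEvent(type: string, detail: object): Promise<void>;

async function approvePayment(session: Session, paymentId: string): Promise<void> {
  const FRESHNESS_MS = 5 * 60_000;

  if (session.mfaMethod !== "fido2") {
    throw new Error("Approvals require phishing-resistant (FIDO2) authentication");
  }
  if (Date.now() - session.mfaVerifiedAt > FRESHNESS_MS) {
    throw new Error("Re-authenticate with your security key before approving");
  }

  const payment = await loadPayment(paymentId);
  if (!hasApprovalAuthority(session.userId, payment)) {
    throw new Error("This user is not an authorized approver for this payment");
  }

  // The decision is recorded in-system, with an audit trail.
  await writeAuditEvent("payment.approved", { paymentId, approver: session.userId });
}
```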
The Bigger Picture
Deepfake attacks work because they exploit trust, and trust is load-bearing in most organizations. We extend it to familiar voices, familiar faces, authority figures, and urgent requests. Attackers are simply finding new ways to fake all of those things.
The organizations that will fare best aren’t necessarily the ones who can spot a deepfake on sight. They’re the ones who’ve built processes that don’t rely on it. Verification that doesn’t depend on visual or audio cues alone. Access controls that limit damage when someone does get deceived. Approval workflows that require action inside a controlled system, not just a convincing phone call.
Deepfakes are a new tool applied to a very old strategy: get someone to trust you enough to do what you want. The defenses that work are the ones that remove trust as the only safeguard.

