A single email sitting in your inbox just became a data theft vector. No clicks required, no suspicious attachments, no warning signs. That’s EchoLeak, the first documented zero-click vulnerability targeting an AI assistant.
Researchers at Aim Security discovered this critical flaw in Microsoft 365 Copilot earlier this year. Tracked as CVE-2025-32711 with a CVSS score of 9.3, the vulnerability allowed attackers to steal sensitive organizational data through carefully crafted email prompts.
Microsoft patched the issue in May, and there’s no evidence of real-world exploitation, but EchoLeak represents something bigger: the emergence of a new class of AI-specific security threats.
How EchoLeak Turned Helpful AI Into a Silent Data Thief
The attack exploits what researchers call an “LLM Scope Violation“, essentially tricking an AI model into accessing and leaking data outside its intended boundaries. Here’s how it works:
The Setup
An attacker sends what appears to be a normal business email to their target. Hidden within the message are instructions designed to manipulate Copilot’s behavior. The email never mentions AI or Copilot directly, instead reading like typical corporate communication about employee onboarding, HR processes, or project management.
The Trigger
When the victim later asks Copilot a business-related question, the AI’s Retrieval-Augmented Generation (RAG) system automatically scans available content for relevant information. This includes that seemingly innocent email in the inbox.
The Exploitation
Once Copilot processes the malicious email, the embedded instructions activate. The AI begins extracting sensitive data from across the Microsoft 365 environment – chat histories, OneDrive files, SharePoint documents, Teams conversations – and packages it into specially crafted URLs that send the information to the attacker’s server.
The victim never opens the malicious email or clicks any links. The attack happens entirely in the background while they’re using Copilot for legitimate work tasks.
Why Traditional Security Couldn’t Stop This Attack
EchoLeak succeeded because it exploited fundamental aspects of how AI assistants work. Microsoft 365 Copilot is designed to process both trusted internal data and external inputs without strict isolation, creating what researchers described as a “silent leak vector.”
The attack bypassed multiple security mechanisms:
- Cross-Prompt Injection Attack (XPIA) classifiers meant to detect malicious AI prompts
- Content Security Policy (CSP) designed to prevent unauthorized data transmission
- Link and image redaction systems that should block suspicious URLs
As Adir Gruss from Aim Security explained: “They tried to block it in multiple paths across the chain, but they just failed to do so because AI is so unpredictable and the attack surface is so big.”

The Real Problem: AI Systems Mixing Trusted and Untrusted Data
EchoLeak highlights a fundamental design challenge in modern AI assistants. These systems are built to be helpful by pulling information from everywhere they can access – your emails, documents, chat history, and external sources. But this same capability becomes dangerous when untrusted external input can manipulate the AI’s behavior.
Traditional software vulnerabilities usually stem from improper input validation. With AI systems, the challenge is that inputs are inherently unstructured and difficult to validate. A perfectly formatted email containing natural language instructions can bypass security filters precisely because it looks legitimate.
This isn’t just a Microsoft problem. Any AI system using Retrieval-Augmented Generation could be vulnerable if it processes external inputs alongside sensitive internal data. That includes customer service chatbots, enterprise AI assistants, and other AI-powered tools that organizations are rapidly adopting.
The Growing AI Security Arms Race
EchoLeak is likely just the beginning.
We’re entering an era where the same technology that strengthens our defenses also empowers attackers. AI tools that help security teams detect threats can be turned around to find new vulnerabilities. Systems designed to understand human language can be manipulated through carefully crafted instructions.
Organizations need to prepare for a new category of threats that exploit AI’s core strengths (its ability to understand context, follow instructions, and access vast amounts of data) against the systems they’re meant to protect.
What Organizations Can Learn From EchoLeak
While Microsoft fixed this specific vulnerability, the underlying security challenges aren’t going anywhere. Organizations deploying AI tools should consider several important lessons:
Rethink Trust Boundaries
Traditional security assumes clear boundaries between trusted and untrusted data. AI systems blur these lines by design, requiring new approaches to data isolation and access control.
Plan for AI-Specific Threats
Standard threat modeling may not account for prompt injection, scope violations, and other AI-specific attack vectors. Security teams need to expand their thinking about how these systems can be exploited.
Implement Comprehensive Monitoring
Organizations need detailed logging and monitoring of AI interactions with sensitive data. Understanding what information AI systems access and how they use it becomes critical for detecting potential compromises.
Maintain Human Oversight
While AI systems can enhance productivity, they shouldn’t operate without appropriate oversight, especially when accessing sensitive organizational data.

Microsoft’s Response and Industry Implications
Microsoft’s handling of EchoLeak demonstrates both the challenges and potential solutions for AI security. The company implemented server-side fixes without requiring customer action and has introduced additional controls like Data Loss Prevention (DLP) tags to restrict Copilot’s access to external emails.
However, enabling these protective controls can reduce Copilot’s functionality, highlighting the ongoing tension between security and usability in AI systems.
The vulnerability has prompted broader discussions about AI security standards and the need for new defensive approaches. As more organizations adopt AI assistants for business-critical tasks, the industry will need to develop security frameworks specifically designed for these systems.
Preparing for the Next Wave of AI Vulnerabilities
EchoLeak almost certainly won’t be the last zero-click AI vulnerability we see. As these systems become more sophisticated and deeply integrated into business operations, they’ll present increasingly attractive targets for attackers.
Organizations should start preparing now by establishing AI governance frameworks, implementing appropriate monitoring and logging, and ensuring their security teams understand the unique risks that AI systems introduce.