Agentic browsers are the latest productivity tools making waves in workplaces. These AI-powered browsers, like Perplexity’s Comet and OpenAI’s ChatGPT Atlas, do much more than traditional browsing. They can summarize pages, translate content, book flights, complete forms, and even navigate authenticated web sessions on your behalf.
The productivity gains are real, but so are the security risks. Back in October, we covered how agentic AI is creating security vulnerabilities across organizations. Agentic browsers take those same risks and embed them directly into the tool employees use most: their web browser.
How Agentic Browsers Work
Traditional browsers load webpages and stop there. Agentic browsers add two capabilities that fundamentally change the security picture:
AI Sidebar Functionality
An embedded AI assistant that can read, summarize, translate, and analyze whatever’s on your screen. This means sending content (including sensitive data from authenticated sessions) to external cloud services for processing.
Autonomous Transaction Capability
The browser can navigate websites, fill out forms, make purchases, and complete multi-step workflows without constant human oversight. It operates with your credentials and permissions, acting as your proxy across the web.
These features create what’s essentially another user on your network, except this one can be manipulated through prompt injection, doesn’t understand security policies, and has sweeping access to everything visible in your browser.

Three Security Risks You Can’t Ignore
1. Uncontrolled Data Exfiltration
When employees use agentic browsers, information gets sent to external AI services for processing: active web content, open tabs, browsing history, authenticated session data, internal dashboards, customer records, and financial data.
Most users don’t realize this is happening. They ask for a summary and assume the AI processed everything locally. That data left your network and went to cloud servers you don’t control. Once information reaches an external AI service, you can’t get it back.
Your security team can’t easily monitor these transfers because they look like normal HTTPS traffic. Traditional DLP tools struggle with the conversational nature of AI interactions.
2. Prompt Injection Vulnerabilities
Agentic browsers are vulnerable to indirect prompt injection, where malicious instructions get embedded in webpage content. When the browser’s AI processes that page, it follows those hidden instructions while appearing to handle the user’s legitimate request.
Research from Brave’s security team demonstrated this with Comet browser. They showed how a malicious Reddit comment could trick the AI into revealing a user’s email address when the user asked it to summarize the page. LayerX later described “CometJacking,” where a booby-trapped URL caused Comet’s AI layer to steal sensitive data exposed in the browser.
Employees won’t realize anything went wrong. The AI generates output that looks helpful, the employee runs it, and malicious actions execute with their full privileges.
3. Inadequate Phishing Protection
Testing showed ChatGPT Atlas blocked only 5.8% of phishing attacks, compared to Chrome’s 47% and Edge’s 53%. Users of AI browsers face significantly higher exposure to malicious websites than they would with standard browsers.
When your browser catches less than one in twenty phishing attempts, you’re relying almost entirely on user judgment. Add in autonomous capabilities that can navigate and interact with sites automatically, and credential theft becomes much easier for attackers.
The Risk-Reward Calculation Doesn’t Add Up
Research firm Gartner issued a blunt advisory in December 2024: organizations should block all AI browsers “for the foreseeable future” to minimize risk exposure. Their report, titled “Cybersecurity Must Block AI Browsers for Now,” states that default settings prioritize convenience over endpoint security.
When you look at what agentic browsers actually deliver versus the exposure they create, the math just doesn’t work for most organizations.
These browsers offer faster research, automated form filling, streamlined workflows, and multi-step task completion without manual intervention. The productivity gains exist. Employees using these tools can accomplish certain tasks faster than manual methods.
But you’re accepting uncontrolled data exfiltration to external AI services, prompt injection vulnerabilities that bypass traditional security, autonomous actions with user privileges that can be manipulated, phishing protection worse than standard browsers, and credentials exposed to AI processing.
Agentic browsers concentrate these risks into a single tool with sweeping access to authenticated sessions and sensitive data. The browser is the foundation of how employees access internal systems, customer data, financial records, email, and cloud applications. Organizations with strong security requirements, regulated data, or low risk tolerance can’t justify giving an AI agent that level of access.

What You Should Do Instead
Blocking agentic browsers might slow adoption, but it won’t stop employees from using AI tools that create similar risks. You need technical controls that limit damage regardless of which AI tool someone chooses:
Remove Standing Administrative Rights
When AI-generated commands can’t automatically run with elevated privileges, you’ve eliminated a major attack vector. Admin By Request’s EPM solution implements just-in-time privilege elevation with approval workflows. Every privileged action requires explicit approval and creates an audit trail, even if an employee trusts AI-generated output without reviewing it.
Segment Access to Sensitive Systems
Your most critical databases, file shares, and internal applications should require additional authentication that can’t be automated through a browser session. An agentic browser working through an employee’s context shouldn’t reach everything that employee technically has access to.
Monitor Privileged Activity Patterns
Track when elevated privileges are used and what processes run during those sessions. If employees constantly request elevation to run scripts (potentially AI-generated), that pattern should trigger review by your security team.
Treat AI-Generated Code Like Untrusted Input
Implement review requirements that treat AI-generated code the same way you’d treat code from someone who doesn’t understand enterprise security standards. Static analysis tools can catch common vulnerabilities before that code reaches production or runs with admin rights.
Rotate Credentials Regularly
If employees use AI tools to interact with internal systems, assume those credentials could be exposed through the AI platform. Regular rotation limits the window of exposure if an AI account or browser gets breached.
Agentic browsers will improve. Security controls will get better. Enterprises will develop governance models for AI agents. But organizations that implement proper privilege management now will be ready when agentic browsers become safe enough for enterprise use. Those that wait will find themselves dealing with breaches caused by AI agents that had too much access and not enough oversight.
Admin By Request EPM gives you the controls needed to protect your systems whether threats come from AI agents, malware, or compromised accounts. Try our free plan for up to 25 endpoints and see how just-in-time elevation prevents unauthorized commands from running with admin rights, or book a demo to walk through the solution.

