AI Browser Security Risks: What to Know
Gartner just told enterprises to block all AI browsers (Gartner, December 2025). The advisory arrives as 27.7% of organisations already have at least one user with ChatGPT Atlas installed, with adoption highest in technology (67%), pharmaceuticals (50%), and finance (40%). These are sectors with the strictest security requirements (Cyberhaven, 2025).
The recommendation reflects documented vulnerabilities, not theoretical concerns. Security researchers have demonstrated attacks that hijack browser memories, exfiltrate corporate data, and trick AI agents into making unauthorised purchases.
Get threat intelligence like this delivered to your inbox. Subscribe to CyberDesserts for practical security insights, no fluff.
What Are AI Browsers?
AI browsers integrate large language models directly into browsing. Unlike traditional browsers where AI lives in a separate tab, these tools give AI agents access to everything you see and the ability to act on your behalf.
Current players:
- ChatGPT Atlas (OpenAI): Launched October 2025, combines browsing with Agent Mode that navigates sites autonomously
- Comet (Perplexity): Agentic browser that executes multi-step tasks across your authenticated sessions
- Claude for Chrome (Anthropic): Browser extension currently in limited beta with 1,000 Max plan users
- Edge Copilot Mode (Microsoft): Launched July 2025, with enterprise version announced at Ignite November 2025
Each offers genuine productivity gains. The security problem is that AI agents cannot reliably distinguish between your instructions and malicious commands hidden in web content.
Google and Microsoft Are Coming
The browser wars are accelerating. Both tech giants are adding agentic capabilities to their dominant browsers.
Google Chrome announced in September 2025 that Gemini would gain agentic browsing capabilities "in the coming months." The browser will complete multi-step tasks like booking appointments and ordering groceries autonomously (Google, September 2025). Google has already integrated Gemini Nano for real-time scam detection and is building a "User Alignment Critic" to prevent prompt injection attacks before full agent features launch.
Microsoft Edge for Business was unveiled at Ignite 2025 as "the world's first secure enterprise AI browser." Copilot Mode already offers Actions that complete tasks like making reservations and unsubscribing from emails. The enterprise version adds integration with Microsoft Graph, pulling in calendar, email, and document context while browsing (Microsoft, November 2025).
With Chrome holding 65% global browser share and Edge dominating enterprise environments, these capabilities will reach billions of users. Security teams should prepare now for the risks this scale introduces.
The Core Security Risks
Prompt Injection at Scale
Prompt injection ranks #1 on OWASP's LLM Top 10 for good reason. When an AI browser summarises a webpage, hidden text can hijack the agent. Attackers embed instructions in Reddit comments, email signatures, or invisible CSS. The AI follows them.
Brave's security team demonstrated this against Comet: a Reddit post with concealed commands caused the browser to access a victim's email, extract their address, retrieve an OTP, and send both to an attacker-controlled server (Brave, August 2025). No additional user interaction required after clicking "summarise."
Memory Corruption Attacks
Atlas introduced "Browser Memories" for persistent storage of browsing behaviour. LayerX researchers discovered attackers can inject malicious instructions into this memory via CSRF attacks. The corrupted memory persists across devices and sessions, activating whenever the user makes a legitimate query (LayerX, October 2025).
Credential and Session Exposure
AI browsers operate with your full privileges across all authenticated sessions. A successful attack gains potential access to banking, email, cloud storage, and corporate systems simultaneously. Traditional browser security models like same-origin policy become irrelevant when the AI itself follows malicious instructions.
Phishing Vulnerability
LayerX testing found Atlas 90% more vulnerable to phishing attacks than Chrome or Edge (LayerX, October 2025). Agentic browsers lack the mature anti-phishing infrastructure built into traditional browsers over two decades.
Documented Attack Techniques
| Attack | How It Works |
|---|---|
| CometJacking | Single malicious URL hijacks Comet's AI to exfiltrate email and calendar data. Base64 encoding bypasses data loss prevention checks. |
| HashJack | Malicious prompts hidden after the # symbol in legitimate URLs. Weaponises trusted sites to manipulate AI assistants. |
| Tainted Memory | CSRF exploits inject persistent instructions into ChatGPT's memory. Triggers code execution on future legitimate queries. |
| Screenshot Injection | Near-invisible text embedded in images is extracted via OCR and executed as commands when users screenshot webpages. |
| Zero-Click Data Wiper | Crafted emails instruct AI browser agents to delete entire Google Drive contents without user interaction. |
What Vendors Are Doing
OpenAI's CISO acknowledged prompt injection remains "a frontier, unsolved security problem" (The Register, October 2025). Anthropic reduced prompt injection success rates from 23.6% to 11.2% through mitigations, and blocked Claude for Chrome from financial services, adult content, and cryptocurrency sites entirely (Anthropic, August 2025).
Google announced a "User Alignment Critic" for Chrome. A second AI model reviews every action the primary agent wants to take. The oversight model never sees web content directly, creating separation between decision-making and potentially compromised data (Google, December 2025). The company is also offering $20,000 bounties for researchers who find flaws in these security boundaries.
Microsoft says Edge for Business will respect existing data protection policies and require explicit user approval for sensitive actions. Agent mode will not access passwords or payment data without permission (Microsoft, November 2025).
These are meaningful steps. None solve the fundamental problem that LLMs cannot reliably separate trusted instructions from untrusted content.
For Security Teams
Gartner's guidance is straightforward: block AI browsers until risks are better understood. For organisations that cannot implement blanket bans, consider these controls:
- Assess the back-end AI services powering each browser before permitting use
- Restrict access to sensitive systems. Keep AI browsers away from financial, HR, and authentication workflows
- Educate users that anything visible in the browser could be sent to cloud AI services
- Monitor for shadow adoption. ChatGPT Atlas had 62x more corporate downloads than Comet in its first week
- Develop incident response playbooks specific to AI agent compromise scenarios
Not sure where your AI security gaps are? Take our AI Security Maturity Assessment to get a prioritised action plan.
Summary
AI browsers represent a genuine productivity opportunity with genuinely unsolved security problems. The vendors building them acknowledge prompt injection has no reliable fix. Gartner recommends blocking them. Enterprise adoption is happening anyway.
For security teams, this requires a balanced approach across four domains:
- Governance: Establish clear policies on AI browser use, approved tools, and acceptable use cases
- Technical Controls: Implement blocking where possible, monitoring where not, and restrict access to sensitive systems
- Data Security: Assume anything visible in an AI browser may be processed by cloud services
- Human Factors: Train users to recognise that AI agents can be manipulated through content they browse
For the complete AI threat landscape including prompt injection, deepfakes, and agentic AI attacks, see our AI Security Threats: Complete Guide to Attack Vectors
AI browser security is evolving rapidly. Subscribers get notified when guidance changes, plus weekly practical security content. No sales pitches, no fluff.
This article is part of our AI Security Threats series. Last updated: December 2025
References and Sources
- Gartner. (December 2025). Cybersecurity Must Block AI Browsers for Now. Advisory recommending enterprises block AI browsers due to unmitigated risks.
- Cyberhaven. (October 2025). AI Browser Enterprise Adoption Report. 27.7% of organisations have Atlas users. Adoption highest in technology (67%), pharma (50%), finance (40%).
- LayerX Security. (October 2025). ChatGPT Tainted Memories Vulnerability. CSRF exploit allows persistent memory injection. Atlas 90% more vulnerable to phishing than Chrome/Edge.
- Brave Software. (August 2025). Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet. Demonstrated email/OTP exfiltration via hidden webpage instructions.
- Anthropic. (August 2025). Piloting Claude for Chrome. Prompt injection success rate reduced from 23.6% to 11.2%. High-risk site categories blocked.
- Cato Networks. (December 2025). HashJack Attack Technique. URL fragment-based prompt injection weaponising legitimate websites.
- Google. (September 2025). Chrome Reimagined with AI. Agentic browsing capabilities coming to Chrome in the coming months.
- Google Chrome Security. (December 2025). User Alignment Critic. Dual-model architecture to prevent indirect prompt injection in agentic browsing.
- Microsoft. (November 2025). Edge for Business: The World's First Secure Enterprise AI Browser. Copilot Mode with enterprise security controls announced at Ignite 2025.
- The Register. (October 2025). OpenAI defends Atlas as prompt injection attacks surface. OpenAI CISO acknowledges prompt injection as "unsolved security problem."
Member discussion