OpenAI is working to harden its Atlas AI browser against prompt injection, a type of attack that manipulates an AI agent into following malicious instructions hidden in web pages or emails. The company acknowledges the risk isn't going away anytime soon, raising questions about how securely AI agents can operate on the open web.
“As with fraud and social engineering on the web, prompt injection attacks are unlikely to be fully 'solved',” OpenAI said in a blog post on Monday detailing how the company is hardening Atlas' defenses against the constant attacks. The company acknowledged that ChatGPT Atlas' “Agent Mode” “expands the surface of security threats.”
OpenAI announced its ChatGPT Atlas browser in October, and security researchers rushed to release demos showing that they could change the browser's behavior by writing a few words in a Google Doc. That same day, Brave published a blog post explaining why indirect prompt injection is a systemic challenge for AI-powered browsers, including Perplexity's Comet.
OpenAI isn't the only company to realize that prompt injection isn't going away. Britain's National Cyber Security Centre warned earlier this month that prompt injection attacks on generative AI applications “may not be fully mitigated,” leaving websites at risk of data breaches. The UK agency advised cybersecurity practitioners to reduce the risk and impact of prompt injections rather than assume attacks can be “stopped” outright.
For its part, OpenAI said, “We believe prompt injection is a long-term AI security challenge, and we need to continually strengthen our defenses against it.”
What's the company's answer to this Sisyphean challenge? OpenAI says its cycle of proactive discovery and rapid response is showing early promise, helping it uncover new attack strategies before they can be exploited “in the wild.”
That's not so different from what competitors like Anthropic and Google claim: defenses must be layered and continually stress-tested to counter the persistent risk of prompt injection attacks. Google's recent efforts, for example, have focused on architectural and policy-level controls for agent systems.
What OpenAI does differently is deploy an “LLM-based automated attacker,” essentially a bot trained with reinforcement learning to play the role of a hacker looking for ways to secretly slip malicious instructions to an AI agent.
The bot can test attacks in simulation before they are ever used for real, and the simulator shows how the target AI would think and act if it encountered the attack. The bot can then study that response, fine-tune its attack, and try again and again. In theory, OpenAI's bot should be able to discover flaws faster than real-world attackers, since it has visibility into the target AI's internal reasoning that outsiders lack.
This is a common tactic in AI safety testing: build an agent to hunt for edge cases, and test them quickly in simulation.
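OpenAI hasn't published how its automated attacker works internally, but the loop it describes, propose an attack, run it against a simulated target, learn from the outcome, retry, is easy to sketch in miniature. The toy below is purely illustrative: a bandit-style update stands in for real reinforcement learning, a hard-coded stub stands in for the simulated target agent, and every strategy name and probability is invented.

```python
import random
from collections import defaultdict

# Toy sketch of an automated red-team loop -- NOT OpenAI's actual system.
# An "attacker" picks an injection strategy, a stub "target agent" simulator
# reports whether the attack slipped through, and the attacker's policy is
# nudged toward strategies that succeed more often.

STRATEGIES = [
    "plain instruction in page text",
    "instruction hidden in an email footer",
    "instruction split across many small steps",
]

def simulate_target_agent(strategy: str) -> bool:
    """Stub simulator: returns True if the injection succeeds.

    A real setup would run the target model and inspect its reasoning;
    here we just hard-code rough success odds per strategy.
    """
    odds = {
        "plain instruction in page text": 0.05,
        "instruction hidden in an email footer": 0.2,
        "instruction split across many small steps": 0.4,
    }
    return random.random() < odds[strategy]

def red_team(iterations: int = 2000, epsilon: float = 0.1) -> dict:
    value = defaultdict(float)   # estimated success rate per strategy
    counts = defaultdict(int)
    for _ in range(iterations):
        # Epsilon-greedy: usually exploit the best-known strategy, sometimes explore.
        if random.random() < epsilon or not value:
            strategy = random.choice(STRATEGIES)
        else:
            strategy = max(STRATEGIES, key=lambda s: value[s])
        reward = 1.0 if simulate_target_agent(strategy) else 0.0
        counts[strategy] += 1
        value[strategy] += (reward - value[strategy]) / counts[strategy]
    return dict(value)

if __name__ == "__main__":
    for strategy, success_rate in sorted(red_team().items(), key=lambda kv: -kv[1]):
        print(f"{success_rate:.2f}  {strategy}")
```

Running the sketch surfaces the strategy the stub target is weakest against; the real system presumably does the same at far greater scale, with a learned attacker and a full simulation of the agent.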
“Our [reinforcement learning]-trained attackers can coax agents into executing long-lasting, sophisticated, and harmful workflows that unfold over dozens (or even hundreds) of steps. We also observed novel attack strategies that had not appeared in human red teaming efforts or external reports,” OpenAI wrote.
Image credit: OpenAI
In a demo (partially pictured above), OpenAI showed how the automated attacker could sneak a malicious email into a user's inbox. Later, when the AI agent scanned the inbox, it followed the instructions hidden in the email and sent a resignation message instead of creating an out-of-office reply. After a security update, the company says, Agent Mode successfully detected the prompt injection attempt and flagged it to the user.
The company says prompt injections are difficult to defend against in a foolproof manner, so it relies on extensive testing and faster patch cycles to harden its systems before new techniques show up in real attacks.
An OpenAI spokesperson declined to say whether Atlas' security updates have led to a measurable reduction in successful injections, but said the company has been working with third parties to harden Atlas against prompt injection since before its launch.
Rami McCarthy, principal security researcher at cybersecurity firm Wiz, said reinforcement learning is one way to continually adapt to an attacker's behavior, but it's only part of the picture.
“A useful way to infer risk in an AI system is to multiply autonomy with access,” McCarthy told TechCrunch.
“Agent browsers tend to sit at the difficult end of that spectrum, combining moderate autonomy with very high access,” McCarthy said. “Many of the current recommendations reflect that trade-off: restricting logged-in access primarily reduces risk, while requiring users to review confirmation requests constrains autonomy.”
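McCarthy's heuristic is simple enough to write down. The sketch below applies it to a few hypothetical browser configurations; the 0-to-1 scales and the scores are invented for illustration and don't come from Wiz or OpenAI.

```python
# Back-of-the-envelope version of the "autonomy times access" heuristic.

def risk_score(autonomy: float, access: float) -> float:
    """Both inputs on a rough 0-1 scale; risk grows with their product."""
    return autonomy * access

# Hypothetical configurations of an agentic browser:
configs = {
    "logged out, read-only browsing":       (0.3, 0.2),
    "logged in, asks before every action":  (0.3, 0.9),
    "logged in, acts without confirmation": (0.9, 0.9),
}

for name, (autonomy, access) in configs.items():
    print(f"{risk_score(autonomy, access):.2f}  {name}")
```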
Limiting logged-in access and confirming sensitive actions are two of OpenAI's own recommendations for helping users reduce their risk, and a spokesperson said Atlas is also trained to obtain confirmation from users before sending messages or making payments. OpenAI also suggests giving the agent specific instructions, rather than handing it access to an inbox and telling it to “perform the required action.”
According to OpenAI, that kind of broad leeway makes it easier for hidden or malicious content to influence agents, even when safety measures are in place.
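Taken together, those two mitigations look roughly like the gate sketched below: a hypothetical example (none of these function or tool names come from Atlas) in which the agent's tools are scoped to the task at hand and any sensitive action bounces back to the user for confirmation.

```python
# Minimal sketch of scoped tools plus a confirmation gate -- invented names, not Atlas' API.

SENSITIVE_ACTIONS = {"send_message", "make_payment"}

def confirm(action: str, details: str) -> bool:
    """Pause and ask the human before irreversible steps."""
    answer = input(f"Agent wants to {action}: {details!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, details: str, allowed_actions: set[str]) -> str:
    # A narrow task scope means the agent simply can't reach tools it doesn't need.
    if action not in allowed_actions:
        return f"blocked: '{action}' is outside this task's scope"
    # Sensitive actions always go back to the user, even when in scope.
    if action in SENSITIVE_ACTIONS and not confirm(action, details):
        return f"cancelled: user declined '{action}'"
    return f"executed: {action}"

# "Summarize today's unread email" should only ever need read access,
# so a hidden instruction to send a resignation letter goes nowhere:
print(execute("send_message", "resignation letter to my manager",
              allowed_actions={"read_email", "summarize"}))
```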
OpenAI says protecting Atlas users from prompt injections is a top priority, but McCarthy is skeptical about the return on investment for the risk-prone browser.
“For most everyday use cases, agent browsers still don't provide enough value to justify their current risk profile,” McCarthy told TechCrunch. “Given their access to sensitive data like email and payment information, the risks are high, even though that access is exactly what makes them powerful. That balance will evolve, but the trade-offs are still very real today.”

