Anthropic released a preview of its new Frontier model, Mythos, on Tuesday. Anthropic says the model will be used by a small group of partner organizations for cybersecurity work. In a previously leaked memo, the AI startup called the model one of its “most powerful” to date.
The model's limited debut is part of a new security initiative called Project Glasswing, in which more than 40 partner organizations will deploy the model for “defensive security work” and to secure critical software, Anthropic said. Although not specifically trained for cybersecurity work, Mythos will be used to scan code for vulnerabilities in both first-party and open source software systems, the company said.
Anthropic claims that over the past few weeks, Mythos has “identified thousands of zero-day vulnerabilities, many of which were critical.” Many of the vulnerabilities are 10 to 20 years old, the company added.
Mythos is a general-purpose model in Anthropic's Claude AI family, which the company claims has powerful agentic coding and reasoning capabilities. Anthropic's Frontier models are considered its most sophisticated, highest-performance models, designed for more complex tasks such as coding and building agents.
Partner organizations previewing Mythos include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the effort, these partners will ultimately share what they learn from using the model so that the broader technology industry can benefit. Anthropic said the preview is not expected to be made available to the public.
Anthropic also claims to be in “ongoing discussions” with federal officials about the use of Mythos. Those discussions are likely complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle: after the Department of Defense designated the AI lab a supply chain risk, Anthropic refused to allow its models to be used to autonomously target or monitor American citizens.
The Mythos news originally leaked in a data security incident reported by Fortune magazine last month. A draft blog post about the model (then called “Capybara”) was left in an insecure cache of documents on a publicly accessible data lake. The breach, which Anthropic later attributed to “human error,” was first discovered by security researchers. “'Capybara' is the new name for a new layer of models that is larger and more intelligent than our most powerful Opus model to date,” the leaked document says, later adding that it is “the most powerful AI model we have ever developed.”
In the leaked draft, Anthropic claimed that its new model far exceeds currently available public models in areas such as “software coding, academic reasoning, and cybersecurity,” and that it could potentially pose a cybersecurity threat if bad actors were to find and weaponize bugs for exploitation rather than fix them. That risk helps explain the limited, defense-focused way Mythos is being deployed.
Last month, a mistake in the release of version 2.1.88 of the company's Claude Code software package resulted in Anthropic accidentally publishing approximately 2,000 source code files and more than 500,000 lines of code. While attempting to clean up the mess, the company then accidentally deleted thousands of code repositories on GitHub.

