At its first developer conference on Thursday, Anthropic launched two new AI models that the startup claims are among the industry's best, at least in terms of how they score on popular benchmarks.
Claude Opus 4 and Claude Sonnet 4, part of Anthropic's new Claude 4 family of models, can analyze large datasets, execute long-horizon tasks, and take complex actions, according to the company. Both models were tuned to perform well on programming tasks, Anthropic says, making them well suited for writing and editing code.
Both paying users and users of the company's free chatbot apps can access Sonnet 4, but only paying users can access Opus 4. Both models are available via the Anthropic API, Amazon's Bedrock platform, and Google's Vertex AI.
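For developers coming through the API, calling the new models looks much like calling earlier Claude models. The minimal sketch below uses Anthropic's Python SDK; the model identifier strings are assumptions for illustration, so check Anthropic's model list for the exact IDs.

```python
# Minimal sketch of calling a Claude 4 model through the Anthropic API.
# The model ID below is an assumption for illustration; consult Anthropic's
# documentation for the exact identifiers for Sonnet 4 and Opus 4.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed ID; swap in the Opus 4 ID for the larger model
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize the key trends in this sales dataset."}],
)
print(response.content[0].text)
```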
A token is a raw bit of data that an AI model works with; a million tokens is equivalent to about 750,000 words, or roughly 163,000 words more than "War and Peace."
Image Credits: Anthropic
Anthropic's Claude 4 models arrive as the company looks to grow revenue substantially. The outfit, founded by former OpenAI researchers, reportedly aims to reach $12 billion in revenue in 2027, up from a projected $2.2 billion this year. Anthropic recently closed a $2.5 billion credit facility and has raised billions of dollars from Amazon and other investors in anticipation of the rising costs associated with developing frontier models.
Rivals haven't made it easy to maintain pole position in the AI race. While Anthropic launched a new flagship model, Claude Sonnet 3.7, earlier this year alongside an agentic coding tool called Claude Code, competitors including OpenAI and Google have raced to outdo the company with powerful models and developer tooling of their own.
Anthropic is playing for keeps with Claude 4.
The more capable of the two models introduced today, Opus 4, can maintain "focused effort" across many steps in a workflow, Anthropic says. Meanwhile, Sonnet 4 – designed as a "drop-in replacement" for Sonnet 3.7 – improves on coding and math compared to Anthropic's previous models and follows instructions more precisely.
Anthropic claims that the Claude 4 family is less likely to engage in "reward hacking" than Sonnet 3.7. Reward hacking, also known as specification gaming, is the behavior in which a model takes shortcuts and exploits loopholes to complete a task.
To be clear, these improvements haven't yielded the world's best models on every benchmark. For example, while Opus 4 beats Google's Gemini 2.5 Pro and OpenAI's o3 and GPT-4.1 on SWE-bench Verified, a benchmark designed to evaluate a model's coding abilities, it can't surpass o3 on MMMU or GPQA Diamond.
Results of Anthropic's internal benchmark tests. Image Credits: Anthropic
Still, Anthropic is releasing Opus 4 under stricter safeguards, including beefed-up harmful content detectors and cybersecurity defenses. The company found in internal testing that Opus 4 could "significantly increase" the ability of someone with a STEM background to acquire, produce, or deploy chemical, biological, or nuclear weapons, reaching the threshold of Anthropic's "ASL-3" model specification.
Anthropic says that both Opus 4 and Sonnet 4 are "hybrid" models, capable of near-instant responses as well as extended thinking for deeper reasoning (to the extent AI can "reason" and "think" as we understand those concepts). With reasoning mode enabled, the models can take more time to consider possible solutions to a given problem before answering.
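In practice, reasoning mode is a per-request switch. The sketch below assumes the same extended-thinking parameter shape Anthropic introduced with Claude 3.7 Sonnet and an illustrative model ID and token budget; exact values may differ.

```python
# Sketch of enabling extended thinking ("reasoning mode") on a single request.
# The model ID and budget values are assumptions for illustration.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-20250514",                      # assumed model ID
    max_tokens=16000,                                    # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 8000}, # give the model room to reason first
    messages=[{"role": "user", "content": "Plan a migration of this database schema."}],
)

# The response interleaves "thinking" blocks (the summarized reasoning)
# with the final "text" blocks.
for block in response.content:
    print(block.type)
```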
As the models reason, they'll show a "user-friendly" summary of their thought process, Anthropic says. Why not show the whole thing? Partly to protect Anthropic's "competitive advantages," the company admits in a draft blog post provided to TechCrunch.
Opus 4 and Sonnet 4 can use multiple tools, like search engines, in parallel, and alternate between reasoning and tool use to improve the quality of their answers. They can also extract and store facts in "memory" to handle tasks more reliably, building what Anthropic describes as "tacit knowledge" over time. A sketch of what tool use looks like from the developer's side follows.
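The example below shows the general shape of handing a tool to the model via the Messages API; the `web_search` tool name and schema are hypothetical, and the model ID is an assumption. When the model opts to call tools in parallel, several `tool_use` blocks can appear in a single response.

```python
# Sketch of declaring a tool the model may call. The tool name, schema,
# and model ID here are illustrative assumptions, not Anthropic-defined values.
import anthropic

client = anthropic.Anthropic()

tools = [
    {
        "name": "web_search",  # hypothetical tool implemented by your application
        "description": "Search the web and return the top results as text.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Compare the latest Rust and Go releases."}],
)

# Collect every tool call the model requested in this turn; with parallel
# tool use there may be more than one.
calls = [block for block in response.content if block.type == "tool_use"]
for call in calls:
    print(call.name, call.input)
```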
To make the models more programmer-friendly, Anthropic is rolling out upgrades to the aforementioned Claude Code. Claude Code, which lets developers run specific tasks through Anthropic's models directly from a terminal, now integrates with IDEs and offers an SDK that lets developers connect it with third-party applications.
Announced earlier this week, the Claude Code SDK enables running Claude Code as a subprocess on supported operating systems, providing a way to build AI-powered coding assistants and tools that leverage the capabilities of Claude models.
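As a rough illustration of the subprocess pattern, the sketch below shells out to the `claude` CLI from Python. It assumes Claude Code is installed and that `-p` runs a one-shot, non-interactive prompt; flags and behavior may differ across versions, and the file path is hypothetical.

```python
# Sketch of driving Claude Code as a subprocess, per the SDK pattern.
# Assumes the `claude` CLI is on PATH and supports one-shot prompts via -p.
import subprocess

result = subprocess.run(
    ["claude", "-p", "Explain what src/parser.py does in two sentences."],  # hypothetical file
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```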
Anthropic has also released Claude Code extensions and connectors for Microsoft's VS Code, JetBrains, and GitHub. The GitHub connector lets developers tag Claude Code to respond to reviewer feedback, as well as to attempt to fix errors in, or otherwise modify, their code.
AI models still struggle to produce quality software. Code-generating AI tends to introduce security vulnerabilities and errors, owing to weaknesses in areas like the ability to understand programming logic. Yet their promise of boosting coding productivity is driving rapid adoption by companies and developers.
Anthropic, keenly aware of this, is also promising more frequent model updates.
"We're shifting to more frequent model updates, delivering a steady stream of improvements that bring breakthrough capabilities to customers faster," the startup wrote in the draft post. "This approach keeps you at the cutting edge as we continuously refine and enhance our models."