Anthropic, a close rival of OpenAI, has agreed to raise an additional $4 billion from Amazon and to make Amazon's cloud computing arm, Amazon Web Services (AWS), its primary cloud for training its flagship generative AI models.
Anthropic also said it is working with Annapurna Labs, AWS's chipmaking arm, to develop the next generation of Trainium, AWS's custom-built chips for training AI models.
“Our engineers will work closely with Annapurna's chip design team to extract maximum computational efficiency from the hardware, which we will use to train our state-of-the-art foundation models,” Anthropic said in a blog post. “Together with AWS, we are building the technology foundation that will power the next generation of AI research and development, from silicon to software.”
Amazon said in its own post that Anthropic will use Trainium to train future models and will deploy them with Inferentia, Amazon's in-house chip for speeding up model inference and serving.
The new capital injection brings the tech giant's total investment in Anthropic to $8 billion, though Anthropic said Amazon will remain a minority investor. Anthropic has raised $13.7 billion in venture capital to date, according to Crunchbase.
The Information reported earlier this month that Amazon was in talks to invest billions of dollars in Anthropic, its first financial commitment to the startup since a $4 billion deal closed last year. The new investment is reportedly structured much like the previous one, but with a twist: Amazon insisted that Anthropic use Amazon-developed silicon hosted on AWS to train its AI.
Anthropic is said to prefer Nvidia chips, but the money may have been too good to pass up. Earlier this year, it was reported that Anthropic expected to burn through more than $2.7 billion in 2024 as it trains and scales its AI products. According to The Information, Anthropic has been in talks for several months about raising new funding at a $40 billion valuation, so there was certainly pressure to land something soon.
Anthropic notes that its collaboration with AWS has grown over the years. Anthropic says its Claude family of models is used by “tens of thousands” of companies through Amazon Bedrock, AWS's platform for hosting and fine-tuning models.
Recently, Anthropic partnered with AWS and Palantir to provide U.S. intelligence and defense agencies with access to Claude.
Beyond AWS, Amazon is also said to be working with Anthropic to revamp its consumer products. Amazon will reportedly replace the model powering its Alexa virtual assistant with Anthropic's after running into technical challenges.
The partnership and investment have drawn regulatory scrutiny.
The FTC sent letters to Amazon, Microsoft, and Google earlier this year asking them to explain how their investments in startups like Anthropic affect the competitive landscape for generative AI. Google is also an investor in Anthropic, pumping $2 billion into the company at the end of last October for a 10% stake, while Microsoft is a major backer of OpenAI.
Britain's competition regulator, the Competition and Markets Authority, has launched multiple investigations into Big Tech's partnerships with AI startups. After greenlighting Amazon's investment last year, it most recently also cleared Alphabet's partnership with, and investment in, Anthropic.
Anthropic continues to keep pace with other frontier AI labs, including OpenAI, shipping new capabilities such as Computer Use, which lets its latest models perform tasks autonomously on a PC. But the company has faced setbacks as well: it recently raised the price of one of its models without warning, and the release of its next-generation flagship model, Claude 3.5 Opus, has reportedly been pushed back.