Cloud providers are chasing AI unicorns, the latest being Fei-Fei Li's World Labs. The startup has signed on Google Cloud as its primary compute provider for training its AI models, a deal that could be worth hundreds of millions of dollars. But Li's tenure as chief AI scientist at Google Cloud was not a factor, the company said.
At the Google Cloud Startup Summit on Tuesday, the companies announced that World Labs will spend a large portion of its funding to rent GPU servers on Google Cloud Platform, where it will eventually train “spatially intelligent” AI models.
A small cohort of well-funded startups building AI foundation models are among the most sought-after customers in cloud computing. Some of the largest such deals include OpenAI, which trains and runs its AI models exclusively on Microsoft Azure, and Anthropic, which uses AWS and Google Cloud. These companies already pay millions of dollars for compute, and as their AI models scale up, they may someday need to spend even more. That makes them valuable customers for Google, Microsoft, and AWS to build relationships with early on.
World Labs is certainly building unique multimodal AI models with big compute needs. The startup just raised $230 million at a valuation of more than $1 billion, in a deal led by Andreessen Horowitz, to build AI world models. James Lee, general manager of startups and AI at Google Cloud, told TechCrunch that World Labs' AI models will one day be able to process, generate, and manipulate video and geospatial data. World Labs calls this capability “spatial intelligence.”
Fei-Fei Li led Google Cloud's AI efforts until 2018 and has deep ties to the company. But Google denies that the deal is a product of that relationship, and rejects the idea that its cloud services are just a commodity. Rather, Lee said, factors such as its high-performance toolkits for scaling AI workloads and its plentiful supply of AI chips mattered more.
“Fei-Fei is clearly a friend of GCP,” Lee said in an interview. “GCP wasn't the only option they considered, but ultimately they chose us for all the reasons we talked about, namely the ability to meet their AI-optimized infrastructure and scalability needs. They came to us.”
Google Cloud offers AI startups a choice between Nvidia's GPUs, which Google must purchase and are in limited supply, and its own AI chips, tensor processing units (TPUs). Google Cloud is working to get more startups to train AI models on TPUs, primarily as a way to reduce its dependence on Nvidia. All cloud providers are currently constrained by the shortage of Nvidia GPUs, which is why many are building their own AI chips to meet demand. According to Google Cloud, some startups now run training and inference exclusively on TPUs, but GPUs remain the industry's favorite AI training chip.
In this deal, World Labs chose to train its AI models on GPUs, though Google Cloud did not reveal what drove that decision.
“We worked with Fei-Fei and her product team, and at this stage of their product roadmap, it made more sense for them to work with us on the GPU platform,” Lee said in an interview. “But that doesn't necessarily mean it's a permanent decision… in some cases [startups] migrate to another platform, such as TPUs.”
Lee did not reveal the size of World Labs' GPU cluster, but cloud providers often dedicate large supercomputers to startups training AI models. Google Cloud has promised Magic, another startup training AI foundation models, a cluster with “tens of thousands of Blackwell GPUs,” each more powerful than a high-end gaming PC.
These clusters are easier to promise than to deliver. Google's cloud rival Microsoft is reportedly struggling to meet OpenAI's enormous compute demands, forcing the startup to turn to other providers for computing power.
World Labs' agreement with Google Cloud is not exclusive, meaning the startup is free to sign deals with other cloud providers. However, Google Cloud says it expects to retain the majority of World Labs' business.