The list of flagship AI models that have missed their promised launch window continues to grow.
Last summer, billionaire Elon Musk, founder and CEO of AI company xAI, said xAI's next major AI model, Grok 3, would arrive by the “end of the year” in 2024. Grok is xAI's answer to models like OpenAI's GPT-4o and Google's Gemini; it can analyze images and answer questions, and it powers a number of features on X, Musk's social network.
Grok 3 “should be really special” after training on 100,000 H100s, Musk said in a July post on X, referring to xAI's massive GPU cluster in Memphis. “Grok 3 will be a huge leap forward,” he said in a follow-up post in mid-December.
But it's now January 2nd, and Grok 3 has yet to arrive. Nor is there any indication that its release is imminent.
In fact, code on xAI's website discovered by AI tipster Tibor Blaho suggests that an intermediate model, Grok 2.5, could arrive first.
Grok[.]com may soon release the Grok 2.5 model (grok-2-latest – “our most intelligent model”). Thanks for the tip, anon! pic.twitter.com/emsvmZyaf7
— Tibor Blaho (@btibor91) December 20, 2024
To be sure, this isn't the first time Musk has set a lofty goal and missed it. It's no secret that his timelines for product launches are often unrealistic at best.
And to be fair, in an interview with podcaster Lex Fridman in August, Musk said Grok 3 would be available “hopefully” in 2024, “if we're lucky.”
But Grok 3's MIA status is interesting because it's part of a growing trend.
Last year, AI startup Anthropic failed to deliver a successor to its top-of-the-line model, Claude 3 Opus. Months after announcing that its next-generation model, Claude 3.5 Opus, would be released by the end of 2024, Anthropic removed all mention of the model from its developer documentation. (According to one report, Anthropic finished training Claude 3.5 Opus sometime last year but decided it didn't make financial sense to release it.)
Google and OpenAI have also reportedly experienced setbacks with their flagship models in recent months.
This could be evidence of the limits of current AI scaling laws, the approaches companies have relied on to improve the capabilities of their models. In the not-so-distant past, it was possible to achieve significant performance improvements by training models with ever-larger amounts of computing power and ever-larger datasets. But the gains from each new model generation have begun to shrink, prompting companies to pursue alternative techniques.
Grok 3 trains with 10x and soon 20x more compute than Grok 2
— Elon Musk (@elonmusk) September 21, 2024
Musk himself alluded to this in his interview with Fridman.
“You're expecting [Grok 3] to be cutting edge?” Fridman asked.
“Hopefully,” Musk replied. “That is the goal. We may not achieve that goal, but that's the aspiration.”
There may be other reasons for Grok 3's delay. xAI's team, for example, is far smaller than those of many of its competitors. Nevertheless, the missed launch windows add to a growing body of evidence that traditional AI training approaches are hitting a wall.