Let's call it the Reasoning Renaissance.
Following the release of OpenAI's o1, a so-called reasoning model, there has been an explosion of reasoning models from rival AI labs. In November, DeepSeek, an AI research firm backed by quantitative traders, began previewing its first reasoning model, DeepSeek-R1. That same month, Alibaba's Qwen team announced what it claimed was the first “open” challenger to o1.
So what opened the floodgates? Well, for one, the search for new ways to refine generative AI. As my colleague Max Zeff recently reported, “brute force” methods of scaling up models no longer yield the improvements they once did.
AI companies are under intense competitive pressure to maintain the current pace of innovation. According to some estimates, the global AI market reached $196.63 billion in 2023 and could be worth $1.81 trillion by 2030.
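For a sense of scale, here is a back-of-the-envelope check on what those estimates imply, assuming steady compounding between 2023 and 2030 (the dollar figures are the estimates cited above):

```python
# Implied growth rate of the global AI market, using the cited estimates:
# $196.63 billion in 2023 growing to $1.81 trillion by 2030.

start_value = 196.63e9   # 2023 estimate (USD)
end_value = 1.81e12      # 2030 projection (USD)
years = 2030 - 2023

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~37.3%
```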
OpenAI, for one, claims that its reasoning models can “solve harder problems” than previous models and represent a step change in generative AI development. But not everyone is convinced that reasoning models are the best path forward.
Ameet Talwalkar, an associate professor of machine learning at Carnegie Mellon University, said he finds the first crop of reasoning models “very impressive.” But he also told me that he “questions the motives” of those making confident claims about how far reasoning models will take the industry.
“AI companies have a financial incentive to offer rosy predictions about the capabilities of future versions of their technology,” Talwalkar said. “We run the risk of myopically focusing on a single paradigm, which is why it’s important for the broader AI research community to avoid blindly believing the hype and marketing efforts of these companies and instead focus on concrete results.”
Reasoning models have two notable downsides: they’re expensive, and they’re power-hungry.
In OpenAI’s API, for example, the company charges $15 for every roughly 750,000 words o1 analyzes and $60 for every roughly 750,000 words it generates. That’s three to four times the cost of OpenAI’s latest “non-reasoning” model, GPT-4o.
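To put those prices in more concrete terms: OpenAI bills per token rather than per word, and roughly 750,000 words works out to about a million tokens. Here is a rough sketch of what a single o1 API call might cost, assuming the per-million-token rates cited above (the token counts in the example are made up):

```python
# Rough cost estimate for an o1 API call, assuming the rates cited above:
# $15 per ~1M tokens analyzed and $60 per ~1M tokens generated.
# Reasoning models also bill for hidden "reasoning" tokens, which makes
# real-world costs harder to predict; this sketch ignores that detail.

INPUT_COST_PER_MILLION = 15.00   # USD per 1M input tokens
OUTPUT_COST_PER_MILLION = 60.00  # USD per 1M output tokens


def estimate_o1_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate dollar cost of one request."""
    return (
        input_tokens / 1_000_000 * INPUT_COST_PER_MILLION
        + output_tokens / 1_000_000 * OUTPUT_COST_PER_MILLION
    )


# Example: a 2,000-token prompt that yields a 10,000-token response.
print(f"${estimate_o1_cost(2_000, 10_000):.2f}")  # ~$0.63
```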
o1 is available for free, with limits, in ChatGPT, OpenAI’s AI-powered chatbot platform. But earlier this month, OpenAI introduced o1 pro mode, a more capable o1 tier that costs a staggering $2,400 per year.
“The overall cost of [large language model] reasoning is certainly not going down,” Guy Van den Broeck, a computer science professor at UCLA, told TechCrunch.
One reason reasoning models are so expensive is that they require large amounts of computing resources to run. Unlike most AI, o1 and other reasoning models attempt to check their own work as they go. That helps them avoid some of the pitfalls that normally trip up models, but the downside is that they often take much longer to arrive at a solution.
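OpenAI hasn’t published the details of how o1 checks itself, but the general pattern, spending extra compute at inference time to generate and verify candidate answers before committing to one, can be sketched in a few lines. The generate and looks_consistent functions below are hypothetical stand-ins for model calls, not OpenAI’s actual method:

```python
# Illustrative sketch of inference-time self-checking, not OpenAI's actual
# method. The idea: trade extra compute (and time) for fewer mistakes by
# generating candidate answers and verifying them before responding.

from typing import Callable


def answer_with_self_check(
    prompt: str,
    generate: Callable[[str], str],                # hypothetical: samples a candidate answer
    looks_consistent: Callable[[str, str], bool],  # hypothetical: model critiques its own work
    max_attempts: int = 8,
) -> str:
    """Keep sampling until a candidate passes the model's own check.

    Every extra attempt costs another round of generation plus a
    verification pass, which is why this approach is slower and more
    expensive than answering in a single shot.
    """
    candidate = ""
    for _ in range(max_attempts):
        candidate = generate(prompt)
        if looks_consistent(prompt, candidate):
            return candidate
    return candidate  # fall back to the last attempt if nothing passes


# Toy usage with stand-in functions; a real system would call an LLM here.
print(answer_with_self_check(
    "2 + 2 = ?",
    generate=lambda p: "4",
    looks_consistent=lambda p, a: a.strip() == "4",
))
```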
OpenAI envisions future reasoning models “thinking” for hours, days, or even weeks. The company concedes that usage will be more expensive, but the payoff, from breakthrough batteries to new cancer drugs, could be worth it.
The value proposition of today’s reasoning models is less clear. Costa Huang, a researcher and machine learning engineer at the nonprofit Ai2, notes that o1 isn’t a very reliable calculator. And a cursory search on social media turns up plenty of o1 pro mode mistakes.
“These reasoning models are specialized and can underperform in general domains,” Huang told TechCrunch. “Some limitations will be overcome sooner than others.”
Van den Broeck contends that reasoning models don’t perform actual reasoning, which limits the types of tasks they can successfully tackle. “True reasoning works on all problems, not just those that are likely [to appear in a model’s training data],” he said. “That’s still the main challenge to overcome.”
Given the strong market incentives, there’s no doubt that reasoning models will improve over time. As it turns out, OpenAI, DeepSeek, and Alibaba aren’t the only companies investing in this new line of AI research. Venture capitalists and founders in adjacent industries are rallying around the idea of a future dominated by reasoning AI.
But Talwalkar worries that the big labs will end up gatekeeping these improvements.
“It’s understandable that the big labs are secretive for competitive reasons, but this lack of transparency severely hampers the research community’s ability to engage with these ideas,” he said. “I’m hopeful that as more people work in this direction, [progress on reasoning models] will be rapid. While some ideas will come from academia, given the economic incentives here, most, if not all, models will be offered by large industrial labs like OpenAI.”