OpenAI released the new o1 models on Thursday, giving ChatGPT users their first chance to try out an AI model that pauses and “thinks” before responding. There are high hopes for these models, code-named “Strawberry” within OpenAI. But does Strawberry live up to the hype?
Sort of.
Compared to GPT-4o, the o1 model feels like one step forward and two steps back. OpenAI o1 excels at reasoning and answering complex questions, but it is roughly four times more expensive to use than GPT-4o, and it lacks the tools, multimodal capabilities, and speed that made GPT-4o so compelling. In fact, OpenAI acknowledges on its help page that “GPT-4o remains the best option for most prompts,” while noting elsewhere that o1 struggles with simpler tasks.
“It's a great achievement, but I don't think the improvement is that dramatic,” said Ravid Shwartz-Ziv, a professor at New York University who studies AI models. “It's better on certain problems, but it's not better across the board.”
For all these reasons, it's important to use o1 only for the questions it's actually designed for: the big ones. To be clear, most people aren't using generative AI to answer such questions today, largely because today's AI models aren't very good at them. But o1 is a tentative step in that direction.
Think big ideas
OpenAI o1 is unique in that it “thinks” before it gives an answer: it breaks large problems down into smaller steps and tries to catch itself when it gets one of those steps wrong. This “multi-step reasoning” isn't entirely new (researchers have proposed it for years, and You.com uses it for complex queries), but it has only recently become practical.
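OpenAI doesn't disclose how o1 actually does this, but the general pattern is easy to sketch. Here's a minimal, hypothetical Python illustration of multi-step reasoning, decompose, self-check, then assemble, using the official `openai` SDK; the prompts and the use of `gpt-4o` as a stand-in worker model are assumptions for the example, not anything o1-specific:

```python
# Illustrative sketch only: o1's real reasoning process is hidden and
# proprietary. This mimics the general "multi-step reasoning" idea with the
# official `openai` Python SDK (expects OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One round trip to a stand-in worker model."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed worker model for this sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def solve_step_by_step(problem: str) -> str:
    # 1. Break the big problem into smaller steps.
    plan = ask(f"Break this problem into short numbered steps, one per line:\n{problem}")
    steps = [line for line in plan.splitlines() if line.strip()]

    # 2. Work each step, asking the model to flag whether its result looks wrong.
    worked = []
    for step in steps:
        worked.append(ask(
            f"Problem: {problem}\n"
            "Completed steps:\n" + "\n".join(worked) + "\n"
            f"Carry out this step, then say whether the result looks right or wrong:\n{step}"
        ))

    # 3. Assemble a final answer from the checked steps.
    return ask(f"Problem: {problem}\nWorked steps:\n" + "\n".join(worked)
               + "\nGive a concise final answer.")
```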
“There's a lot of excitement in the AI community,” Kian Katanforoosh, Workera's CEO and an adjunct lecturer at Stanford University who teaches machine learning classes, said in an interview. “If you can train a reinforcement learning algorithm in conjunction with OpenAI's language model technology, you can technically create step-by-step thinking and have the AI model work backwards from the big idea you're trying to solve.”
OpenAI o1 is also very expensive. With most models, you pay for input tokens and output tokens. But o1 adds a hidden process (the one where the model breaks a large problem into smaller steps) that burns a ton of computation you never fully see, and OpenAI conceals some of its details to maintain a competitive advantage. You're still charged for that work, though, in the form of “reasoning tokens.” This is another reason to be careful about when you use OpenAI o1: you could be billed a pile of tokens just for asking where the capital of Nevada is.
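You can see those hidden tokens in the API's usage accounting. As a rough illustration, assuming the official `openai` Python SDK (the field names below reflect the API at the time of writing), reasoning tokens are reported separately but billed like output tokens, even though the reasoning text itself is never returned:

```python
# Sketch: inspecting how many hidden "reasoning tokens" an o1 call consumed.
# Assumes the official `openai` Python SDK with OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "What is the capital of Nevada?"}],
)

usage = resp.usage
print("prompt tokens:    ", usage.prompt_tokens)
print("completion tokens:", usage.completion_tokens)  # includes hidden reasoning
# Reasoning tokens are broken out separately; you pay for them at the
# output-token rate even though their text is never shown to you.
print("reasoning tokens: ", usage.completion_tokens_details.reasoning_tokens)
```

Even a one-line factual question can rack up a surprising number of reasoning tokens before the model emits its short answer.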
But the idea of an AI model that can help us “work backwards from the big idea” is a powerful one. In fact, this model is very good at doing so.
As an example, I asked o1-preview in ChatGPT to help me plan Thanksgiving for my family, a task that could benefit from a bit of impartial logic and reasoning. Specifically, I wanted help determining whether two ovens would be enough to cook Thanksgiving dinner for 11 people, and whether I should consider renting an Airbnb to get access to a third oven.
(Screenshots: Maxwell Zeff/OpenAI)
After 12 seconds of “thinking,” ChatGPT wrote a 750+ word answer, ultimately telling me that, with careful strategy, two ovens would be enough, saving me money and letting my family spend more time together. It also laid out its thinking at each step, explaining how it weighed all the external factors: cost, family time, oven management, and so on.
In my chat, o1-preview showed me how to prioritize oven space in the house hosting the event, which was clever, and, oddly enough, suggested I consider renting a portable oven for the day. Overall, the model performed far better than GPT-4o, which needed multiple follow-up questions about which dishes I was bringing and then offered thin advice I didn't find very helpful.
Asking about Thanksgiving dinner may seem silly, but it shows how this tool can help break down a complicated task.
I also asked o1 to help me plan a busy workday involving travel between airports, multiple in-person meetings in different locations, and time at the office. It produced a very detailed plan, perhaps too detailed: sometimes all the extra steps are simply overwhelming.
For simpler questions, o1 overdoes it; it doesn't know when to stop overthinking. Asked where cedar trees grow in the United States, it returned an 800+ word answer cataloging every variety of cedar in the country, scientific names included. At one point, for some reason, it even checked its answer against OpenAI's policies. GPT-4o handled the question much better, offering roughly three sentences explaining that cedar trees can be found all over the country.
Temper your expectations
In some ways, Strawberry never had a chance to live up to the hype. Reports about OpenAI's reasoning model date back to November 2023, right around the time everyone was hunting for an explanation for why OpenAI's board had fired Sam Altman. That set rumors swirling in AI circles, with some speculating that Strawberry was a form of AGI, the enlightened version of AI that OpenAI ultimately aims to create.
To clear up any doubt, Altman confirmed that o1 is not AGI, not that you could mistake it for one after using it. The CEO also lowered expectations around the launch, tweeting that “o1 is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”
The rest of the AI industry, meanwhile, is coming to terms with a launch that was less exciting than expected.
“This hype has, in some ways, happened outside of OpenAI's control,” said Rohan Pandey, a research engineer at ReWorkd, an AI startup that uses OpenAI's models to build web scrapers.
He hopes o1's reasoning ability is good enough to solve the niche, complicated problems that GPT-4 falls short on. That's likely how most of the industry now sees o1: useful for certain problems, but not the kind of revolutionary advance that GPT-4 was.
“Everyone's waiting for a step change in capabilities, and it's unclear whether this represents that. I think that's it,” said Mike Conover, CEO of BrightWave, who previously co-created Databricks' AI model Dolly, in an interview.
What is the value here?
The underlying techniques behind o1 go back years: Google used a similar approach in 2016 to create AlphaGo, the first AI system to beat a world champion at the board game Go, notes Andy Harrison, a former Googler and CEO of the venture firm S32. AlphaGo trained by playing against itself countless times, essentially teaching itself until it reached superhuman ability.
He points out that this brings up an old debate in the world of AI.
“The first camp thinks we can automate the workflow through this agent process. The second camp thinks that with general intelligence and reasoning, we don't need the workflow and the AI will just make the decisions, just like a human would,” Harrison said in an interview.
Harrison said he falls into the first camp; the second, he noted, requires you to trust that AI will make the right decisions, and he doesn't think we're there yet.
But some see o1 less as a decision-maker and more as a tool for stress-testing your own thinking on big decisions.
Katanforoosh, the Workera CEO, gave the example of someone about to interview a data scientist for a job at his company. That person tells OpenAI o1 that he has only 30 minutes and wants to assess a particular set of skills; the model can work backwards with him to check whether he's framing the interview correctly, with o1 factoring in the time constraint and the rest.
The question is whether this useful tool is worth its hefty price. AI models have been getting steadily cheaper, and o1 is one of the first genuinely expensive models we've seen in a long time.