OpenAI is bringing o1, its "reasoning" AI model, to its API, but only for certain developers to start.
Starting Tuesday, o1 will begin rolling out to developers in OpenAI's "Tier 5" usage category, the company said. To qualify for Tier 5, developers must have spent at least $1,000 with OpenAI and hold an account that is at least 30 days old, counted from their first successful payment.
o1 replaces o1-preview, the model that was already available in the API.
Unlike most AI models, reasoning models like o1 effectively fact-check themselves, which helps them avoid some of the pitfalls that typically trip up models. The downside is that they often take longer to arrive at solutions.
o1 is also very expensive, in part because of the heavy computing resources the model needs to run. OpenAI charges $15 for roughly every 750,000 words o1 analyzes and $60 for roughly every 750,000 words it generates. That is six times the cost of OpenAI's latest "non-reasoning" model, GPT-4o.
The o1 in OpenAI's API is far more customizable than o1-preview thanks to new features such as function calling (which lets the model connect to external data), developer messages (which let developers instruct the model on tone and style), and image analysis. In addition to structured outputs, o1 has an API parameter, "reasoning_effort," that controls how long the model "thinks" before responding to a query.
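For a sense of how those knobs fit together, here is a minimal sketch of a request through OpenAI's official Python SDK. The "developer" role, the tools list for function calling, and the reasoning_effort parameter follow OpenAI's API reference; the get_weather function and the specific values are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-2024-12-17",
    reasoning_effort="low",  # "low", "medium", or "high": how long o1 "thinks"
    messages=[
        # A developer message dictates tone and style to the model.
        {"role": "developer", "content": "Answer tersely, in plain English."},
        {"role": "user", "content": "What's the weather in Berlin right now?"},
    ],
    # Function calling: declare an external function the model may invoke.
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function, for illustration only
            "description": "Fetch the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)

print(response.choices[0].message)
```

If the model decides to call get_weather, the response carries a tool call with JSON arguments rather than a final answer; the application is then expected to run the function and send the result back in a follow-up message.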
OpenAI says the version of o1 in its API, and soon in its AI chatbot platform ChatGPT, is a "new post-trained" version of o1. Compared with the o1 model released in ChatGPT two weeks ago, this one, o1-2024-12-17, improves on "areas of model behavior based on feedback," the company vaguely said.
"We are rolling out access incrementally while working to expand access to additional usage tiers and ramping up rate limits," the company said in a blog post.
In a note on its website, OpenAI said the latest o1 should provide "more comprehensive and accurate responses," particularly for questions about programming and business, and is less likely to incorrectly refuse requests.
In other dev-related news on Tuesday, OpenAI announced that it is bringing new versions of GPT-4o and GPT-4o mini to the Realtime API, its API for building apps with low-latency, AI-generated voice responses. The new models ("gpt-4o-realtime-preview-2024-12-17" and "gpt-4o-mini-realtime-preview-2024-12-17") have improved data efficiency and reliability, and are also cheaper to use, OpenAI said.
Speaking of the Realtime API (no pun intended), while it remains in beta, it has gained several new capabilities, such as concurrent out-of-band responses, which let background tasks like content moderation run without interrupting an interaction. The API also now supports WebRTC, the open standard for building real-time voice applications for browser-based clients, smartphones, and Internet of Things devices.
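To make the out-of-band idea concrete, here is a rough sketch of such a request over the Realtime API's WebSocket interface, using the third-party websockets package. The event shape ("response.create" with "conversation": "none") follows OpenAI's Realtime documentation, but treat the details, including the moderation instructions, as illustrative assumptions rather than a tested recipe:

```python
import asyncio
import json
import os

import websockets  # pip install websockets (v14+ renames extra_headers to additional_headers)

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-12-17"

async def main():
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    async with websockets.connect(URL, extra_headers=headers) as ws:
        # "conversation": "none" marks this as an out-of-band response: the
        # server generates it outside the default conversation, so a background
        # task (here, a moderation pass) doesn't interrupt the live interaction.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "conversation": "none",
                "modalities": ["text"],
                "instructions": "Classify the user's last utterance as safe or unsafe.",
                "metadata": {"purpose": "moderation"},  # tag so the result can be matched
            },
        }))
        async for raw in ws:
            event = json.loads(raw)
            if event["type"] == "response.done":
                print(event["response"])  # the moderation verdict, as text
                break

asyncio.run(main())
```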
Not coincidentally, OpenAI hired WebRTC creator Justin Uberti in early December.
"Our WebRTC integration is designed to enable smooth and responsive interactions in real-world conditions, even with variable network quality," OpenAI wrote in a blog post. "It handles audio encoding, streaming, noise suppression, and congestion control."
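In the WebRTC flow, the browser exchanges audio with the model directly, so a backend's main job is minting a short-lived credential for the client. A minimal server-side sketch, assuming the documented /v1/realtime/sessions endpoint and the requests package; the returned client_secret is what a browser-based WebRTC client would present instead of the real API key:

```python
import os
import requests

# Mint an ephemeral key for a browser client. The standard API key stays on
# the server; only the short-lived client_secret is handed to the browser.
resp = requests.post(
    "https://api.openai.com/v1/realtime/sessions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-realtime-preview-2024-12-17",
        "voice": "verse",  # assumption: one of the Realtime API's built-in voices
    },
)
resp.raise_for_status()
print(resp.json()["client_secret"]["value"])  # pass this to the WebRTC client
```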
In its last updates on Tuesday, OpenAI brought preference fine-tuning to its fine-tuning API. Preference fine-tuning compares pairs of model responses to "teach" a model to distinguish between preferred and "non-preferred" answers to questions. The company also launched an "early access" beta of official software developer kits for Go and Java.
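Concretely, preference fine-tuning (direct preference optimization, in OpenAI's documentation) takes JSONL examples that pair a preferred and a non-preferred completion for the same prompt. The sketch below, using the official Python SDK, shows one such example being uploaded and a job created with the "dpo" method; the sample texts, the beta hyperparameter, and the choice of base model are illustrative assumptions:

```python
import json
from openai import OpenAI

client = OpenAI()

# Each training example pairs a preferred and a non-preferred answer to the
# same prompt; the fine-tuned model learns to favor the former.
example = {
    "input": {"messages": [
        {"role": "user", "content": "Summarize this quarter's sales report."},
    ]},
    "preferred_output": [
        {"role": "assistant", "content": "Revenue rose 12% quarter over quarter, led by enterprise renewals."},
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "Sales went up. Anything else?"},
    ],
}

with open("prefs.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

training_file = client.files.create(file=open("prefs.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumption: a model listed as supporting preference tuning
    method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
)
print(job.id)
```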