OpenAI on Thursday announced its latest miniature AI model, GPT-4o mini, which the company says is cheaper and faster than OpenAI's current state-of-the-art AI models. GPT-4o mini will be released to developers starting today, and to consumers through the ChatGPT web and mobile apps. Enterprise users will be able to access it starting next week.
The company claims that GPT-4o mini outperforms industry-leading small AI models on reasoning tasks involving text and vision. As small AI models improve, they are gaining popularity among developers for their speed and low cost compared to larger models such as GPT-4 Omni and Claude 3.5 Sonnet. Small AI models are a convenient choice for the high-volume, simple tasks that developers may ask their AI models to perform repeatedly.
GPT-4o mini replaces GPT-3.5 Turbo as the smallest model offered by OpenAI. The company claims that its latest AI model scored 82% on MMLU, a benchmark that measures reasoning, according to data from Artificial Analysis, beating Gemini 1.5 Flash's 79% and Claude 3 Haiku's 75%. On MGSM, which measures mathematical reasoning, GPT-4o mini scored 87%, ahead of Flash's 78% and Haiku's 72%.
A comparison chart of small AI models from Artificial Analysis. Prices shown are a blend of input and output token costs. Image credit: Artificial Analysis
Additionally, according to OpenAI, GPT-4o mini is significantly cheaper than previous state-of-the-art models and over 60% cheaper than GPT-3.5 Turbo. Currently, GPT-4o mini supports text and vision in its API, and OpenAI says the model will also support video and audio capabilities in the future.
“To harness the power of AI everywhere in the world, we need to make models more affordable,” Olivier Godement, head of product and APIs at OpenAI, told TechCrunch in an interview. “We think GPT-4o mini is a big step in that direction.”
For developers building with OpenAI's API, GPT-4o mini is priced at 15 cents per million input tokens and 60 cents per million output tokens. The model has a context window of 128,000 tokens, roughly the length of a book, and a knowledge cutoff of October 2023.
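Those per-token prices translate into per-request costs as follows. The sketch below is a hypothetical helper (the constants come from the pricing figures above; the function name is our own, not part of OpenAI's SDK):

```python
# Estimate GPT-4o mini API costs from the published pricing:
# $0.15 per million input tokens, $0.60 per million output tokens.

INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single GPT-4o mini request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a request with 2,000 input tokens and 500 output tokens
# costs $0.0006 — well under a tenth of a cent.
print(f"${estimate_cost(2_000, 500):.4f}")
```

At these rates, a million such requests would cost around $600, which illustrates why small models are attractive for high-volume workloads.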
OpenAI didn't reveal the exact size of GPT-4o mini, but said it was roughly on par with other small AI models such as Llama 3 8B, Claude Haiku, and Gemini 1.5 Flash. The company claims that GPT-4o mini is faster, more cost-effective, and smarter than the industry-leading small models, based on pre-launch testing in LMSYS.org's chatbot arena. Early independent testing seems to back this up.
“Compared with similar models, GPT-4o mini is extremely fast, with an average output rate of 202 tokens per second,” Artificial Analysis co-founder George Cameron said in an email to TechCrunch. “This is more than 2x faster than GPT-4o and GPT-3.5 Turbo, making it an attractive option for speed-dependent use cases, such as many consumer applications and agent-based approaches that use LLMs.”
Separately, OpenAI on Thursday announced a new tool for its enterprise customers. In a blog post, OpenAI announced the Enterprise Compliance API to help companies in highly regulated industries such as finance, healthcare, legal services and government comply with logging and auditing requirements.
The company said these tools will enable administrators to audit and act on ChatGPT Enterprise data, and the API will provide a time-stamped record of interactions, including conversations, files uploaded, and workspace users.
OpenAI is also giving admins more granular control over workspace GPTs, custom versions of ChatGPT built for specific business use cases. Previously, admins could only fully allow or block GPT actions in a workspace; now workspace owners can create an approved list of domains that GPTs can interact with.