OpenAI releases a mini version of its latest AI model.
These smaller models are intended to be faster and cheaper than the full versions, which makes them especially useful for simple, high-volume tasks. That appeals to smaller developers who want a relatively lightweight, low-cost way to incorporate AI into their websites and apps without running up a large AI bill.
For context: OpenAI unveiled its latest flagship model, GPT-4o, in May. The “o” stands for “omni,” signaling that the model is designed to understand not only text but also audio and video.
Now a scaled-down version, GPT-4o mini, is here. It currently supports text and images, and OpenAI says video and audio capabilities will follow. The new mini model is more than 60% cheaper than GPT-3.5 Turbo, which it replaces as OpenAI's smallest model, and it outscores competing small models on MMLU, an industry benchmark for language understanding and reasoning.
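For developers curious what "incorporating AI" looks like in practice, here is a minimal sketch of calling GPT-4o mini through the OpenAI Python SDK. It assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set; the model name `gpt-4o-mini` comes from OpenAI's announcement, and the helper function is illustrative, not part of any official API.

```python
import os

def build_request(prompt: str) -> dict:
    """Assemble a chat-completion payload for a simple, high-volume text task.

    Illustrative helper (not part of the OpenAI SDK); the model name
    "gpt-4o-mini" is taken from OpenAI's announcement.
    """
    return {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    # Only runs when an API key is configured.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        **build_request("Summarize this support ticket in one sentence.")
    )
    print(resp.choices[0].message.content)
```

Because the mini model is priced for volume, this kind of one-off call is the sort of workload (summaries, classification, short replies) where a small model's lower per-token cost matters most.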
Hit the play button to find out more, and let us know what you think in the comments.