A common assumption is that companies like Google, OpenAI, and Anthropic, with their bottomless cash reserves and hundreds of top researchers, are the only ones that can build state-of-the-art foundation models. But as a famously leaked memo once put it, they “have no moat,” and AI2 demonstrated as much today with the release of Molmo, a multimodal AI model that rivals those companies' best while being small, free, and truly open source.
To be clear, Molmo (short for multimodal open language model) is a visual understanding engine, not a full-service chatbot like ChatGPT. It has no API, no enterprise integration, and it doesn't search the web on your behalf or for its own purposes. You can think of Molmo as the part of such a system that sees: it understands images and can describe them or answer questions about them.
Molmo (available in 72B, 7B, and 1B parameter variants) can, like other multimodal models, identify and answer questions about almost any everyday object or situation: How do I use this coffee maker? How many dogs in this picture are sticking out their tongues? Which options on this menu are vegan? What are the variables in this diagram? These are the kinds of visual comprehension tasks multimodal models have demonstrated for years, with varying degrees of success and latency.
What's different isn't necessarily what Molmo does (you can see it in the demo below, or test it here), but how it gets there.
Of course, visual understanding is a broad domain, ranging from counting sheep in a field to inferring a person's emotional state to summarizing a menu, so it's difficult to test or even describe quantitatively. But as AI2 CEO Ali Farhadi explained at a demo event at the lab's Seattle headquarters, it is possible to show that Molmo performs at least comparably to the models it's measured against.
“One of the things we're showing today is that open equals closed,” he said, “and small equals big.” (He clarified that he meant == in the sense of rough equivalence rather than strict equality, a distinction some readers will appreciate.)
A fairly consistent theme in AI development has been that bigger is better: more training data, more parameters in the resulting model, and more computing power to create and run it. But at a certain point you literally can't make it any bigger: you run out of data, or the compute costs and training times become so high as to be self-defeating. You have to make do with what you have, or better yet, do more with less.
Farhadi said Molmo is comparable in performance to models like GPT-4o, Gemini 1.5 Pro, and Claude 3.5 Sonnet, while weighing in, by the best available estimates, at roughly one-tenth their size.
Image credit: AI2
“There are dozens of benchmarks that people evaluate against. Scientifically, I don't like this game, but I had to show people a number,” he explained. “Our biggest model is a small model, the 72B, and it outperforms GPTs and Claudes and Geminis on those benchmarks. Again, take it with a grain of salt. Does this really mean it's better than them or not? I don't know. But at least to us, it means that we're playing the same game.”
If you want to try it out, check out the public demo, which also works on mobile. (If you'd rather not log in, you can edit the default prompt, or scroll up and hit “Edit” to swap in your own image.)
The secret is using less, but better, data. Instead of training on a library of billions of images that can't be quality-controlled, described, or deduplicated, AI2 curated and annotated a set of just 600,000. That's still a lot, of course, but compared with 6 billion it's a drop in the bucket, about a hundredth of a percent. That leaves out some of the long tail, but the selection process and an interesting annotation methodology yield very high-quality descriptions.
Why interesting? The annotators were shown an image and asked to describe it out loud. It turns out people describe things differently when they speak than when they write, and the results are not only accurate but conversational and useful. The image descriptions Molmo generates are rich and practical.
Nowhere is that more evident than in its new, and for at least a few days unique, ability to “point” at relevant parts of an image. Asked to count the dogs in a photo (there were 33), it put a dot on each of their faces. Asked to count the tongues, it put a dot on each tongue. This capability enables all sorts of new zero-shot actions, and crucially, it even works on web interfaces: without seeing a site's code, the model understands how to navigate a page, submit a form, and so on. (Rabbit recently announced a similar capability for its r1, due out next week.)
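For the curious, the pointing output in the public demo appears as lightweight XML-style tags, with coordinates expressed as percentages of the image's width and height. Here's a minimal sketch of turning those tags into pixel positions; the exact tag format shown in the comments is an assumption based on demo output and may vary between checkpoints.

```python
import re

# Molmo's pointing output looks roughly like:
#   <point x="54.2" y="33.1" alt="dog">dog</point>
# or, for multiple targets:
#   <points x1="25.0" y1="50.0" x2="75.0" y2="50.0" alt="dog">dogs</points>
# Coordinates are percentages of image width/height (assumption based on
# the public demo; the format may differ across checkpoints).

def parse_points(text: str, width: int, height: int) -> list[tuple[float, float]]:
    """Convert Molmo point tags into pixel coordinates."""
    points = []
    # Matches both x="..."/y="..." and the numbered x1="..."/y1="..." pairs.
    for match in re.finditer(r'\bx\d*="([\d.]+)"\s+y\d*="([\d.]+)"', text):
        x_pct, y_pct = float(match.group(1)), float(match.group(2))
        points.append((x_pct / 100 * width, y_pct / 100 * height))
    return points

tags = '<points x1="25.0" y1="50.0" x2="75.0" y2="50.0" alt="dog">dogs</points>'
print(parse_points(tags, 640, 480))  # [(160.0, 240.0), (480.0, 240.0)]
```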
Image credit: AI2
So why does this matter? Models are emerging almost daily. Google just announced several, OpenAI has a Dev Day coming up, Perplexity is constantly teasing something, and Meta is heavily promoting its latest Llama release.
Molmo is completely free and open source, and small enough to run locally: no APIs, no subscriptions, no water-cooled GPU cluster required. The point of building and releasing it, Farhadi said, is to let developers and creators build AI-powered apps, services, and experiences without having to seek permission from (and pay) one of the world's largest technology companies.
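To make “runs locally” concrete, here is a minimal inference sketch using Hugging Face Transformers. The allenai/Molmo-7B-D-0924 repo id and the custom process()/generate_from_batch() helpers follow AI2's model card at release; treat the exact names as assumptions, since remote-code APIs can change.

```python
# Minimal local-inference sketch for Molmo via Hugging Face Transformers.
# Repo id and helper names follow AI2's published model card (assumption).
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

repo = "allenai/Molmo-7B-D-0924"  # 7B variant; 72B and 1B were also released
processor = AutoProcessor.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto")
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto")

# Any image works; this placeholder URL happens to serve a dog photo.
url = "https://picsum.photos/id/237/536/354"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and prompt, then add a batch dimension.
inputs = processor.process(images=[image], text="Point to every dog in this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)

# Decode only the newly generated tokens, skipping the prompt.
new_tokens = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(new_tokens, skip_special_tokens=True))
```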
“We're targeting researchers, developers, app builders, people who don't know how to deal with these [large] models. A key principle in targeting such a wide range of audience is a principle we've been pushing for a while: make it more accessible,” Farhadi said. “We're releasing every single thing we've done. That includes data, cleaning, annotations, training, code, checkpoints, evaluation. We're releasing everything we've developed.”
He added that he expects people to start building with the dataset and code right away, including well-funded rivals, who routinely vacuum up any “publicly available” data. (“Whether they mention it or not is a whole different story,” he added.)
The world of AI is evolving fast, but the big players find themselves in a race to the bottom on pricing even as they raise hundreds of millions of dollars to cover their costs. If similar capability is available in free, open-source options, is the value those companies offer really so astronomical? At the very least, Molmo shows that while it's an open question whether the emperor has any clothes, he definitely doesn't have a moat.