State-of-the-art language models like GPT-4o and Gemini 1.5 Pro are touted as “multimodal,” able to understand not just text but also images and speech. But new research suggests that these models don't actually see in the way we might expect them to. In fact, they may not see at all.
Let's be clear up front: no one is claiming that their AI can see the way humans do (well, maybe some are). But the marketing and benchmarks used to promote these models lean on phrases like “visual capabilities” and “visual understanding,” describing how the models see and analyze images and video so they can do everything from solving homework problems to watching a game.
So while these companies' claims are carefully worded, it's clear that they want to convey that their models see, in some sense of the word. And they do, but in much the same way they do math or write stories: by matching patterns in the input data to patterns in their training data. That causes the models to fail at seemingly trivial tasks, the same way they fail at something like picking a random number.
Researchers from Auburn University and the University of Alberta conducted a somewhat informal but systematic study of current AI models' visual understanding. They tasked the largest multimodal models with a series of very simple vision tasks, such as determining whether two shapes overlap, how many pentagons are in a picture, and which letters in a word are circled. (You can see the summary micropage here.)
These are the sorts of questions that even a first-grader could get right, yet they posed significant challenges for the AI models.
“Our seven tasks are extremely simple and humans can perform them with 100% accuracy. We would expect AI to be able to do the same, but right now that's not the case,” co-author Anh Nguyen said in an email to TechCrunch. “Our message is: 'Look, these best models still fail.'”
Image credit: Rahmanzadehgervi et al.
Take the overlapping shapes test, one of the simplest visual reasoning tasks imaginable: two circles that either overlap slightly, just touch, or sit some distance apart. The models couldn't consistently get it right. GPT-4o answered correctly more than 95% of the time when the circles were far apart, but at zero or small distances it got it right only 18% of the time. Gemini Pro 1.5 performed best, yet still only managed 7 out of 10 at close distances.
(The figures are not intended to show the exact performance of the models, but rather to illustrate how inconsistent the models are across conditions. Statistics for each model are provided in the paper.)
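To make the setup concrete, here is a minimal sketch of how a test like this could be built, assuming Pillow for the drawing; it is not the researchers' actual code, and the helper name `two_circles`, the prompt wording, and the specific gap values are my own illustration. It renders two circles at a controlled edge-to-edge gap, records the ground truth, and leaves a placeholder for sending each image to whatever multimodal model is under test.

```python
from PIL import Image, ImageDraw

def two_circles(gap, radius=60, size=(400, 200)):
    """Render two circle outlines whose edge-to-edge gap is roughly `gap`
    pixels (a negative gap means they overlap). Returns (image, overlaps?)."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    cy = size[1] // 2
    x_left = size[0] // 2 - radius - gap // 2    # center of the left circle
    x_right = size[0] // 2 + radius + gap // 2   # center of the right circle
    for cx, color in ((x_left, "blue"), (x_right, "green")):
        draw.ellipse([cx - radius, cy - radius, cx + radius, cy + radius],
                     outline=color, width=4)
    return img, gap < 0

prompt = "Do the two circles in this image overlap? Answer yes or no."
for gap in (-30, -5, 0, 5, 50, 150):
    img, overlaps = two_circles(gap)
    img.save(f"circles_gap_{gap}.png")
    # Each saved image would then go to the model under test along with
    # `prompt`, and its answer would be scored against `overlaps`.
    print(f"gap={gap:>4}px  ground truth: {'overlap' if overlaps else 'no overlap'}")
```

The only thing that changes between images is the gap, which is exactly why a sharp drop in accuracy at small distances is so telling.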
Or how about counting the number of interlocking circles in an image? I mean, an above-average horse could do this.
Image credit: Rahmanzadehgervi et al.
With 5 rings, the models get it right 100% of the time. Great work, visual AI. But adding one more ring completely derails the results. Gemini gets lost and doesn't get it right even once. Sonnet-3.5 answers 6… a third of the time, and GPT-4o just under half the time. Adding another ring makes it even harder, but adding one more after that makes it easier for some models.
The point of this experiment is to show that, whatever these models are doing, it doesn't really correspond to what we mean by seeing. After all, even if they saw poorly, we wouldn't expect the success rates on the 6-, 7-, 8- and 9-ring images to vary so widely.
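The counting setup can be sketched the same way. Here is a minimal, hypothetical variant (again assuming Pillow, with `interlocking_rings` and the single-row layout being my own illustration rather than the paper's) that renders 5 through 9 interlocking rings and keeps the ground-truth count for each image.

```python
from PIL import Image, ImageDraw

def interlocking_rings(n, radius=50, overlap=20):
    """Draw `n` ring outlines in a row, each overlapping its neighbour."""
    step = 2 * radius - overlap                    # spacing between ring centers
    img = Image.new("RGB", (2 * radius + (n - 1) * step + 40, 2 * radius + 40), "white")
    draw = ImageDraw.Draw(img)
    cy = radius + 20
    for i in range(n):
        cx = radius + 20 + i * step
        draw.ellipse([cx - radius, cy - radius, cx + radius, cy + radius],
                     outline="black", width=5)
    return img

prompt = "How many circles are in this image? Answer with a single number."
for n in (5, 6, 7, 8, 9):
    interlocking_rings(n).save(f"rings_{n}.png")
    # As with the circle-pair sketch above, each image would be sent to the
    # model with `prompt` and the reply compared against the known count `n`.
```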
Similar patterns emerged in the other tests: it wasn't that the models were seeing or reasoning better or worse, but that there seemed to be some other reason why they could count in one case and not in another.
Of course, one possible answer is staring us right in the face: why should the models be so good at getting an image of five circles right, yet fail so badly at the rest, or at five pentagons? (To be fair, Sonnet-3.5 did reasonably well on those.) Because the training data for all of these models prominently features an image of five circles: the Olympic rings.
Image credit: IOC
Not only is this logo repeated over and over in the training data, it's likely described at length in alt text, usage guidelines, and articles about it. But where in their training data would you find 6 interlocking rings, or 7? As their answers suggest: nowhere. The models have no idea what they're “looking at,” and no actual visual understanding of what rings, overlap, or any of these concepts are.
I asked Nguyen what he thought of this “blindness” that he and his co-authors accuse the models of having. Like other terms we use, it has an anthropomorphic quality that isn't quite accurate, yet is hard to do without.
“I agree. There are many human definitions of 'blindness,' and we don't yet have a word to describe this kind of blindness/insensitivity in an AI to the images we're showing it,” Nguyen wrote. “Currently, we don't have the technology to accurately visualize what the model sees, and its behavior is a complex function of input text prompts, input images, and billions of weights.”
He speculated that the model is not completely blind, and that the visual information it extracts from the images is approximate and abstract, such as “there is a circle on the left side.” However, the model has no way of making visual judgments, so it responds like someone who has information about the image but cannot actually see it.
As a final illustration, Nguyen sent along the following, which supports the hypothesis above:
Image credit: Anh Nguyen
When a blue circle and a green circle overlap (which the question prompts the model to take as fact), a cyan shaded area often results, as in a Venn diagram. If someone asked you this question, you, or any smart person, might well give the same answer, because with your eyes closed it's entirely plausible. But no one with their eyes open would answer that way.
Does all this mean that these “vision” AI models are useless? Not at all. Their inability to perform basic reasoning about certain images speaks to their fundamental capabilities, not their specific ones. Each of these models can be highly accurate about human behavior and facial expressions, as well as photos of everyday objects and situations, which is in fact what they are intended to interpret.
If we relied on AI companies' marketing to tell us everything these models can do, we'd be led to believe they have 20/20 vision. No matter how accurately a model can tell us whether a person is sitting, walking, or running, we need research like this to show that it does so without “seeing” (if we can even say that) in the sense that we mean.