Most AI benchmarks don't tell you much. They ask questions that can be solved by rote memorization, or cover topics that are irrelevant to the majority of users.
That's why some AI enthusiasts are turning to games as a way to test an AI's problem-solving skills.
Freelance AI developer Paul Calcraft built an app that allows two AI models to play a Pictionary-like game with each other. One model doodles and the other model tries to guess what the doodle represents.
“I thought this looked like a lot of fun and could be interesting from a model functionality standpoint,” Calcraft said in an interview with TechCrunch. “So I sat inside on a cloudy Saturday and got it done.”
Calcraft was inspired by a similar project by British programmer Simon Willison, who tasked models with rendering a vector illustration of a pelican riding a bicycle. Like Calcraft, Willison chose a challenge he believed would force models to “think” beyond the content of their training data.
Image credit: Paul Calcraft
“The goal is to have a non-gamable benchmark,” Calcraft said. “Memorizing specific answers or simple patterns you've seen before during training won't help you beat this benchmark.”
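In rough outline, the setup is two chat models wired together in a loop: one produces a drawing from a secret word, the other guesses from the drawing alone. The sketch below is not Calcraft's code; it assumes an OpenAI-compatible chat API, hypothetical model choices, and passes the "drawing" around as raw SVG text instead of a rendered image, purely to illustrate the handoff.

```python
# A minimal sketch of a two-model "Pictionary" loop (illustrative only).
# Assumptions: an OpenAI-compatible chat API, and that passing SVG markup
# as text is an acceptable stand-in for rendering an actual image.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
DRAWER_MODEL = "gpt-4o"        # hypothetical model choices
GUESSER_MODEL = "gpt-4o-mini"

def play_round(secret_word: str) -> bool:
    # Drawer: produce an SVG doodle of the secret word, without writing the word.
    drawing = client.chat.completions.create(
        model=DRAWER_MODEL,
        messages=[{
            "role": "user",
            "content": f"Draw '{secret_word}' as a simple SVG. "
                       "Respond with SVG markup only and never write the word itself.",
        }],
    ).choices[0].message.content

    # Guesser: sees only the drawing, never the secret word.
    # (A faithful version would rasterize the SVG and send it to a vision model.)
    guess = client.chat.completions.create(
        model=GUESSER_MODEL,
        messages=[{
            "role": "user",
            "content": "Here is an SVG doodle:\n" + drawing +
                       "\nIn one or two words, what does it depict?",
        }],
    ).choices[0].message.content

    return secret_word.lower() in guess.lower()

if __name__ == "__main__":
    print("guessed correctly:", play_round("pelican"))
```

Swapping different models into the drawer and guesser roles is what turns a party game into a rough comparison of how well each one communicates and interprets.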
16-year-old Adonis Singh believes Minecraft also falls into this “ungameable” category. Using Microsoft's Project Malmo, he created Mcbench, a tool that gives models control of a Minecraft character and tests their ability to design structures.
“I think Minecraft tests models for resourcefulness and gives them more agency,” he told TechCrunch. “It's not as restrictive and saturated as [other] benchmarks.”
Using games to benchmark AI is nothing new; the idea goes back decades. Mathematician Claude Shannon argued in 1949 that games like chess were a worthy challenge for “intelligent” software. More recently, Alphabet's DeepMind developed a model that learned to play Pong and Breakout, OpenAI trained an AI to compete in Dota 2 matches, and Meta designed an algorithm that can beat even professional Texas Hold'em players.
What's different now is that hobbyists are connecting large language models (LLMs), which can analyze text, images, and more, to games to probe how well they reason.
There are many LLMs out there, from Gemini and Claude to GPT-4o, and each has a different “vibe,” so to speak: they “feel” different from one interaction to the next, a phenomenon that is difficult to quantify.
Note the typo: there is no such model as Claude 3.6 Sonnet. Image credit: Adonis Singh
“LLMs are known to be sensitive to the way certain questions are asked, and are just generally unreliable and difficult to predict,” Calcraft said.
Matthew Guzdial, an AI researcher and professor at the University of Alberta, says games provide a visual and intuitive way to compare model performance and behavior, as opposed to text-based benchmarks.
“You can think of every benchmark as providing a different simplification of reality that focuses on a particular type of problem, such as reasoning or communication,” he said. “Games are just another way to use AI to make decisions, so people are using them like any other approach.”
Those familiar with the history of generative AI will note how similar Pictionary is to generative adversarial networks (GANs). In a GAN, a creator model sends images to a discriminator model, which evaluates the images.
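For the curious, that analogy can be made concrete with a toy GAN. The sketch below, a minimal PyTorch example on made-up one-dimensional data rather than images, shows the adversarial loop: a generator (the “creator”) tries to fool a discriminator that learns to score real versus generated samples. It is only an illustration of the idea, not anything from Calcraft's project.

```python
# Toy GAN: the generator learns to mimic samples from N(4, 1.5),
# while the discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data
    fake = generator(torch.randn(64, 8))     # generator's attempt

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call its samples real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generated mean should drift toward 4 as the generator improves.
print("generated mean:", generator(torch.randn(1000, 8)).mean().item())
```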
Calcraft believes Pictionary can capture an LLM's grasp of concepts such as shape, color, and prepositions (for example, the meaning of “in” versus “on”). He stops short of calling the game a reliable test of reasoning, but he argued that winning requires strategy and the ability to interpret clues, neither of which comes easily to models.
“I also really like the almost adversarial, GAN-like nature of Pictionary, where the two models take on different roles: one drawing and the other guessing,” he said. “The best thing to draw isn't the most artistic image, but the one that most clearly conveys the idea to an audience of other LLMs (including faster, less capable models).”
“The drawing game is a toy problem, not something immediately practical or realistic,” Calcraft cautioned. “That said, I believe spatial understanding and multimodality are critical elements for the advancement of AI, so LLM Pictionary could be a small early step on that journey.”
Image credit: Adonis Singh
Singh believes Minecraft is also a useful benchmark for measuring LLM reasoning. “The results from the models I've tested so far align almost perfectly with how much I trust them on reasoning-related tasks,” he said.
Others are less sure.
Mike Cook, a researcher at Queen Mary University of London who specializes in AI, doesn't think Minecraft is particularly special as an AI testbed.
“I think part of Minecraft's appeal comes from people outside the gaming world, who perhaps assume that because it looks like the ‘real world,’ it is closer to real-world reasoning and behavior,” Cook told TechCrunch. “From a problem-solving perspective, it's not that different from video games like Fortnite, Stardew Valley, or World of Warcraft. It just dresses up a similar set of everyday tasks.”
Cook points out that even the best game-playing AI systems generally don't adapt well to new environments and can't easily solve problems they haven't seen before. For example, a model that excels at Minecraft is unlikely to play Doom with any real skill.
“I think the great thing about Minecraft from an AI perspective is the procedural world, which means very weak reward signals and unpredictable challenges,” Cook continued. “But it's not as representative of the real world as other video games.”
Even so, there is certainly something fascinating about watching an LLM build a castle.