A well-known test of artificial general intelligence (AGI) is close to being solved. However, the test's creators argue that this points to flaws in the test's design rather than a genuine research breakthrough.
In 2019, François Chollet, a leading figure in the AI world, introduced the ARC-AGI benchmark, short for “Abstraction and Reasoning Corpus for Artificial General Intelligence.” ARC-AGI is designed to assess whether an AI system can efficiently acquire new skills outside the data it was trained on, and Chollet argues it remains the only AI test that measures progress toward general intelligence (although other tests have been proposed as well).
Until this year, the best-performing AIs could solve just under a third of ARC-AGI's tasks. Chollet blames this on the industry's focus on large language models (LLMs), which he believes lack real “reasoning” capabilities.
“LLMs struggle to generalize because they are entirely reliant on memorization,” he said in a series of posts on X in February. “They break down on anything that wasn’t in their training data.”
As Chollet points out, LLMs are statistical machines. Trained on a great many examples, they learn patterns in those examples and use them to make predictions, such as how “to whom” in an email typically precedes “it may concern.”
Chollet argues that while LLMs may be able to memorize “reasoning patterns,” they are unlikely to be able to generate “new reasoning” based on novel situations. “If you have to train on many examples of a pattern, even implicitly, in order to learn a reusable representation for it, you end up memorizing it,” Chollet argued in another post.
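To make the “statistical machine” claim concrete, here is a minimal sketch of pattern-based prediction. It uses a toy bigram counter rather than a neural network, so it is a deliberately crude stand-in for how LLMs actually work; the tiny corpus and the predict_next helper are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus. Real LLMs learn from vastly larger data with neural networks,
# but the underlying idea is similar: predict what typically follows what.
corpus = [
    "to whom it may concern",
    "to whom it may concern",
    "to whom this letter finds you",
]

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("whom"))   # "it" -- a pattern seen in the training data
print(predict_next("hello"))  # None -- breaks down outside the training data
```

The toy makes Chollet's complaint visible in miniature: the “predictions” are replayed statistics from the training data, and anything outside that data yields nothing.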
To push research beyond LLMs, Chollet and Zapier co-founder Mike Knoop launched a $1 million contest in June to build an open source AI capable of beating ARC-AGI. Out of 17,789 submissions, the highest score was 55.5%. While that fell short of the 85% “human-level” threshold needed to win, it was roughly 20 percentage points higher than the top score in 2023.
However, Knoop says this doesn't mean we are 20% closer to AGI.
Today we are announcing the winners of the ARC Prize 2024. We will also publish an extensive technical report on what we learned from the competition (linked in the following tweet).
The state of the art rose from 33% to 55.5%, the largest single-year increase since 2020.
— François Chollet (@fchollet) December 6, 2024
In a blog post, Knoop said that many ARC-AGI submissions were able to “brute force” their way to a solution, and that a “vast majority” of ARC-AGI tasks “[don't] carry very useful signal towards general intelligence.”
ARC-AGI consists of puzzle-like problems in which an AI must generate the correct “answer” grid from a collection of differently colored squares. The problems are designed to force the AI to adapt to new problems it has never seen before, but it is not clear that they achieve this.
An ARC-AGI benchmark task. The model must solve the “problem” in the top row; the bottom row shows the solution. Image credit: ARC-AGI
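For a sense of what these tasks, and the “brute force” Knoop describes, look like in practice, here is a minimal sketch. It assumes the publicly released ARC task format, in which each task is a JSON file with “train” and “test” lists of input/output pairs, and each grid is a list of rows of integers 0–9 standing for colors. The handful of symmetry transforms and the file name example_task.json are illustrative assumptions, not anyone's actual winning entry.

```python
import json
from pathlib import Path

import numpy as np

# A tiny, purely illustrative set of candidate grid transformations.
CANDIDATES = {
    "identity": lambda g: g,
    "flip_horizontal": lambda g: np.fliplr(g),
    "flip_vertical": lambda g: np.flipud(g),
    "rotate_90": lambda g: np.rot90(g),
    "rotate_180": lambda g: np.rot90(g, 2),
    "transpose": lambda g: g.T,
}

def solve_by_brute_force(task):
    """Return the first candidate transform consistent with every training pair."""
    for name, fn in CANDIDATES.items():
        consistent = all(
            np.array_equal(fn(np.array(pair["input"])), np.array(pair["output"]))
            for pair in task["train"]
        )
        if consistent:
            predictions = [fn(np.array(pair["input"])).tolist() for pair in task["test"]]
            return name, predictions
    return None, None

if __name__ == "__main__":
    # Hypothetical file name; the structure matches the public ARC dataset.
    task = json.loads(Path("example_task.json").read_text())
    name, predictions = solve_by_brute_force(task)
    print("matched transform:", name)
```

A transform set this small solves almost nothing on the real benchmark; the point is that any approach which simply enumerates candidate programs and checks them against the training pairs counts as brute force in Knoop's sense, and how far such search can get says something about how much signal the tasks carry.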
“[ARC-AGI] hasn't changed since 2019, and it's not perfect,” Knoop acknowledged in the post.
Chollet and Knoop have also faced criticism for overselling ARC-AGI as a benchmark for AGI, at a time when the very definition of AGI is hotly contested. One OpenAI staff member recently argued that if AGI is defined as an AI that is “better than most humans at most tasks,” then AGI has “already” been achieved.
Knoop and Chollet say they plan to release a second-generation ARC-AGI benchmark to address these issues, in time for the 2025 competition. “We will continue to focus our research community's efforts on what we believe to be the most important unsolved problems in AI and accelerate our timeline for AGI,” Chollet wrote in an X post.
The fix probably won't be easy. If the shortcomings of the first ARC-AGI test are any indication, the definition of intelligence for AI will be just as unwieldy and inflammatory as it is for humans.