If you've ever tried to use ChatGPT as a calculator, you've almost certainly noticed its computational problems. Chatbots aren't good at math, and ChatGPT is hardly alone in this.
Anthropic's Claude can't solve basic word problems. Gemini fails to understand quadratic equations. And Meta's Llama struggles with simple addition.
So why can these bots write monologues yet still stumble on grade-school math?
Tokenization has something to do with it. Tokenization, the process of dividing data into chunks (for example, splitting the word "fantastic" into the syllables "fan," "tas," and "tic"), helps AI densely encode information. But because tokenizers (the AI models that do the tokenizing) don't really understand what numbers are, they often destroy the relationships between digits. For example, a tokenizer might treat the number "380" as a single token but represent "381" as a pair of tokens ("38" and "1").
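To see how that kind of split can happen, here is a minimal sketch of a greedy longest-match tokenizer. The toy vocabulary is invented for illustration and is not any real model's vocabulary:

```python
# Toy greedy longest-match tokenizer. The vocabulary below is invented
# for illustration; real tokenizers learn theirs from training data.
TOY_VOCAB = {"380", "38", "1", "3", "8", "0"}

def tokenize(text: str) -> list[str]:
    """Split text into the longest vocabulary entries, left to right."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match starting at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in TOY_VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

print(tokenize("380"))  # ['380'] -- one token
print(tokenize("381"))  # ['38', '1'] -- the number is split apart
```

Because "380" happens to be in the vocabulary while "381" is not, two numbers that differ by one get completely different representations, which is exactly the relationship-destroying behavior described above.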
But tokenization isn't the only reason math is a weakness for AI.
AI systems are statistical machines. Trained on many examples, they learn patterns in those examples to make predictions, such as that the phrase "to whom" tends to precede "it may concern" in an email. Shown a multiplication problem like 57,897 x 12,832, ChatGPT, having seen many multiplication problems, will likely guess that the product of a number ending in "7" and a number ending in "2" ends in "4." But it will struggle with the middle of the answer: ChatGPT gave 742,021,104, while the correct product is 742,934,304.
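The last-digit pattern really does hold, which is why the guess looks plausible; it's the middle digits, which depend on carrying every intermediate result, that go wrong. A quick check:

```python
# Pattern matching gets the last digit right...
a, b = 57897, 12832
assert (a % 10) * (b % 10) % 10 == 4  # ...7 times ...2 ends in 4

exact = a * b                 # Python computes this exactly: 742934304
chatgpt_guess = 742_021_104   # the answer ChatGPT gave

print(exact)
print(chatgpt_guess % 10 == exact % 10)  # True: last digit matches
print(chatgpt_guess == exact)            # False: the middle is wrong
```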
Yuntian Deng, an assistant professor at the University of Waterloo who specializes in AI, thoroughly benchmarked ChatGPT's multiplication capabilities in research earlier this year. He and his coauthors found that the default model, GPT-4o, has difficulty multiplying two numbers that each have more than four digits (for example, 3,459 x 5,284).
“GPT-4o struggles with multi-digit multiplications, achieving less than 30% accuracy over 4-digit by 4-digit problems,” Deng told TechCrunch. “Multi-digit multiplication is difficult for language models because mistakes in intermediate steps can add up and lead to inaccurate final results.”
Is OpenAI's o1 a good calculator? We tested it on up to 20x20 multiplication — o1 solves up to 9x9 multiplication with decent accuracy, while gpt-4o struggles beyond 4x4. For context, this task is solvable by a small LM using implicit CoT with stepwise internalization. 1/4 pic.twitter.com/et5DB9bhNL
— Yuntian Deng (@yuntiandeng) September 17, 2024
So will math skills escape ChatGPT forever, or is there reason to believe that bots might one day be as good at numbers as humans (or TI-84s)?
Deng is hopeful. In the study, he and his colleagues also tested o1, OpenAI's "reasoning" model recently introduced to ChatGPT. o1, which "thinks through" problems step by step before answering, performed much better than GPT-4o, getting nine-digit by nine-digit multiplication problems right about half the time.
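One way to see why intermediate steps help: pencil-and-paper long multiplication decomposes the problem into one small partial product per digit, each easy on its own. The sketch below illustrates that decomposition (it is an analogy for step-by-step reasoning, not how o1 actually computes):

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply via explicit partial products, like pencil-and-paper
    long multiplication: each step is a small, easy sub-problem."""
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * 10 ** place  # one partial product per digit of b
        total += partial  # a mistake in any step would carry into the answer
    return total

print(long_multiply(57897, 12832))  # 742934304, matching a * b exactly
```

The flip side, as Deng notes, is that a single slip in any intermediate step propagates to the final result, which is why accuracy degrades as the number of digits, and therefore steps, grows.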
“The model may be solving the problem in a different way than how we would solve it manually,” Deng says. “We were interested in the internal approach of the model and how it differs from human reasoning.”
Deng believes this progress suggests that at least some math problems, multiplication among them, will eventually be "completely solved" by systems like ChatGPT. "This is a well-defined task with known algorithms," Deng said. "We are already seeing significant improvements from GPT-4o to o1, and it is clear that reasoning enhancements are making a difference."
Just don't throw out your calculator yet.