Quantization, one of the most widely used techniques for making AI models more efficient, has limits, and the industry may be fast approaching them.
In the context of AI, quantization means reducing the number of bits (the smallest units a computer can process) needed to represent information. Consider this analogy: if someone asks you the time, you'll probably say “noon” rather than “twelve hundred hours, one second and four milliseconds.” Both answers are correct, but one is more precise. How much precision you actually need depends on the context.
AI models contain several components that can be quantized, in particular their parameters, the internal variables a model uses to make predictions and decisions. That matters because a model performs millions of calculations every time it runs. Quantized models, with fewer bits representing their parameters, are mathematically, and therefore computationally, less demanding. (To be clear, this is a different process from “distillation,” which is a more involved and selective pruning of parameters.)
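To make that concrete, here is a minimal sketch in Python (an illustration, not code from the study) of what “fewer bits per parameter” buys:

```python
# Illustrative only: the same set of "parameters" stored in half as many bits
# takes half the memory, and each value changes only slightly.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.normal(scale=0.02, size=1_000_000).astype(np.float32)  # stand-in for model parameters
weights_fp16 = weights_fp32.astype(np.float16)                            # same values in 16 bits instead of 32

print(f"32-bit storage: {weights_fp32.nbytes / 1e6:.1f} MB")  # 4.0 MB
print(f"16-bit storage: {weights_fp16.nbytes / 1e6:.1f} MB")  # 2.0 MB
max_err = np.abs(weights_fp32 - weights_fp16.astype(np.float32)).max()
print(f"largest change to any single weight: {max_err:.1e}")
```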
However, quantization may have more tradeoffs than previously assumed.
The ever-shrinking model
A study by researchers at Harvard University, Stanford University, MIT, Databricks, and Carnegie Mellon University found that quantized models perform worse when the original, unquantized version was trained on a large amount of data over a long period. In other words, at a certain point it may actually be better to train a smaller model than to shrink down a big one.
That could be bad news for AI companies that train extremely large models (known to improve answer quality) and then quantize them to make them cheaper to serve.
The effects are already visible. A few months ago, developers and academics reported that quantizing Meta's Llama 3 models tended to be “more harmful” than quantizing other models, potentially because of the way they were trained.
“In my opinion, the number one cost for everyone in AI is and will continue to be inference, and our work shows that one important way of reducing it will not work forever,” Tanishq Kumar, a Harvard mathematics student and the paper's first author, told TechCrunch.
Contrary to popular belief, running inference on an AI model (using it, as when ChatGPT answers a question) is often more expensive in aggregate than training it. Consider that Google spent an estimated $191 million to train one of its flagship Gemini models, certainly a hefty sum. But if the company were to use a model to generate just 50-word answers to half of all Google Search queries, it would spend roughly $6 billion a year.
Leading AI labs have embraced training models on massive datasets on the assumption that “scaling up” (increasing the amount of data and compute used in training) will make AI increasingly capable.
For example, Meta trained Llama 3 on a set of 15 trillion tokens. (Tokens represent bits of raw data; 1 million tokens is equivalent to about 750,000 words.) The previous generation, Llama 2, was trained with “only” 2 trillion tokens.
There is evidence that scaling up ultimately leads to diminishing returns. Anthropic and Google recently reportedly trained huge models that fell short of internal benchmark expectations. However, there are few signs that the industry is ready to meaningfully move away from these entrenched scaling approaches.
How precise, exactly?
So, if labs are reluctant to train models on smaller datasets, is there a way to make models less susceptible to this degradation? Possibly. Kumar and his co-authors say they found that training models in “low precision” can make them more robust. Bear with me for a moment as we dig in a bit.
“Precision” here refers to the number of digits a numeric data type can represent accurately. A data type is a collection of data values, usually specified by a set of possible values and allowed operations; the data type FP8, for example, uses only 8 bits to represent a floating-point number.
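As a rough illustration (NumPy has no FP8 type, so 32-bit and 16-bit floats stand in here), fewer bits directly translates into fewer digits you can trust:

```python
# Illustrative only: the same value, 1/3, held at two different precisions.
import numpy as np

x = 1 / 3
print(f"{float(np.float32(x)):.10f}")  # 0.3333333433 -> roughly 7 reliable decimal digits
print(f"{float(np.float16(x)):.10f}")  # 0.3332519531 -> roughly 3 reliable decimal digits
```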
Most models today are trained at 16-bit, or “half,” precision and then “post-train quantized” to 8-bit precision: certain model components (its parameters, for instance) are converted to a lower-precision format at the cost of some accuracy. Think of it like doing the math to several decimal places and then rounding off to the nearest tenth, often getting you the best of both worlds.
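Here is a minimal sketch of that recipe (a generic per-tensor scheme for illustration, not any particular lab's pipeline): half-precision weights are mapped to 8-bit integers plus a single scale factor, then reconstructed at inference time.

```python
# Generic post-training quantization sketch: fp16 weights -> int8 values plus one scale.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.02, size=4096).astype(np.float16)      # weights as trained, at half precision

scale = float(np.abs(weights).max()) / 127.0                        # one scale for the whole tensor (symmetric)
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)   # the 8-bit storage format

dequantized = q.astype(np.float32) * scale                          # values the quantized model computes with
max_err = np.abs(weights.astype(np.float32) - dequantized).max()
print(f"storage: {weights.nbytes} bytes -> {q.nbytes} bytes, worst-case weight error {max_err:.1e}")
```

Only the storage format changes; the rounding step is where the accuracy is sacrificed.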
Hardware vendors such as Nvidia are pushing to reduce the precision of quantized model inference. The company's new Blackwell chips support 4-bit precision, specifically a data type called FP4. Nvidia touts this as a boon for memory- and power-constrained data centers.
However, extremely low quantization precision may not be desirable. According to Kumar, unless the original model has a very large number of parameters, precision below 7 or 8 bits can significantly degrade quality.
If this all seems a little technical, don't worry; it is. But the takeaway is simply that AI models are not fully understood, and familiar shortcuts that work for many kinds of computation don't work here. You wouldn't say “noon” if someone asked when you started a 100-meter dash, right? It's not quite as obvious as that, of course, but the idea is the same.
“The key point of our work is that there are limitations you cannot naively get around,” Kumar concluded. “We hope our work adds nuance to a discussion that often pushes toward ever-lower default precisions for training and inference.”
Kumar acknowledges that his and his colleagues' study was relatively small in scale; they plan to test it with more models in the future. But he believes at least one insight will hold: there is no free lunch when it comes to reducing inference costs.
“Bit precision matters, and it's not free,” he said. “You can't reduce it forever without the model suffering. Models have finite capacity, so rather than trying to cram a quadrillion tokens into a small model, I think far more effort will go into careful curation and filtering of data, so that only the highest-quality data goes into smaller models. I'm optimistic that new architectures that deliberately aim to make low-precision training stable will be important in the future.”