Common techniques for making AI more efficient have drawbacks

By TechBrunch | November 17, 2024 | 6 min read

Quantization, one of the most widely used techniques for making AI models more efficient, has limits, and the industry may be fast approaching them.

In the context of AI, quantization refers to reducing the number of bits (the smallest unit a computer can process) needed to represent information. Consider this analogy: if someone asks you the time, you'd probably say "noon" rather than "twelve hundred hours, one second and four milliseconds." That's quantization. Both answers are correct, but one is slightly more precise. How much precision you actually need depends on the context.
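
To make the analogy concrete, the sketch below (a hypothetical illustration in Python, not taken from the research discussed here) squeezes a handful of 32-bit floating-point values into 8-bit integers and back; the small differences at the end are the "noon" versus exact-time gap.

    import numpy as np

    # Minimal illustration of quantization: map float32 values onto 8-bit
    # integers with a single scale factor, then map them back.
    weights = np.array([0.012, -0.73, 0.25, 1.01], dtype=np.float32)

    scale = np.abs(weights).max() / 127                     # one scale for the whole tensor
    quantized = np.round(weights / scale).astype(np.int8)   # 8 bits per value
    dequantized = quantized.astype(np.float32) * scale      # approximate reconstruction

    print(quantized)              # [  2 -92  31 127]
    print(dequantized - weights)  # small rounding errors introduced by quantization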

An AI model consists of several components that can be quantized, in particular its parameters, the internal variables the model uses to make predictions and decisions. This matters because a model performs millions of calculations at runtime: quantized models, with fewer bits representing their parameters, are less demanding mathematically and therefore computationally. (To be clear, this is a different process from "distillation," which is a more involved and selective pruning of parameters.)
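
A rough back-of-the-envelope illustration of why fewer bits per parameter matters is shown below; the 8-billion-parameter figure is an assumption chosen for round numbers, not a reference to any particular model.

    # Memory needed just to store the parameters of a hypothetical
    # 8-billion-parameter model at different precisions.
    params = 8e9
    for name, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("FP4", 4)]:
        gib = params * bits / 8 / 2**30       # bits -> bytes -> GiB
        print(f"{name}: ~{gib:.1f} GiB")
    # FP32: ~29.8 GiB, FP16: ~14.9 GiB, INT8: ~7.5 GiB, FP4: ~3.7 GiB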

However, quantization may have more tradeoffs than previously assumed.

The ever-shrinking model

A study by researchers at Harvard, Stanford, MIT, Databricks, and Carnegie Mellon University found that quantized models perform worse if the original, unquantized version of the model was trained over a long period on large amounts of data. In other words, at some point it may actually be better to simply train a smaller model than to shrink a large one.

This could be bad news for AI companies that train very large models (known to improve the quality of answers) and then quantize them to make them less expensive to serve.

The effects are already visible. A few months ago, developers and academics reported that quantizing Meta's Llama 3 models tended to be "more harmful" than quantizing other models, possibly because of the way Llama 3 was trained.

"In my opinion, the biggest cost for everyone in AI is, and will continue to be, inference, and our work shows that one important way of reducing it will not work forever," Kumar, a mathematics student at Harvard and an author of the paper, told TechCrunch.

Contrary to popular belief, running inference on an AI model (using it to produce output, as when ChatGPT answers a question) is often more expensive in aggregate than training it. Consider, for example, that Google spent an estimated $191 million to train one of its flagship Gemini models, certainly a hefty sum. But if the company were to use a model to generate 50-word answers to just half of all Google Search queries, it would spend roughly $6 billion a year.
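
The article does not show how the $6 billion estimate is derived, but a back-of-the-envelope calculation along the following lines produces a figure of that magnitude; every number below (daily search volume, the share of queries answered, and the serving cost per thousand words) is an assumption for illustration only.

    # Hypothetical reconstruction of the annual inference bill.
    searches_per_day = 9e9      # assumed daily Google searches
    share_answered = 0.5        # "half of all Google search queries"
    words_per_answer = 50       # "50-word answers"
    cost_per_1k_words = 0.07    # assumed serving cost in dollars

    words_per_year = searches_per_day * share_answered * words_per_answer * 365
    annual_cost = words_per_year / 1000 * cost_per_1k_words
    print(f"~${annual_cost / 1e9:.1f}B per year")   # ~$5.7B with these assumptions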

Leading AI labs have embraced training models on enormous datasets on the assumption that "scaling up" (increasing the amount of data and compute used in training) will make AI increasingly capable.

For example, Meta trained Llama 3 on a set of 15 trillion tokens. (Tokens represent bits of raw data; 1 million tokens is equivalent to about 750,000 words.) The previous generation, Llama 2, was trained with “only” 2 trillion tokens.
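
Using the article's own conversion rate of roughly 750,000 words per million tokens, those training sets work out as follows.

    words_per_token = 750_000 / 1_000_000   # ~0.75 words per token
    llama3_words = 15e12 * words_per_token  # 15 trillion tokens
    llama2_words = 2e12 * words_per_token   # 2 trillion tokens
    print(f"Llama 3: ~{llama3_words / 1e12:.2f} trillion words")  # ~11.25
    print(f"Llama 2: ~{llama2_words / 1e12:.2f} trillion words")  # ~1.50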

There is evidence that scaling up ultimately leads to diminishing returns. Anthropic and Google recently reportedly trained huge models that fell short of internal benchmark expectations. However, there are few signs that the industry is ready to meaningfully move away from these entrenched scaling approaches.

How precise, exactly?

So if labs are reluctant to train models on smaller datasets, is there a way to make models less susceptible to this degradation? Possibly. Kumar and his co-authors found that training models in "low precision" can make them more robust. Bear with us for a moment while we explain.

"Precision" here refers to the number of digits a numeric data type can represent accurately. A data type is a collection of data values, usually specified by a set of possible values and allowed operations; the data type FP8, for example, uses only 8 bits to represent a floating-point number.
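
The effect of narrower data types is easy to see directly; NumPy has no FP8 type, so in the sketch below FP16 stands in for a reduced-precision format.

    import numpy as np

    # The same number stored in progressively narrower float formats
    # keeps fewer accurate digits.
    x = 3.14159265358979
    print(np.float64(x))   # 3.14159265358979
    print(np.float32(x))   # 3.1415927   (~7 significant digits)
    print(np.float16(x))   # 3.14        (~3 significant digits)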

Most models today are trained at 16-bit or "half" precision and then "post-train quantized" to 8-bit precision. Certain model components (such as the parameters) are converted to a lower-precision format at the cost of some accuracy. Think of it like doing the math to several decimal places and then rounding to the nearest tenth; it often gives you the best of both worlds.
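
As a rough sketch of what "post-train quantized" can look like in practice (this uses PyTorch's stock dynamic-quantization API, not the researchers' setup), the weights of the Linear layers below are stored as 8-bit integers and dequantized on the fly.

    import torch
    import torch.nn as nn

    # A toy model, assumed to have been trained in higher precision...
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # ...then post-train quantized: Linear weights are stored as int8.
    quantized_model = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(model(x).shape, quantized_model(x).shape)   # same interface, smaller weights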

Hardware vendors such as Nvidia are pushing for even lower precision in quantized model inference. The company's new Blackwell chips support 4-bit precision, specifically a data type called FP4, which Nvidia has pitched as a boon for memory- and power-constrained data centers.

However, extremely low quantization precision may not be desirable. According to Kumar, unless the original model has a very large number of parameters, precision below 7 or 8 bits can significantly degrade quality.

If this all seems a little technical, don't worry; it is. The point is simply that AI models are not fully understood, and known shortcuts that work for many other kinds of computation don't work here. You wouldn't say "noon" if someone asked when you started a 100-meter dash, right? It's not quite as obvious as that, of course, but the idea is the same.

"The key point of our work is that there are limitations you simply cannot get around," Kumar concluded. "We hope our work adds nuance to a debate in which the default precision for training and inference keeps getting pushed lower and lower."

Kumar acknowledges that his and his colleagues' study was relatively small in scale; they plan to test more models in the future. But he believes at least one insight will hold: there is no free lunch when it comes to reducing inference costs.

"Bit precision matters, and it isn't free," he said. "You cannot reduce it forever without the model suffering. Models have finite capacity, so rather than trying to fit a quadrillion tokens into a small model, I think much more effort will go into meticulous data curation and filtering, so that only the highest-quality data is put into smaller models. I am optimistic that new architectures that deliberately aim to make low-precision training stable will be important in the future."


