Common techniques for making AI more efficient have drawbacks

By TechBrunch · November 17, 2024 · 6 min read


Quantization, one of the most widely used techniques for making AI models more efficient, has its limits, and the industry may be fast approaching them.

In the context of AI, quantization means reducing the number of bits (the smallest units a computer can process) needed to represent information. Consider this analogy: if someone asks you the time, you'd probably say “noon” rather than “twelve hundred hours, one second and four milliseconds.” That's quantization. Both answers are correct, but one is more precise. How much precision you actually need depends on the situation.

An AI model consists of several components that can be quantized, in particular its parameters, the internal variables the model uses to make predictions and decisions. This is convenient, because a model performs millions of calculations when it runs. Quantized models with fewer bits representing their parameters are less demanding mathematically, and therefore computationally. (To be clear, this is a different process from “distillation,” which is a more involved, selective pruning of parameters.)
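
To make the mechanics concrete, here is a minimal sketch of post-training 8-bit weight quantization in Python with NumPy. It illustrates the general technique, not code from the study; the function names and the symmetric scaling scheme are our own assumptions.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization: float weights -> int8 values plus one scale."""
    scale = np.abs(weights).max() / 127.0                      # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for use at inference time."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)  # stand-in for a model's parameters
q, scale = quantize_int8(weights)
print("max rounding error:", np.abs(weights - dequantize(q, scale)).max())
```

Each parameter is stored in 8 bits instead of 32, at the cost of a small rounding error introduced when the weights are mapped back to floats.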

However, quantization may have more tradeoffs than previously assumed.

The ever-shrinking model

A study by researchers at Harvard University, Stanford University, MIT, Databricks, and Carnegie Mellon University found that quantized models perform worse when the original, unquantized version of the model was trained over a long period on large amounts of data. In other words, at a certain point it may actually be better to simply train a smaller model than to shrink down a large one.

This could be bad news for AI companies that train extremely large models (known to improve answer quality) and then quantize them to make them cheaper to serve.

The effects are already visible. A few months ago, developers and academics reported that quantizing Meta's Llama 3 models tended to be “more harmful” than quantizing other models, potentially because of the way they were trained.

“In my opinion, the number one cost for everyone in AI is and will continue to be inference, and our work shows that one important way to reduce it will not work forever,” Kumar, a mathematics student at Harvard University and an author of the paper, told TechCrunch.

Contrary to popular belief, running inference on an AI model (running the model, such as when ChatGPT answers a question) is often more expensive in aggregate than training it. Consider that Google spent an estimated $191 million to train one of its flagship Gemini models. That is certainly expensive. But if the company used the model to generate just 50-word answers to half of all Google Search queries, it would spend roughly $6 billion a year.
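
As a rough back-of-the-envelope check of that comparison, here is a small calculation sketch. The search volume and per-answer cost below are illustrative assumptions, not figures from the article; they are chosen so the result lands near the cited $6 billion figure.

```python
# Back-of-the-envelope comparison of a one-off training cost with a recurring
# inference cost. All inputs except training_cost are illustrative assumptions.
training_cost = 191e6            # reported one-time Gemini training cost (USD)

queries_per_day = 8.5e9          # assumed Google Search volume (assumption)
served_fraction = 0.5            # half of queries get a generated answer
cost_per_answer = 0.004          # assumed cost of one ~50-word answer (USD, assumption)

annual_inference_cost = queries_per_day * 365 * served_fraction * cost_per_answer
print(f"training: ${training_cost / 1e6:.0f}M (one-off)")
print(f"inference: ${annual_inference_cost / 1e9:.1f}B per year")
```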

Leading AI labs have embraced training models on massive datasets, on the assumption that “scaling up” (increasing the amount of data and compute used in training) will make AI increasingly capable.

For example, Meta trained Llama 3 on a set of 15 trillion tokens. (Tokens represent bits of raw data; 1 million tokens is equivalent to about 750,000 words.) The previous generation, Llama 2, was trained on “only” 2 trillion tokens.
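
Using the rough conversion rate above, a quick sketch of what those training sets amount to in words (the helper below is purely illustrative):

```python
WORDS_PER_MILLION_TOKENS = 750_000   # conversion rate cited above

def tokens_to_words(tokens: float) -> float:
    """Convert a token count to an approximate word count."""
    return tokens / 1_000_000 * WORDS_PER_MILLION_TOKENS

print(f"Llama 3 (15T tokens): ~{tokens_to_words(15e12):.3g} words")
print(f"Llama 2 (2T tokens):  ~{tokens_to_words(2e12):.3g} words")
```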

There is evidence that scaling up ultimately leads to diminishing returns. Anthropic and Google recently reportedly trained huge models that fell short of internal benchmark expectations. However, there are few signs that the industry is ready to meaningfully move away from these entrenched scaling approaches.

How precise, exactly?

So if labs are reluctant to train models on smaller datasets, is there a way to make models less susceptible to this degradation? Possibly. Kumar and his co-authors found that training models in “low precision” can make them more robust. Bear with us for a moment while we explain.

“Precision” here refers to the number of digits a numeric data type can represent accurately. A data type is a collection of data values, usually specified by a set of possible values and allowed operations; the FP8 data type, for example, uses only 8 bits to represent a floating-point number.
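
A minimal illustration of what precision means in practice, using standard NumPy float types (NumPy has no native FP8 type, so 16-, 32-, and 64-bit floats stand in by analogy):

```python
import numpy as np

x = 1 / 3  # a value no binary float type can represent exactly

print(np.float64(x))   # ~16 significant decimal digits: 0.3333333333333333
print(np.float32(x))   # ~7 significant decimal digits:  0.33333334
print(np.float16(x))   # ~3 significant decimal digits:  0.3333
```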

Most models today are trained at 16-bit or “half” precision and then “post-train quantized” to 8-bit precision. Certain model components (such as parameters) are converted to a lower-precision format at the cost of some accuracy. Think of it like doing the math to several decimal places and then rounding to the nearest tenth, which often gives you the best of both worlds.

Hardware vendors such as Nvidia are pushing to reduce the precision of quantized model inference. The company's new Blackwell chips support 4-bit precision, specifically a data type called FP4. Nvidia touts this as a boon for memory- and power-constrained data centers.
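
To see why lower-precision inference appeals to memory- and power-constrained data centers, here is an illustrative calculation of parameter memory at different bit widths; the 70-billion-parameter model size is an arbitrary assumption, and activations and other overhead are ignored:

```python
# Approximate memory needed just to hold the parameters of a hypothetical
# 70-billion-parameter model at different precisions.
params = 70e9
for bits, name in [(16, "FP16"), (8, "FP8/INT8"), (4, "FP4")]:
    gib = params * bits / 8 / 2**30
    print(f"{name:>8}: {gib:,.0f} GiB")   # roughly 130, 65, and 33 GiB
```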

However, extremely low quantization precision may not be desirable. According to Kumar, unless the original model has a very large number of parameters, precision below 7 or 8 bits can significantly degrade quality.

If this all seems a little technical, don't worry; it is. The takeaway is simply that AI models are not fully understood, and known shortcuts that work for many kinds of computation don't work here. You wouldn't say “noon” if someone asked when you started a 100-meter dash, right? It's not quite that obvious, of course, but the idea is the same.

“The key point of our work is that there are limitations that simply cannot be avoided,” Kumar concluded. “We hope our work adds nuance to a debate that often pushes for lower and lower default precision for training and inference.”

Kumar acknowledges that his and his colleagues' study was relatively small in scale; they plan to test more models in the future. But he believes at least one insight will hold: there is no free lunch when it comes to reducing inference costs.

“Bit precision matters, and it's not free,” he said. “You cannot reduce it forever without the model suffering. Models have finite capacity, so rather than trying to fit a quadrillion tokens into a small model, I think far more effort will go into careful data curation and filtering, so that only the highest-quality data is put into smaller models. I'm optimistic that new architectures that deliberately aim to make low-precision training stable will be important in the future.”


