A common technique for making AI more efficient has drawbacks

By TechBrunch | December 23, 2024 | 6 Mins Read


Quantization, one of the most widely used techniques for making AI models more efficient, has limits, and the industry may be fast approaching them.

In the context of AI, quantization refers to reducing the number of bits (the smallest units a computer can process) needed to represent information. Consider this analogy: if someone asks the time, you would probably say “noon” rather than “12:00, 1 second, and 4 milliseconds.” That is quantization. Both answers are correct, but one is slightly more precise. How much precision you actually need depends on the context.
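To make the analogy concrete, here is a minimal Python sketch (our illustration, not anything from the paper): the fewer bits you spend, the coarser the grid of values you can land on.

```python
def quantize(value, bits, max_value=24.0):
    """Snap a real value onto one of 2**bits evenly spaced levels in [0, max_value]."""
    levels = 2 ** bits - 1
    step = max_value / levels
    return round(value / step) * step

time_of_day = 12.0002789  # "noon, plus 1 second and 4 milliseconds", expressed in hours

for bits in (16, 8, 4):
    approx = quantize(time_of_day, bits)
    print(f"{bits:2d} bits -> {approx:.7f} h (error {abs(approx - time_of_day):.7f} h)")
```

With 16 bits the answer is off by a fraction of a second; with 4 bits you are nearly an hour out. Same idea, different bit budgets.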

AI models consist of several components that can be quantized, in particular parameters, the internal variables a model uses to make predictions and decisions. That matters because a model performs millions of calculations when it runs. Quantized models, with fewer bits representing their parameters, are less demanding mathematically, and therefore computationally. (To be clear, this is a different process from “distillation,” which is a more involved and selective pruning of parameters.)
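For a sense of scale, here is a hedged back-of-envelope sketch; the 7-billion-parameter size below is a hypothetical example, not a model discussed in this article.

```python
# Rough memory footprint of storing model parameters at different bit widths.
n_params = 7_000_000_000  # hypothetical 7B-parameter model

for bits in (32, 16, 8, 4):
    gigabytes = n_params * bits / 8 / 1e9
    print(f"{bits:2d}-bit parameters -> ~{gigabytes:.1f} GB of weights")
```

Halving the bits per parameter halves the memory that has to be stored and moved on every forward pass, which is where the efficiency gains come from.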

However, quantization may have more tradeoffs than previously assumed.

The ever-shrinking model

A study by researchers at Harvard University, Stanford University, MIT, Databricks, and Carnegie Mellon University found that quantized models perform worse if the original, unquantized version of the model was trained on large amounts of data over a long period. In other words, at some point it may actually be better to just train a smaller model than to shrink down a large one.

This could be bad news for AI companies that train very large models (known to improve the quality of answers) and then quantize them to make them cheaper to serve.

The effects are already visible. A few months ago, developers and academics reported that quantizing Meta's Llama 3 models tended to be “more harmful” than quantizing other models, possibly because of the way they were trained.

“In my opinion, the biggest cost for everyone in AI is and will continue to be inference, and our work shows that one important way of reducing it will not work forever,” Kumar, a Harvard mathematics student and an author of the paper, told TechCrunch.

Contrary to popular belief, running inference on an AI model (running the model, as when ChatGPT answers a question) is often more expensive in aggregate than training it. Consider, for example, that Google spent an estimated $191 million to train one of its flagship Gemini models, certainly a hefty sum. But if the company used a model to generate just 50-word answers for half of all Google Search queries, it would spend roughly $6 billion a year.
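The arithmetic behind that comparison looks roughly like the sketch below. The search volume and per-answer serving cost are illustrative assumptions chosen to land in the same ballpark as the figures above; only the $191 million training estimate and the roughly $6 billion conclusion come from the text.

```python
training_cost = 191e6        # estimated one-time cost to train a flagship Gemini model (from the article)

searches_per_day = 8.5e9     # assumed Google Search volume (illustrative)
answered_fraction = 0.5      # half of all queries get a 50-word generated answer
cost_per_answer = 0.0036     # assumed serving cost per 50-word answer, in dollars (illustrative)

annual_inference_cost = searches_per_day * 365 * answered_fraction * cost_per_answer
print(f"training (one-time):  ${training_cost / 1e6:.0f}M")
print(f"inference (per year): ${annual_inference_cost / 1e9:.1f}B")  # ~$5.6B with these assumptions
```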

Leading AI labs have embraced training models on ever larger datasets on the assumption that “scaling up” (increasing the amount of data and compute used for training) will make AI increasingly capable.

For example, Meta trained Llama 3 on a set of 15 trillion tokens. (Tokens represent bits of raw data; 1 million tokens is equivalent to about 750,000 words.) The previous generation, Llama 2, was trained on “only” 2 trillion tokens. In early December, Meta released a new model, Llama 3.3 70B, which the company says “improves core performance at a significantly lower cost.”

There is evidence that scaling up ultimately leads to diminishing returns. Anthropic and Google recently reportedly trained huge models that fell short of internal benchmark expectations. However, there are few signs that the industry is ready to meaningfully move away from these entrenched scaling approaches.

How precise do you need to be, exactly?

So if labs are reluctant to train models on smaller datasets, is there a way to make models less susceptible to degradation? Possibly. Kumar and his co-authors say they found that training models in “low precision” can make them more robust. Bear with us for a moment while we explain.

“Precision” here refers to the number of digits that a numeric data type can represent accurately. A data type is a collection of data values, typically specified by a set of possible values and allowed operations. The FP8 data type, for example, uses only 8 bits to represent a floating-point number.
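To see what “number of digits a data type can represent” means in practice, here is a small NumPy sketch. (It is illustrative only; NumPy has no built-in FP8 type, so it compares 64-, 32-, and 16-bit floats instead.)

```python
import numpy as np

x = 1.2345678901234  # more significant digits than low-precision formats can keep

print(np.float64(x))  # 1.2345678901234   (~15-16 significant decimal digits)
print(np.float32(x))  # 1.2345679         (~7 digits)
print(np.float16(x))  # 1.234             (~3-4 digits)
```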

Most models today are trained at 16-bit or “half” precision and then “post-train quantized” to 8-bit precision. Certain model components (such as the parameters) are converted to a less precise format at the cost of some accuracy. Think of it like doing the math to several decimal places and then rounding off to the nearest tenth; it often gives you the best of both worlds.
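Here is a minimal sketch of that quantize-and-round-trip idea, using a generic symmetric int8 scheme rather than any lab's actual recipe:

```python
import numpy as np

# Pretend these are trained half-precision ("16-bit") weights.
weights_fp16 = np.random.randn(5).astype(np.float16)

# Post-training quantization: map the float range onto int8 with one scale factor.
scale = float(np.abs(weights_fp16).max()) / 127.0
weights_int8 = np.round(weights_fp16 / scale).astype(np.int8)

# At inference time the 8-bit values are mapped back with the same scale; the
# small differences are the accuracy traded away for the more compact format.
dequantized = weights_int8 * np.float16(scale)
print(weights_fp16)
print(dequantized)
```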

Hardware vendors such as Nvidia are pushing to reduce the precision of quantized model inference. The company's new Blackwell chips support 4-bit precision, specifically a data type called FP4. Nvidia touts this as a boon for memory- and power-constrained data centers.

However, extremely low quantization precision may not be desirable. According to Kumar, unless the original model has a very large number of parameters, precision below 7 or 8 bits can noticeably degrade quality.
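One way to see why very low bit-widths bite so hard: the number of values a format can encode collapses exponentially with the bit count (a generic illustration, not a figure from the study).

```python
# A b-bit format can encode at most 2**b distinct values; below 7 or 8 bits
# the grid of representable numbers gets very coarse very quickly.
for bits in (16, 8, 7, 6, 4):
    print(f"{bits:2d} bits -> at most {2**bits:>6,} distinct values")
```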

If this all seems a little technical, don't worry; it is. But the takeaway is that AI models are not fully understood, and known shortcuts that work in many kinds of computation don't work here. You wouldn't say “noon” if someone asked when you started a 100-meter dash, right? It's not quite as obvious as that, of course, but the idea is the same.

“The key point of our work is that there are limitations that simply cannot be avoided,” Kumar concluded. “We hope our research adds nuance to a debate in which the default precision for training and inference keeps being pushed lower and lower.”

Kumar acknowledges that his and his colleagues' study was relatively small in scale; they plan to test more models in the future. But he believes at least one insight holds: there is no free lunch when it comes to reducing inference costs.

“Bit precision matters, and it's not free,” he said. “You cannot reduce it forever without the model suffering. Models have finite capacity, so rather than trying to fit a quadrillion tokens into a small model, I believe much more effort will go into careful curation and filtering of data, so that only the highest-quality data is put into smaller models. I am optimistic that new architectures that deliberately aim to make low-precision training stable will be important in the future.”

This article was originally published on November 17, 2024 and updated with new information on December 23.


