Meta releases largest 'open' AI model to date

By TechBrunch · July 23, 2024 · 9 min read


Meta's latest open-source AI model is its largest to date.

Meta today announced the release of Llama 3.1 405B, a model with 405 billion parameters. Parameters roughly correspond to the model's problem-solving ability, and models with more parameters generally perform better than models with fewer parameters.

With 405 billion parameters, Llama 3.1 405B is not the largest open source model in absolute terms, but it is the largest in recent years. Trained using 16,000 Nvidia H100 GPUs, the model also benefits from new training and development techniques, and Meta claims it can compete with leading proprietary models such as OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet (with some caveats).

Like Meta's previous models, Llama 3.1 405B can be downloaded or used on cloud platforms such as AWS, Azure, and Google Cloud. It is also being used to power the chatbot experiences on WhatsApp and Meta.ai for US-based users.

New and improved

Like other open and closed source generative AI models, Llama 3.1 405B can perform a variety of tasks, from answering coding and basic math questions to summarizing documents in eight languages (English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai). Being text-only, it can't answer questions about images, for example, but most text-based workloads, such as analyzing files like PDFs and spreadsheets, are within its scope.

Meta is keen to be public about its experiments with multimodality: In a paper published today, the company's researchers say they are actively developing Llama models that can recognize images and videos, and understand (and generate) speech, though these models are not yet ready for public release.

To train Llama 3.1 405B, Meta used a dataset of 15 trillion tokens with data extending up to 2024. (Tokens are parts of words that a model can internalize more easily than entire words; 15 trillion tokens equates to a whopping 750 billion words.) This isn't a new training set per se, since Meta used the same base set to train previous Llama models, but the company claims it revamped its data curation pipeline and adopted a "more rigorous" quality assurance and data filtering approach in developing this model.
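To make the token/word distinction concrete, here is a minimal sketch of tokenization. It uses the tiktoken library, which ships OpenAI's tokenizers rather than Llama's, so the exact splits and counts are illustrative only:

```python
# Illustration: a tokenizer maps text to integer token IDs, often
# splitting rare words into subword pieces. tiktoken provides OpenAI's
# tokenizers (not Llama's); the idea is the same.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Llama 3.1 405B was trained on 15 trillion tokens."
ids = enc.encode(text)
pieces = [enc.decode([i]) for i in ids]

print(f"{len(text.split())} words -> {len(ids)} tokens")
print(pieces)  # subword fragments, e.g. 'L', 'lama', ' 3', '.', '1', ...
```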

The company also used synthetic data (data generated by other AI models) to fine-tune Llama 3.1 405B. Most major AI vendors, including OpenAI and Anthropic, are considering applying synthetic data to scale up AI training, but some experts believe synthetic data should be a last resort as it can exacerbate biases in models.

Meta, for its part, claims it "carefully balance[d]" Llama 3.1 405B's training data, but it won't reveal exactly where that data came from (beyond saying it was drawn from web pages and public web files). Many generative AI vendors consider training data a competitive advantage and keep it and related information secret. Training data details are also a potential source of IP-related litigation, another disincentive for companies to reveal them.

[Image: Meta Llama 3.1. Image credit: Meta]

In the aforementioned paper, Meta researchers wrote that compared to previous Llama models, Llama 3.1 405B was trained on a combination of non-English data (to improve performance in non-English languages), more “math data” and code (to improve the model's mathematical reasoning skills), and more recent web data (to enhance its knowledge of current events).

According to a recent Reuters report, Meta once used copyrighted e-books to train its AI, despite warnings from its own lawyers. The company controversially trains its AI with Instagram and Facebook posts, photos and captions, making it difficult for users to opt out. Additionally, Meta, along with OpenAI, is being sued by authors, including comedian Sarah Silverman, for using copyrighted data to train models without permission.

“The training data is in many ways like the secret recipe or sauce for building these models,” Raghavan Srinivasan, Meta's vice president of AI program management, told TechCrunch in an interview. “So from our perspective, we've invested heavily in this, and it's going to be one of the things that we continue to improve.”

Bigger context and tools

Llama 3.1 405B has a larger context window than previous Llama models: 128,000 tokens, or the length of about a 50-page book. A model's context, or context window, refers to the input data (e.g., text) that the model considers before generating an output (e.g., additional text).

One advantage of a model with a larger context is that it can summarize longer text snippets or files. When powering chatbots, such models are also less likely to forget recently discussed topics.

The two other new, smaller models Meta announced today, Llama 3.1 8B and Llama 3.1 70B (updated versions of the Llama 3 8B and Llama 3 70B models released in April), also feature 128,000-token context windows. That's a significant upgrade over the previous models' 8,000-token maximum, assuming the new Llama models can actually reason effectively over all of that context.
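As a rough sketch of what a context window means in practice for a chatbot, the snippet below keeps only the most recent messages that fit in the window; anything older simply falls out of what the model can see. The count_tokens callback is a stand-in for a real tokenizer:

```python
# Minimal sketch of why context size matters when powering a chatbot:
# history that no longer fits in the window is dropped, so the model
# "forgets" it. count_tokens is a hypothetical helper standing in for
# a real tokenizer's counting function.
from typing import Callable

def fit_to_context(messages: list[str],
                   count_tokens: Callable[[str], int],
                   max_tokens: int = 128_000) -> list[str]:
    """Keep the most recent messages whose total size fits the window."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):      # walk newest to oldest
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break                       # older messages fall out of context
        kept.append(msg)
        total += cost
    return list(reversed(kept))         # restore chronological order

# Usage with a crude whitespace "tokenizer" standing in for a real one:
history = [f"message {i}" for i in range(10)]
print(fit_to_context(history, lambda s: len(s.split()), max_tokens=8))
```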

[Image: Meta Llama 3.1. Image credit: Meta]

All Llama 3.1 models, like rival models from Anthropic and OpenAI, can use third-party tools, apps, and APIs to complete tasks. Out of the box, they are trained to use Brave Search to answer questions about recent events, the Wolfram Alpha API for math and science queries, and a Python interpreter to validate code. Meta also claims that Llama 3.1 models can, to a degree, use certain tools they haven't seen before.
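The general shape of such tool use is a loop: the model either answers directly or emits a structured tool call, the application executes the tool, and the result is fed back in. The sketch below is a generic illustration with a made-up JSON convention and a hypothetical chat() function; Llama 3.1's actual prompt and tool-call formats are defined by Meta's documentation:

```python
# Generic tool-use loop. chat() and the JSON tool-call convention are
# assumptions for illustration, not Llama 3.1's real wire format.
import json

def wolfram_alpha(query: str) -> str:
    return "x = 2"  # canned stand-in for the real Wolfram Alpha API call

TOOLS = {"wolfram_alpha": wolfram_alpha}

def try_parse_tool_call(text: str):
    """Return a tool-call dict if the model emitted one, else None."""
    try:
        obj = json.loads(text)
        return obj if isinstance(obj, dict) and "name" in obj else None
    except json.JSONDecodeError:
        return None

def run_with_tools(chat, user_msg: str) -> str:
    messages = [{"role": "user", "content": user_msg}]
    while True:
        reply = chat(messages)          # model answers or requests a tool
        call = try_parse_tool_call(reply)
        if call is None:
            return reply                # plain answer: we're done
        result = TOOLS[call["name"]](call["arguments"]["query"])
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "tool", "name": call["name"],
                         "content": result})  # feed result back to the model
```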

Building an ecosystem

If you believe the benchmarks (and benchmarks aren't the be-all and end-all of generative AI), the Llama 3.1 405B is a very capable model indeed, which is a good thing considering the obvious limitations of the previous generation Llama model.

According to the paper, Llama 3.1 405B performs on par with OpenAI's GPT-4, and achieves "mixed results" when compared against GPT-4o and Claude 3.5 Sonnet by human evaluators Meta employed. Llama 3.1 405B outperforms GPT-4o at executing code and generating plots, but its multilingual capabilities are weaker overall, and it lags behind Claude 3.5 Sonnet in programming and general reasoning.

Also, due to its large size, you need powerful hardware to run it – Meta recommends at least a server node.

Perhaps that's why Meta is promoting its new, smaller models, Llama 3.1 8B and Llama 3.1 70B, for general-purpose applications like powering chatbots and generating code. The company says Llama 3.1 405B is best suited for model distillation (the process of transferring knowledge from a large model to a smaller, more efficient one) and for generating synthetic data to train (or fine-tune) alternative models.
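For readers unfamiliar with distillation, one common recipe (soft-label distillation, in the style of Hinton et al.) trains the small student to match the large teacher's softened output distribution. A minimal PyTorch sketch, with teacher and student standing in for any two models that produce logits; this illustrates the technique generally, not Meta's specific pipeline:

```python
# Soft-label distillation loss: the student is trained to match the
# teacher's output distribution, softened by temperature T.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between softened distributions, scaled by T^2 so
    # gradients stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * T * T

# Usage inside a training loop: the teacher runs without gradients.
# with torch.no_grad():
#     teacher_logits = teacher(input_ids)
# loss = distillation_loss(student(input_ids), teacher_logits)
```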

To facilitate synthetic data use cases, Meta says it has updated Llama's license to allow developers to use the output of the Llama 3.1 model family to develop third-party AI generative models (whether this is a wise idea is debatable). Importantly, the license still restricts how developers can deploy Llama models: app developers with more than 700 million monthly users must apply for a special license from Meta, which Meta will grant at its sole discretion.

[Image: Meta Llama 3.1. Image credit: Meta]

This change to the licensing of outputs addresses a major criticism of Meta's models within the AI community, and it's part of the company's aggressive push to gain mindshare in generative AI.

Alongside the Llama 3.1 family, Meta is releasing what it calls a “reference system” and new safety tools. Some of these tools block prompts that could cause Llama models to behave in unpredictable or undesirable ways, allowing developers to use Llama in more places. The company is also previewing and inviting comments on the Llama Stack, a soon-to-be-released API for fine-tuning Llama models, generating synthetic data with Llama, and tools that can be used to build “agent” applications (Llama-powered apps that can take actions on behalf of users).

"[What] we hear time and again from developers is that they want to learn how to actually deploy [Llama models] in production," Srinivasan said. "So we're starting to give them different tools and options."

Aiming for market share

In an open letter published this morning, Meta CEO Mark Zuckerberg laid out his vision for a future in which AI tools and models are put into the hands of more developers around the world, helping people access the “benefits and opportunities” of AI.

Though worded very charitably, the letter implicitly conveys Zuckerberg's desire for Meta to create these tools and models.

As Meta races to catch up with companies like OpenAI and Anthropic, it's using a tried-and-true strategy: offer its tools for free to foster an ecosystem, then gradually add paid products and services on top of it. Investing billions to commoditize its models will drive down prices for Meta's competitors, helping to make its version of AI more widely available, while also allowing it to incorporate improvements from the open-source community into future models.

Llama has certainly caught the attention of developers: According to Meta, Llama models have been downloaded over 300 million times, and over 20,000 Llama-inspired models have been created to date.

Don't get me wrong: Meta is serious about this. The company has spent millions lobbying regulators in favor of its preferred form of "open" generative AI. None of the Llama 3.1 models solve the hard problems with generative AI technology today, like its tendency to make things up and regurgitate problematic training data. But they do make progress toward one of Meta's main goals: becoming synonymous with generative AI.

This comes at a cost: In the research paper, the co-authors discuss energy-related reliability issues in training Meta's ever-growing generative AI models, echoing Zuckerberg's recent comments.

“During training, the power consumption of tens of thousands of GPUs can increase or decrease simultaneously as, for example, all GPUs wait for checkpoints or collective communication to finish, or as they start up or shut down the entire training job,” the researchers wrote. “When this happens, the power consumption of an entire data center can instantly fluctuate by tens of megawatts, potentially exceeding the limits of the power grid. This is an ongoing challenge for us as we scale up training for even larger Llama models in the future.”

Hopefully, training these large-scale models will not force even more utilities to keep their old coal-fired plants running.


