Like most big tech companies these days, Meta has its own flagship generative AI model called Llama. Llama is somewhat unique among the major models in that it is “open,” meaning developers can download and use it as they like (with certain restrictions). This contrasts with models like Anthropic's Claude, OpenAI's GPT-4o (the basis for ChatGPT), and Google's Gemini, which are only accessible via APIs.
But to give developers options, Meta has partnered with vendors like AWS, Google Cloud and Microsoft Azure to offer cloud-hosted versions of Llama, and the company has also released tools that make it easier to tweak and customize the models.
Here's everything you need to know about Llama, from features and editions to where you can use it. We'll be updating this post as Meta releases upgrades and introduces new development tools to support the use of models.
What is a Llama?
Llama isn't just one model, it's a family of models.
Llama 8B Llama 70B Llama 405B
The latest versions are Llama 3.1 8B, Llama 3.1 70B, and Llama 3.1 405B, which was released in July 2024. They have been trained on web pages in different languages, public code and files from the web, and synthetic data (i.e. data generated by other AI models).
Llama 3.1 8B and Llama 3.1 70B are small models designed to run on a variety of devices from laptops to servers, while Llama 3.1 405B is a large model that will require data center hardware (unless modified in some way). Llama 3.1 8B and Llama 3.1 70B are less powerful than Llama 3.1 405B, but faster. They are actually “refined” versions of the 405B, optimized for low storage overhead and low latency.
Every Llama model has a context window of 128,000 tokens. (In data science, a token is a chunk of raw data, like the syllables “fan,” “tas,” and “tic” in the word “fantastic.”) A model's context, or context window, refers to the input data (e.g., text) that the model considers before producing an output (e.g., additional text). A long context prevents the model from “forgetting” recent documents or content of the data, or veering off-topic and making erroneous inferences.
These 128,000 tokens equal roughly 100,000 words or 300 pages, which, for reference, is roughly the length of Wuthering Heights, Gulliver's Travels, and Harry Potter and the Prisoner of Azkaban.
What can a llama do?
Like other generative AI models, Llama can perform a variety of assistance tasks, such as coding, answering basic math problems, and summarizing documents in eight languages (English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai). Most text-based workloads (such as analyzing files like PDFs and spreadsheets) are within its scope. None of the Llama models can process or generate images, although this may change in the near future.
All of the latest Llama models can be configured to utilize third-party apps, tools, and APIs to complete tasks. These models are trained out of the box to answer questions about recent events using Brave Search, run math and science-related queries using the Wolfram Alpha API, validate code using the Python interpreter, and more. Additionally, Meta says that Llama 3.1 models can use certain tools we've never seen before (though whether they can use those tools reliably is another matter).
Where can I use Llama?
If you simply want to chat with Llama, it powers the Meta AI chatbot experience on Facebook Messenger, WhatsApp, Instagram, Oculus, and Meta.ai.
Developers using Llama can download, use, or fine-tune their models on most popular cloud platforms, and Meta claims that more than 25 partners are hosting Llama, including Nvidia, Databricks, Groq, Dell, and Snowflake.
Some of these partners are building additional tools and services on top of Llama, including tools that allow models to reference their own data and run with lower latency.
Meta recommends using the smaller models, Llama 8B and Llama 70B, for general-purpose applications like chatbot powering and code generation. The company says that Llama 405B is well-suited for model distillation (the process of transferring knowledge from a larger model to a smaller, more efficient one) and generating synthetic data to train (or fine-tune) alternative models.
Importantly, the Llama license restricts how developers can deploy their models: app developers with more than 700 million monthly users must apply for a special license from Meta, which Meta will grant at its sole discretion.
Alongside Llama, Meta provides tools to make using models “safer.”
Llama Guard, a moderation framework Prompt Guard, a tool to protect against prompt injection attacks CyberSecEval, a cybersecurity risk assessment suite
Llama Guard attempts to detect potentially problematic content captured in or generated by Llama models, such as content related to criminal activity, child exploitation, copyright infringement, hate, self-harm, and sexual abuse. Developers can customize which categories of content to block, and the blocking applies to all languages that Llama supports out of the box.
Like Llama Guard, Prompt Guard can block text intended for Llama, but only text intended to “attack” the model and make it behave in an undesirable way. Meta claims that Llama Guard can defend against prompts that contain “injected input,” as well as obviously malicious prompts (i.e. jailbreaks that try to circumvent Llama's built-in safety filters).
CyberSecEval is less of a tool and more of a collection of benchmarks for measuring the security of models. It can assess the risks that Llama models pose to app developers and end users (at least according to Meta's criteria) in areas such as “automated social engineering” and “aggressive cyber operations escalation.”
Llama Limits
Like all generative AI models, Llama comes with certain risks and limitations.
For example, it is unclear whether Meta trained Llama on copyrighted content, and if it did, it could be held liable for copyright infringement if users unknowingly used copyrighted snippets that the model repeatedly played.
According to a recent Reuters report, Meta has used copyrighted e-books to train its AI, despite warnings from its own lawyers. The company controversially trains its AI with Instagram and Facebook posts, photos and captions, making it difficult for users to opt out. Additionally, Meta, along with OpenAI, is being sued by authors, including comedian Sarah Silverman, for using copyrighted data to train its models without permission.
Programming is another area where it is wise to tread carefully when using Llama, as Llama, like any generative AI, can generate buggy or unsafe code.
As always, it’s best to have a human expert review any AI-generated code before incorporating it into your service or software.