There's a new AI model family on the block, and it's one of the few that can be reproduced from scratch.
On Tuesday, Ai2, the nonprofit AI research organization founded by the late Paul Allen, released OLMo 2, the second family of models in its OLMo series. (OLMo stands for "Open Language Model.") While there's no shortage of "open" language models to choose from (see: Meta's Llama), OLMo 2 meets the Open Source Initiative's definition of open source AI, meaning the tools and data used to develop it are publicly available.
The Open Source Initiative, the long-standing organization that aims to define and "steward" all things open source, finalized its definition of open source AI in October. The first OLMo models, released in February, also met that criterion.
"OLMo 2 [was] developed from start to finish with open and accessible training data, open-source training code, reproducible training recipes, transparent evaluations, intermediate checkpoints, and more," Ai2 wrote in a blog post. "By openly sharing our data, recipes, and findings, we hope to provide the open source community with the resources needed to discover new and innovative approaches."
There are two models in the OLMo 2 family: one with 7 billion parameters (OLMo 7B) and one with 13 billion parameters (OLMo 13B). Parameters roughly correspond to a model's problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.
Like most language models, OLMo 2 7B and 13B can perform a variety of text-based tasks, such as answering questions, summarizing documents, and writing code.
To train the models, Ai2 used a dataset of 5 trillion tokens. Tokens represent bits of raw data; 1 million tokens is equivalent to approximately 750,000 words. The training set included websites "filtered for high quality," academic papers, Q&A discussion boards, and "synthetic and human-written" math workbooks.
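Taking the article's rough conversion at face value, the scale of that training set can be sketched with a quick back-of-the-envelope calculation. (The words-per-token ratio here is the article's approximation, not a property of any particular tokenizer.)

```python
# Article's stated ratio: ~750,000 words per 1,000,000 tokens.
WORDS_PER_TOKEN = 750_000 / 1_000_000  # ≈ 0.75 words per token

def tokens_to_words(tokens: int) -> int:
    """Approximate word count for a given token count, per the article's ratio."""
    return round(tokens * WORDS_PER_TOKEN)

# OLMo 2's 5-trillion-token training set works out to roughly 3.75 trillion words.
training_tokens = 5_000_000_000_000
print(tokens_to_words(training_tokens))  # → 3750000000000
```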
The result, according to Ai2, is a pair of models that are performance-competitive with open models like Meta's Llama 3.1 release.
Image credit: Ai2
"We observed dramatic performance improvements across all tasks compared to the earlier OLMo models, and notably, OLMo 2 7B outperforms Llama 3.1 8B," Ai2 wrote. "OLMo 2 [represents] the best fully open language models to date."
The OLMo 2 models and all of their components can be downloaded from Ai2's website. They're available under the Apache 2.0 license, meaning they can be used commercially.
There's been debate recently over the safety of open models, including reports that Llama models are being used by Chinese researchers to develop defense tools. When I asked Ai2 engineer Dirk Groeneveld in February whether he was concerned about OLMo being abused, he said he believes the benefits ultimately outweigh the harms.
"Yes, it's possible for open models to be used inappropriately or for unintended purposes," he said. "[However, this] approach also promotes technical advancements that lead to more ethical models; is a prerequisite for verification and reproducibility, as these can only be achieved with access to the full stack; and reduces a growing concentration of power, creating more equitable access."