Ask anyone in the open source AI community and they'll tell you that the gap between them and the big private companies is about more than just computing power. AI2 is working to close that gap, first with fully open datasets and models, and now with an open, easily adapted post-training recipe for turning raw large language models into usable ones.
Contrary to what many people think, "base" language models do not emerge from the pretraining process ready to use. Pretraining is necessary, but not sufficient, and some even believe it may soon not be the most important part at all.
That's because it's becoming increasingly clear that post-training is where much of the real value gets created. Without it, the model is essentially a vast, indiscriminate know-it-all, as quick to generate Holocaust-denial talking points as cookie recipes. Generally speaking, you don't want that.
Companies keep their post-training regimes secret because, while anyone can scrape the web and build a model with cutting-edge techniques, making that model useful to, say, a therapist or a research analyst is a completely different challenge.
AI2 (formerly known as the Allen Institute for AI) has spoken out about the lack of openness in ostensibly "open" AI projects like Meta's Llama. The model is certainly free for anyone to use and tweak, but the sources and process of creating the raw model, and the method of training it for general use, remain carefully guarded secrets. It's not bad, but it isn't really "open" either.
AI2, by contrast, is committed to being as open as possible, from publishing its data collection, curation, cleaning, and other pipelines to the exact training methods it used to produce LLMs like OLMo.
But the simple truth is that very few developers have the expertise to run their own LLM in the first place, and even fewer can do post-training the way Meta, OpenAI, or Anthropic do, in part because it's technically complex and time consuming.
Fortunately, AI2 wants to democratize this part of the AI ecosystem as well. That's where Tulu 3 comes in. It's a major improvement over the previous, more rudimentary post-training process (called, you guessed it, Tulu 2). In the nonprofit's tests, it scored on par with the most advanced "open" models out there. It's the product of months of experimentation, reading and interpreting what the big players hint at, and lots of iterative training runs.
The diagram doesn't show everything, but it gives you a general idea of the process. Image credit: AI2
Basically, Tulu 3 covers everything from choosing which topics you want your model to care about (for instance, downplaying multilingual capabilities while dialing up math and coding) to taking it through a long regimen of data curation, fine-tuning, preference tuning, and reinforcement learning, plus many other meta-parameters and training-process tweaks that I couldn't adequately describe here. The result should be a far more capable model focused on the skills you need.
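To make the general shape of that regimen concrete, here is a minimal, purely illustrative Python sketch. The stage names follow the article's description (data curation, fine-tuning, preference tuning, reinforcement learning); the function bodies are placeholders, and none of this is AI2's actual Tulu 3 code.

```python
# Illustrative sketch of a post-training pipeline's general shape.
# Stage names mirror the article's description; bodies are placeholders,
# not AI2's actual implementation.

from dataclasses import dataclass, field


@dataclass
class Checkpoint:
    name: str
    history: list = field(default_factory=list)

    def apply(self, stage: str) -> "Checkpoint":
        # Record each stage so the provenance of the final model is visible.
        return Checkpoint(self.name, self.history + [stage])


def curate_data(skills: list[str]) -> dict:
    # Select and filter examples for the skills you care about,
    # e.g. dialing up math and coding while downplaying multilingual data.
    return {skill: f"curated examples for {skill}" for skill in skills}


def supervised_finetune(model: Checkpoint, data: dict) -> Checkpoint:
    return model.apply(f"supervised fine-tuning on {sorted(data)}")


def preference_tune(model: Checkpoint, data: dict) -> Checkpoint:
    return model.apply("preference tuning (e.g. on chosen/rejected pairs)")


def reinforcement_learn(model: Checkpoint, data: dict) -> Checkpoint:
    return model.apply("reinforcement learning against a reward signal")


if __name__ == "__main__":
    base = Checkpoint("raw-pretrained-llm")
    data = curate_data(["math", "coding", "instruction following"])
    model = supervised_finetune(base, data)
    model = preference_tune(model, data)
    model = reinforcement_learn(model, data)
    print(model.history)
```

The point of the sketch is simply that post-training is a sequence of dependent stages, each shaping the checkpoint produced by the one before it, which is why publishing the full recipe matters as much as publishing the final weights.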
But the real point is to take one more toy out of the private sector's toy box. Until now, if you wanted a custom-trained LLM, it was very hard to avoid relying on a big company's resources in some way, or hiring a middleman to do the work on your behalf. That's not only expensive, it also introduces risks some companies are unwilling to accept.
Take a medical research and services company, for example: sure, you could use OpenAI's API, or talk to Scale or someone else about customizing an in-house model, but both options mean outside companies getting access to sensitive data. If that's unavoidable, you put up with it, but what if it isn't? What if a research organization published a soup-to-nuts pre- and post-training regimen you could implement on-premises? That might well be the better alternative.
AI2 is using this approach itself, which is perhaps the best endorsement it could get. Although the test results published today use Llama as the base model, AI2 plans to release an OLMo-based, Tulu 3-trained model soon, one that should improve further on the baseline and be fully open source from tip to tail.
If you are interested in the current performance of the model, try the live demo.