So-called reasoning AI models are becoming easier, and cheaper, to develop.
On Friday, NovaSky, a team of researchers based at the Sky Computing Lab at the University of California, Berkeley, released Sky-T1-32B-Preview, a reasoning model that competes with an early version of OpenAI's o1 on a number of key benchmarks. Sky-T1 appears to be the first genuinely open source reasoning model in the sense that it can be replicated from scratch; the team has released both the dataset used to train it and the necessary training code.
“Incredibly, Sky-T1-32B-Preview was trained for less than $450,” the team wrote in a blog post, “demonstrating that advanced reasoning capabilities can be replicated affordably and efficiently.”
That $450 may not sound especially cheap. But it wasn't long ago that the price of training a model with comparable performance often ran into the millions of dollars. Synthetic training data, that is, training data generated by other models, has helped drive costs down. Palmyra X 004, a model recently announced by AI startup Writer, was trained almost entirely on synthetic data and reportedly cost just $700,000 to develop.
Unlike most AI, reasoning models effectively fact-check themselves, which helps them avoid some of the pitfalls that normally trip up models. Reasoning models take a little longer, typically seconds to minutes more, to arrive at solutions than a typical non-reasoning model. The upside is that they tend to be more reliable in domains such as physics, science, and mathematics.
The NovaSky team used another reasoning model, Alibaba's QwQ-32B-Preview, to generate the initial training data for Sky-T1, then “curated” the data mixture and leveraged OpenAI's GPT-4o-mini to refactor the data into a more workable format. Training the 32-billion-parameter Sky-T1 took about 19 hours using a rack of eight Nvidia H100 GPUs. (Parameters roughly correspond to a model's problem-solving skills.)
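NovaSky's actual training code and dataset are public, so the exact pipeline can be inspected there. Purely as an illustration of what such a two-stage data pipeline can look like, the sketch below assumes QwQ-32B-Preview is served behind an OpenAI-compatible endpoint (for example via vLLM); the file names, prompts, and helper functions are hypothetical, and the curation/filtering step NovaSky describes is omitted.

```python
# Illustrative sketch only (not NovaSky's code): generate reasoning traces with
# QwQ-32B-Preview, then have GPT-4o-mini rewrite each trace into a cleaner,
# consistent format before it is used for fine-tuning.
import json
from openai import OpenAI  # pip install openai

# Assumption: QwQ-32B-Preview is served locally behind an OpenAI-compatible API.
qwq = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
oai = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_trace(problem: str) -> str:
    """Ask the teacher reasoning model for a step-by-step solution."""
    resp = qwq.chat.completions.create(
        model="Qwen/QwQ-32B-Preview",
        messages=[{"role": "user", "content": problem}],
    )
    return resp.choices[0].message.content

def refactor_trace(problem: str, trace: str) -> str:
    """Use GPT-4o-mini to rewrite the raw trace into a tidier, uniform format."""
    resp = oai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Rewrite this solution as clean, numbered reasoning steps.\n\n"
                f"Problem:\n{problem}\n\nSolution:\n{trace}"
            ),
        }],
    )
    return resp.choices[0].message.content

# problems.json is a hypothetical file of source questions (math, coding, etc.).
with open("problems.json") as f:
    problems = json.load(f)

dataset = []
for p in problems:
    trace = generate_trace(p)
    # A real pipeline would also filter out incorrect or low-quality traces here.
    dataset.append({"prompt": p, "response": refactor_trace(p, trace)})

with open("sky_t1_training_data.json", "w") as f:
    json.dump(dataset, f, indent=2)
```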
According to the NovaSky team, Sky-T1 outperforms an early preview version of o1 on MATH500, a collection of “competition-level” math problems. The model also beats the o1 preview on a set of difficult problems from LiveCodeBench, a coding evaluation.
However, Sky-T1 falls short of the o1 preview on GPQA-Diamond, which includes physics-, biology-, and chemistry-related questions a PhD graduate would be expected to know.
It's also worth noting that OpenAI's GA release of o1 is a more powerful model than the o1 preview, and that OpenAI is expected to release an even better-performing reasoning model, o3, in the weeks ahead.
Still, the NovaSky team says Sky-T1 is only the beginning of its journey to develop open source models with advanced reasoning capabilities.
“Going forward, we will focus on developing more efficient models that maintain strong reasoning performance and exploring advanced techniques that further improve the efficiency and accuracy of the models at test time,” the team wrote in the post. “We look forward to the progress of these exciting initiatives.”