A Chinese lab has unveiled what appears to be one of the first “reasoning” AI models to rival OpenAI's o1.
On Wednesday, DeepSeek, an AI research firm funded by quantitative traders, released a preview of DeepSeek-R1, which the company claims is a reasoning model competitive with o1.
Unlike most models, reasoning models effectively fact-check themselves by spending more time considering a question or query. This helps them avoid some of the pitfalls that normally trip up models.
Similar to o1, DeepSeek-R1 reasons through tasks, planning ahead and performing a series of actions that help the model arrive at an answer. This can take a while: depending on the complexity of the question, DeepSeek-R1 may “think” for tens of seconds before answering.
Image credit: DeepSeek
DeepSeek claims that DeepSeek-R1 (or, more precisely, DeepSeek-R1-Lite-Preview) performs on par with OpenAI's o1-preview model on two popular AI benchmarks, AIME and MATH. AIME uses other AI models to evaluate a model's performance, while MATH is a collection of word problems. The model is far from perfect, however. Some commentators on X have noted that DeepSeek-R1 struggles with tic-tac-toe and other logic problems. (So does o1.)
DeepSeek-R1 can also be easily jailbroken, meaning it can be prompted in ways that bypass its safety precautions. One X user got the model to provide a detailed meth recipe.
DeepSeek-R1 also appears to block queries it deems too politically sensitive. In our tests, the model refused to answer questions about Chinese leader Xi Jinping, Tiananmen Square, and the geopolitical implications of a Chinese invasion of Taiwan.
Image credit: DeepSeek
This behavior is likely the result of pressure from the Chinese government on AI projects in the region. Models in China must be benchmarked by China's internet regulator to ensure their responses “embody core socialist values.” The government has reportedly gone so far as to propose a blacklist of sources that cannot be used to train models; as a result, many Chinese AI systems decline to respond to topics that might draw the ire of regulators.
The increased focus on reasoning models comes as the viability of “scaling laws,” the long-held theory that a model's capabilities will continually improve as you feed it more data and computing power, is coming under scrutiny. A flurry of reports suggests that models from major AI labs such as OpenAI, Google, and Anthropic aren't improving as dramatically as they once did.
That has led to a scramble for new AI approaches, architectures, and development techniques. One is test-time compute, which underpins models like o1 and DeepSeek-R1. Also known as inference compute, test-time compute essentially gives a model extra processing time to complete a task.
“We're seeing the emergence of a new scaling law,” Microsoft CEO Satya Nadella said this week in his keynote at Microsoft's Ignite conference, referring to test-time compute.
DeepSeek, which says it plans to open-source DeepSeek-R1 and release an API, is a curious operation. It is backed by High-Flyer Capital Management, a Chinese quantitative hedge fund that uses AI to inform its trading decisions.
One of DeepSeek's first models, a general-purpose text- and image-analyzing model called DeepSeek-V2, forced competitors such as ByteDance, Baidu, and Alibaba to cut usage prices for some of their models, and to make others completely free.
High-Flyer builds its own server clusters for model training; the latest reportedly comprises 10,000 Nvidia A100 GPUs and cost 1 billion yuan (approximately $138 million). Founded by Liang Wenfeng, a computer science graduate, High-Flyer aims to achieve “hyperintelligent” AI through its DeepSeek organization.