A Chinese AI lab has created what may be one of the most powerful “open” AI models ever.
The model, called DeepSeek V3, was developed by AI company DeepSeek and released on Wednesday under a permissive license that allows developers to download and modify it for most applications, including commercial ones.
DeepSeek V3 can handle a range of text-based workloads and tasks, such as coding, translation, and writing essays and emails from a descriptive prompt.
According to DeepSeek's internal benchmark tests, DeepSeek V3 outperforms both “openly” available downloadable models and “closed” AI models that are only accessible via API. In a subset of coding competitions hosted on Codeforces, a platform for programming contests, DeepSeek V3 outperforms models such as Meta's Llama 3.1 405B, OpenAI's GPT-4o, and Alibaba's Qwen 2.5 72B.
DeepSeek V3 also outperforms the competition on Aider Polyglot, a test designed to measure things like whether a model can successfully write new code that integrates with existing code.
DeepSeek-V3!
60 tokens/second (3x faster than V2!)
API compatibility remains the same
Completely open source models and papers
671B MoE Parameters
37B activated parameters
Trained with 14.8T high quality tokens
Outperforms Llama 3.1 405b in almost all benchmarks https://t.co/OiHu17hBSI pic.twitter.com/jVwJU07dqf
— Chubby♨️ (@kimmonismus) December 26, 2024
DeepSeek claims that DeepSeek V3 was trained on a dataset of 14.8 trillion tokens. In data science, tokens are used to represent bits of raw data. 1 million tokens is equivalent to approximately 750,000 words.
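For a rough sense of what a token is in practice, here is a minimal sketch. It uses OpenAI's open-source tiktoken tokenizer purely as an illustration; DeepSeek V3 uses its own tokenizer, so exact counts will differ, but the words-to-tokens relationship is similar.

```python
# Illustration only: counts tokens with OpenAI's tiktoken library,
# not DeepSeek's own tokenizer, so the numbers are approximate.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "DeepSeek claims that DeepSeek V3 was trained on a dataset of 14.8 trillion tokens."
tokens = enc.encode(text)

print(f"{len(text.split())} words -> {len(tokens)} tokens")

# Rule of thumb from the article: roughly 0.75 words per token, so a
# 14.8T-token corpus corresponds to on the order of 11 trillion words.
print(f"14.8T tokens * 0.75 words/token ~= {14.8e12 * 0.75:.2e} words")
```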
It's not just the training set that's huge. DeepSeek V3 itself is enormous, at 685 billion parameters. (A parameter is an internal variable a model uses to make predictions or decisions.) That's approximately 1.6 times the size of Llama 3.1 405B, which has 405 billion parameters.
DeepSeek (a Chinese AI company) is making it look easy today with the open-weight release of a frontier-grade LLM trained on a joke of a budget (2,048 GPUs for 2 months, $6 million).
For reference, this level of capability is believed to require a cluster of closer to 16,000 GPUs. https://t.co/EW7q2pQ94B
— Andrei Karpathy (@karpathy) December 26, 2024
The number of parameters often (but not always) correlates with skill. Models with more parameters tend to perform better than models with fewer parameters. However, larger models also require more powerful hardware to run. The unoptimized version of DeepSeek V3 requires a set of high-end GPUs to answer questions at reasonable speeds.
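To see why, here's a back-of-envelope sketch (my own arithmetic, not a figure from DeepSeek) of how much GPU memory the weights alone would occupy, assuming 16-bit weights and 80 GB accelerators.

```python
# Rough, assumption-laden estimate: memory needed just to store DeepSeek V3's
# weights, ignoring activations, KV cache, and serving overhead.

TOTAL_PARAMS = 685e9      # total parameter count reported for the release
BYTES_PER_PARAM = 2       # assumes BF16/FP16 weights; FP8 would halve this
GPU_MEMORY_GB = 80        # assumes an 80 GB accelerator per card

weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
min_gpus = -(-weights_gb // GPU_MEMORY_GB)  # ceiling division

print(f"Weights alone: ~{weights_gb:,.0f} GB")
print(f"GPUs needed just to hold the weights: at least {int(min_gpus)}")
```

Note that even though the mixture-of-experts design activates only about 37 billion parameters per token, all of the weights still have to be resident in memory to serve requests, which is why an unoptimized deployment needs a bank of GPUs rather than a single card.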
DeepSeek V3 isn't the most practical model, but it is an achievement in several respects. DeepSeek was able to train the model in about two months using a data center of Nvidia H800 GPUs, chips that the U.S. Department of Commerce recently barred Chinese companies from procuring. The company also claims it spent only $5.5 million to train DeepSeek V3, a fraction of the cost of developing models such as OpenAI's GPT-4.
The downside is that the model's answers on politically sensitive topics are filtered. Ask DeepSeek V3 about Tiananmen Square, for example, and it won't give you an answer.
DeepSeek is a Chinese company and is subject to benchmarking by China's internet regulator to ensure that its model responses “embody socialist core values.” Many of China's AI systems refuse to respond to topics that could anger regulators, such as speculation about Xi Jinping's government.
DeepSeek is an interesting organization. It recently announced DeepSeek-R1, an answer to OpenAI's o1 “reasoning” model, and it is backed by High-Flyer Capital Management, a Chinese quantitative hedge fund that uses AI to inform its trading decisions.
DeepSeek's model has forced competitors such as ByteDance, Baidu, and Alibaba to lower the price of using some models and make others completely free.
High-Flyer has built its own server clusters for model training; the latest is said to be powered by 10,000 Nvidia A100 GPUs and to have cost 1 billion yuan (approximately $138 million). Founded by Liang Wenfeng, a computer science graduate, High-Flyer aims to achieve “superintelligent” AI through its DeepSeek organization.
In an interview earlier this year, Liang described open source as a “cultural act” and characterized closed-source AI like OpenAI's as a “temporary” moat. “Even OpenAI's closed-source approach hasn't stopped others from catching up,” he noted.
Indeed.