DeepSeek-V3, launched in December 2024, only added to DeepSeek’s notoriety.
According to DeepSeek’s internal benchmark testing, DeepSeek V3 outperforms both downloadable, openly available models like Meta’s Llama and “closed” models that can only be accessed through an API, like OpenAI’s GPT-4o.
Equally impressive is DeepSeek’s R1 “reasoning” model. DeepSeek claims that R1, released in January, performs as well as OpenAI’s o1 model on key benchmarks.
Being a reasoning model, R1 effectively fact-checks itself, which helps it avoid some of the pitfalls that normally trip up models. Reasoning models take a little longer — usually seconds to minutes longer — to arrive at solutions than a typical non-reasoning model. The upside is that they tend to be more reliable in domains such as physics, math, and other sciences.
There is a downside to R1, DeepSeek V3, and DeepSeek’s other models, however. Being Chinese-developed AI, they’re subject to benchmarking by China’s internet regulator to ensure their responses “embody core socialist values.” In DeepSeek’s chatbot app, for example, R1 won’t answer questions about Tiananmen Square or Taiwan’s autonomy.
In March, DeepSeek surpassed 16.5 million visits. “[F]or March, DeepSeek is in second place, despite seeing traffic drop 25% from where it was in February, based on daily visits,” David Carr, editor at Similarweb, told TechCrunch. That still pales in comparison to ChatGPT, which surged past 500 million weekly active users in March.
A disruptive approach
If DeepSeek has a business model, it’s not clear what that model is, exactly. The company prices its products and services well below market value — and gives others away for free. It’s also not taking investor money, despite a ton of VC interest.
The way DeepSeek tells it, efficiency breakthroughs have enabled it to maintain extreme cost competitiveness. Some experts dispute the figures the company has supplied, however.