
AI startup DeepSeek made waves recently with its R1 model, which reportedly matches the performance of OpenAI’s o1 reasoning model while costing just $5.6 million to train, a fraction of the $100 million-plus OpenAI is reported to have spent on its leading models. However, Google DeepMind CEO Demis Hassabis isn’t convinced.
Speaking at the Artificial Intelligence Action Summit in Paris, Hassabis acknowledged DeepSeek’s impressive work but suggested its claims were “exaggerated and a little bit misleading.”
Hassabis pointed out that the $5.6 million figure likely covers only the final training run, not the full cost of development, which includes data collection, infrastructure, and multiple earlier training iterations. He also suggested that DeepSeek may have relied on Western AI models to refine its own, an accusation OpenAI has also raised.
“We know PRC-based companies—and others—are constantly trying to distill the models of leading US AI companies,” OpenAI told Bloomberg following DeepSeek’s launch.
DeepSeek Isn’t a Game-Changer, Says Hassabis
Despite DeepSeek’s strong performance, Google doesn’t see it as a breakthrough in AI efficiency. Hassabis argued that Google’s Gemini models are actually more efficient than DeepSeek’s in terms of cost-to-performance but simply haven’t been promoted in the same way.
“So it’s impressive, but it isn’t some new outlier on the efficiency curve,” he said.
DeepSeek’s claims have certainly sparked debate in the AI world. But as the race for more powerful and cost-effective AI models continues, the real test will be in long-term performance and innovation—rather than just bold claims.