Unprompted: The Inefficiency Behind DeepSeek

Unprompted is an opinion piece I publish every now and then on Pivot 5.

Summertime in Australia means early mornings for me, and this past weekend, that worked out well. Being close to China’s time zone, it was easy to follow along as DeepSeek quietly disrupted the AI world.

What started as a niche discussion in AI circles quickly turned into a full-blown industry event. Benchmarks, breakdowns, speculation—DeepSeek had officially arrived. I was able to download it and play around before access got cut off due to surging demand. 

While much of the news coverage focused on its performance, cost, and what it means for open models, something else stood out to me. 

DeepSeek is being framed as an efficiency story. But the truth is, DeepSeek is an inefficiency story. And that’s what makes it so interesting.

DeepSeek’s “Efficiency” is Built on a Mountain of Inefficiency 

One of the big takeaways from DeepSeek’s release was its reported cost-efficiency—some estimates suggest it was trained for under $6 million in compute. That number sounds almost too good to be true. And in many ways, it is.

The cost of compute is just one part of the equation. DeepSeek’s real investment came in the form of time, talent, and iteration. Hundreds of researchers spent over a year designing, refining, and discarding model architectures in search of an approach that would work. The cost of sourcing and curating training data—another major hidden expense—is rarely accounted for in headline numbers. Add it all up, and the real cost of DeepSeek is likely north of $100 million.

The reality is that AI breakthroughs don’t happen through a single clean, efficient process. They happen through relentless experimentation, failed attempts, and dead-end ideas that quietly get discarded before something promising emerges.

And DeepSeek is no exception.

The Inefficiency of Building Something New

Every major technological breakthrough shares this same story. The first versions are expensive, slow, and wasteful by design. Tesla burned through billions before figuring out how to manufacture EVs at scale. OpenAI trained multiple versions of GPT before reaching the levels seen today. Early iPhones were built with inefficient supply chains and costly R&D before Apple found its production rhythm.

DeepSeek spent over a year figuring out how to train efficiently—which means a year of inefficiency came first. That’s the price of innovation.

This is why big companies struggle to innovate. In the pursuit of predictability, structure, and quarterly results, they squeeze out the very thing that makes innovation possible: the freedom to be inefficient. Startups, on the other hand, thrive in inefficiency. They experiment, pivot, and fail forward. 

Big companies have to buy or rent innovation at a premium, after someone else has paid the price of inefficiency.

AI Adoption is Following the Same Inefficient Path

And it’s not just AI companies—it’s all of us.

Businesses adopting AI will do so inefficiently. They’ll run pilots, betas, and experiments. Half of them won’t work. The other half will barely work.

Consumers are also learning AI inefficiently.

I use ChatGPT constantly—it’s my most-used app. Easily 20-30 queries a day, 7 days a week. This year, I set the intention to expand my AI friends and started regularly testing Claude, Gemini, Perplexity, and now, DeepSeek. And I’m inefficient with all of my new AI friends. It takes multiple prompts, rephrased questions, and iterative back-and-forths to get what I want.

And that’s fine. Because inefficiency is the process. It’s how we learn.

Innovation Demands Inefficiency

Unless one has been through multiple innovation cycles, over years or decades, it’s hard to fully appreciate how much inefficiency is involved in creating something new. The stories that get told are often neat and linear, but the reality is messy—filled with wrong turns, abandoned ideas, and costs that never get accounted for in the final success story.

DeepSeek wasn’t efficient. Neither was OpenAI. Neither was Tesla, Apple, or any of the companies that have meaningfully changed an industry. And that’s a good thing.

The path to efficiency runs through the forest of inefficiency. The experiments that don’t work, the dead ends that seem pointless, the failed prototypes—these are not wasted efforts. They are the necessary cost of progress.

Inefficiency is a feature of innovation, and it should be celebrated. Because in the end, it’s not just about the outcome. It’s about the journey—the unpredictable, frustrating, and often inefficient process that leads to something that, eventually, hopefully, works.

And in that sense, the most efficient thing to do is to be inefficient.

Kunal