
DeepSeek heralds a shift from costly, general-purpose large language models toward specialized, modular AI systems that deliver better performance and adaptability.
The release of DeepSeek marks yet another moment where the AI industry is forced to reckon with an undeniable truth: constraint breeds elegance. For years, the dominant narrative in AI has been that bigger is better. Large Language Models (LLMs) trained on everything, consuming vast amounts of compute and energy, have driven the field forward. But the future won’t belong to these behemoths. Instead, the next phase of AI will mirror the evolution of software engineering—toward efficiency, modularity, and purpose-built systems.
The Evolution of Computing: A Roadmap for AI
Looking at how software development has evolved, it’s clear that the trajectory of AI is following a familiar path. We began with monolithic procedural programming, transitioned into object-oriented and functional paradigms, then moved to APIs, and finally, to microservices. The common theme? Breaking systems down into smaller, more efficient, and more maintainable parts. AI is on the same path. Today’s generalist LLMs are akin to monolithic applications—massive, powerful, but unwieldy and inefficient. The next step is specialization.
> Infosys chairman Nandan Nilekani predicts that companies will increasingly build their own specialized artificial intelligence (AI) models to enhance efficiency and productivity, rather than relying on large language models (LLMs) like OpenAI's ChatGPT, given their high costs and associated risks. <source>
> Similarly, Canadian AI startup Cohere has announced a strategic shift toward developing tailored AI models for enterprise users rather than focusing on large, general-purpose foundation models. The decision reflects customer feedback requesting models designed for particular use cases rather than larger, generalized ones. <source>
> Arthur Mensch, CEO of Mistral AI, emphasizes the importance of small, high-performance AI models capable of running on local devices. His focus is on democratizing AI by making it more accessible and allowing developers to innovate with flexible, open-source models. <source>
Why Smaller, Purpose-Built Models Win
DeepSeek’s architecture highlights why purpose-built models will define AI’s next wave. Instead of assuming that every AI system needs access to all the world’s data all the time, the industry is shifting toward targeted, domain-specific models. The advantages are clear:
- Efficiency: Smaller models require less computational power, making them faster and cheaper to run.
- Accuracy: Training on focused datasets reduces noise and hallucinations, improving reliability.
- Security & Compliance: Controlling what data a model trains on helps meet regulatory requirements and enhances privacy.
- Scalability: Like microservices, small models can be combined dynamically to solve complex problems without unnecessary bloat.
AI’s “Day 2” Act: Specialization & Modularity
The first act of AI was brute force—scraping everything, training on everything, and hoping generalization would win. And it did, to an extent. But the second act, the “Day 2” of AI, will be about refining these systems for real-world deployment. Companies will no longer tolerate expensive, power-hungry generalist models when smaller, more efficient AI agents can perform the same tasks with better precision and lower costs.
This is where AI starts to look more like microservices than monoliths. We’ll have sales AI, finance AI, legal AI—each optimized for its domain, interoperable through APIs but not burdened by unnecessary complexity. The result? AI that is faster, cheaper, and better aligned with business needs.
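The microservices analogy above can be sketched in code. Below is a minimal, hypothetical example (the `DomainRouter` class and the stand-in model functions are illustrative, not any real product's API): each small, domain-specific model exposes the same narrow interface, and a thin router dispatches requests to the right one, much as an API gateway routes calls to microservices.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A common, narrow contract shared by every specialized model,
# the way microservices share an API convention.
ModelFn = Callable[[str], str]

@dataclass
class DomainRouter:
    """Dispatch each request to a small, purpose-built model."""
    models: Dict[str, ModelFn]

    def route(self, domain: str, prompt: str) -> str:
        model = self.models.get(domain)
        if model is None:
            raise KeyError(f"no specialized model for domain {domain!r}")
        return model(prompt)

# Stand-ins for small domain models; in practice these would be
# fine-tuned checkpoints served behind internal endpoints.
router = DomainRouter(models={
    "legal": lambda p: f"[legal-model] {p}",
    "finance": lambda p: f"[finance-model] {p}",
})

print(router.route("finance", "Summarize Q3 cash flow."))
```

The design choice mirrors the article's point: complexity lives in each specialized model, while the routing layer stays thin and interoperable, so new domain models can be added without touching the others.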
The Writing on the Wall
DeepSeek is just the beginning. Open-source AI projects, edge computing, and regulatory pressures are accelerating the shift toward purpose-built models. Enterprises are already demanding AI solutions that prioritize efficiency over size, and startups are seizing the opportunity to build specialized AI agents that deliver real value without the overhead of massive generalist models.
As we enter AI’s next phase, the lesson from software engineering is clear: scale matters, but efficiency wins. The AI of tomorrow won’t be measured by the number of parameters but by its precision, adaptability, and ability to operate within constraints. In the end, the most powerful AI won’t be the one that knows everything—it will be the one that knows just enough to do its job better than anything else.