By Supreeth Koundinya
Publication Date: 2026-01-08 04:44:00
Companies are shelling out billions of dollars in the race to buy GPUs and scale data centres, but that hyperscale build-out hinges on a fundamental assumption: powerful AI models need centralised compute power to train and run.
Now, some are challenging that very assumption, hinting at the possibility that AI models may not need data centres at all.
Aravind Srinivas, CEO and co-founder of Perplexity, in a recent podcast with Prakhar Gupta, argued that the biggest threat to data centres is local intelligence, where applications do not depend on compute hosted remotely. In this model, compute shifts closer to the user, reducing reliance on centralised data centre infrastructure.
Gavin Baker, CIO and managing partner at Atreides Capital, echoed this view in a recent podcast. He imagined a future in which smartphones house more memory modules to accommodate pruned versions of frontier AI models, allowing users to access them without relying on the cloud or high-end devices.
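To make the scale of that idea concrete, here is a rough back-of-the-envelope sketch (in Python, not from the podcast) of how much memory a pruned and quantised model would need on a phone. The 70B and 8B parameter counts and the bit-widths are illustrative assumptions, not figures Baker cited.

```python
# Back-of-the-envelope estimate of on-device weight memory for a pruned,
# quantised model. All model sizes and bit-widths below are hypothetical
# examples, not numbers from the article.

def model_memory_gb(num_params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a given model size and precision."""
    bytes_per_weight = bits_per_weight / 8
    return num_params_billion * 1e9 * bytes_per_weight / 1e9  # decimal GB

if __name__ == "__main__":
    # A hypothetical 70B frontier model at 16-bit precision vs.
    # an 8B pruned variant quantised to 4 bits per weight.
    print(f"70B @ 16-bit: ~{model_memory_gb(70, 16):.0f} GB")  # far beyond today's phones
    print(f"8B  @ 4-bit:  ~{model_memory_gb(8, 4):.0f} GB")    # plausible with more phone RAM
```

The arithmetic illustrates why extra memory modules matter: a full-precision frontier model would need on the order of 100+ GB, while an aggressively pruned and quantised variant could, in principle, fit within a few gigabytes of phone memory.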
Baker pointed to Apple's strategy, which focuses heavily on on-device, privacy-first AI rather than relying directly on powerful cloud-based models. That approach has improved privacy guarantees but limits large-scale data collection, contributing to Apple's lag in the broader AI ecosystem.
Small AI Models Accelerate the Shift
Efficient and increasingly capable small language models strengthen the on-device case. Google continues to build large frontier systems such as Gemini 3…