By Alaa Mohasseb
Publication Date: 2025-12-02 13:19:00
The US company Nvidia has shaped the foundations of modern artificial intelligence for many years. Its graphics processing units (GPUs) are specialized computer chips originally designed for graphics and animation, but they are also well suited to the repetitive calculations that AI systems require.
These chips have thus fueled the rapid rise of large language models – the technology behind AI chatbots – and become the engine behind almost every major AI breakthrough.
This hardware has remained quietly in the background while most of the attention focused on algorithms and data. Google’s decision to train Gemini on its own chips, called Tensor Processing Units (TPUs), changes this picture. It invites the industry to look directly at the machines behind the models and to rethink assumptions that have long been considered fixed.
This moment is important because the scale of AI models has begun to reveal the limits of general purpose…