By DataCenterKnowledge
Publication Date: 2026-03-03 18:49:00
Cloud computing company Akamai on Tuesday said it plans to deploy ‘thousands’ of Nvidia Blackwell GPUs, DPUs, and servers to bolster its AI capabilities at more than 4,000 locations worldwide.
The company said a decentralized AI infrastructure will help reduce latency and improve overall performance across its global operations as inference demand continues to grow. Creating a unified platform will strengthen its AI research and development, fine-tuning, and inference capabilities, the company said.
The news follows Akamai’s recent efforts to expand its AI inference and compute power. In October, the company announced Akamai Inference Cloud, which promises to bring AI inference closer to users and devices.
“While hyperscalers continue to push the boundaries of AI training, Akamai is focused on meeting the unique demands of the inference era,” Adam Karon, chief operating officer and general manager for Akamai’s cloud technology group, said in a statement. “By distributing inference-optimized compute across our global fabric, Akamai isn’t just adding capacity. We’re providing scale, at minimal latency, that is required to move AI from the laboratory to the street corner…”
Akamai said the distributed platform can reduce latency by as much as 2.5x and save businesses up to 86% on AI inference costs compared with hyperscaler infrastructure. The decentralized fabric will allow AI to interact with real…