By Timothy Prickett Morgan
Publication Date: 2026-02-10 18:38:00
The modern AI datacenter – really, a data galaxy at this point because AI processing needs have broken well beyond the bounds of a single datacenter, or even multiple datacenters in a region in a few extreme cases – has two pinch points in the network. There is the datacenter interconnect, which creates a router backbone to lash multiple datacenters together into a single working compute complex, and then there is the back-end network, which creates a single memory domain across dozens to someday hundreds or thousands of GPUs or XPUs as the most useful granularity for mixture of experts training and inference.
Last fall, Cisco took care of the datacenter interconnect with its “Dark Pyramid” P200 router chip, part of the ever-embiggening and enfattening Silicon One product line, which Cisco uses in its own switches and routers and which hyperscalers and cloud builders also use in their custom gear.
And this week, at the Cisco Live conference in Amsterdam, Cisco is…