By Tobias Mann
Publication Date: 2026-02-10 08:30:00
As AI training and inference clusters grow larger, they require bigger, higher-bandwidth networks to feed them. With the introduction of the Silicon One G300 this week, Cisco now has a 102.4 Tbps monster to challenge Broadcom’s Tomahawk 6 and Nvidia’s Spectrum-X Ethernet Photonics.
Much like those chips, the G300 packs 512 ultra-fast 200 Gbps serializers/deserializers (SerDes). That massive radix (in plain terms, loads of ports) means Cisco can now support deployments of up to 128,000 GPUs with just 750 switches, where 2,500 were needed previously. Alternatively, those SerDes lanes can be aggregated into ports running at up to 1.6 Tbps.
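For a rough sense of where those figures come from, here’s a back-of-the-envelope sketch in Python. The assumptions (a two-tier leaf-spine fabric, one 200 Gbps link per GPU, and each switch built around a single G300 ASIC) are illustrative rather than Cisco’s published topology, but they arrive at the same 750-switch total.

```python
# Hypothetical back-of-the-envelope math for the figures above.
# Assumptions (ours, not Cisco's): a two-tier leaf-spine fabric, one
# 200 Gbps link per GPU, and each switch built on a single G300 ASIC.

SERDES_PER_CHIP = 512   # 200 Gbps lanes per G300
LANE_GBPS = 200
LANES_PER_PORT = 8      # 8 x 200 Gbps = 1.6 Tbps per port

aggregate_tbps = SERDES_PER_CHIP * LANE_GBPS / 1000
print(f"Aggregate bandwidth: {aggregate_tbps} Tbps")                 # 102.4
print(f"1.6T ports per chip: {SERDES_PER_CHIP // LANES_PER_PORT}")   # 64

# Two-tier topology with 200G ports: each leaf splits its 512 lanes
# evenly between GPU-facing downlinks and spine-facing uplinks.
GPUS = 128_000
downlinks_per_leaf = SERDES_PER_CHIP // 2                   # 256
leaves = GPUS // downlinks_per_leaf                         # 500
spines = (leaves * downlinks_per_leaf) // SERDES_PER_CHIP   # 250
print(f"Leaves: {leaves}, spines: {spines}, total: {leaves + spines}")  # 750
```

Halve the radix (as with the previous 51.2 Tbps generation) and the same cluster needs far more switches and, typically, an extra network tier, which is where the older 2,500-switch figure comes from.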
None of this is unique to Cisco, however. That’s just how bandwidth scales. The same figures apply to Broadcom’s and Nvidia’s 102.4 Tbps silicon just as they do to anyone else’s.
Managing AI congestion
According to Cisco fellow and SVP Rakesh Chopra, what really sets the G300 apart from the competition is its collective networking engine, which…

