AMD Introduces Instinct MI300 AI Chips to Compete Against Nvidia with Support from Microsoft, Dell, and HPE

AMD has introduced its Instinct MI300 chips with backing from partners including Lenovo, Supermicro, and Oracle, posing a significant challenge to Nvidia’s dominance in AI computing. AMD claims the Instinct MI300X data center GPU outperforms Nvidia’s flagship H100 in memory capacity, memory bandwidth, and key AI inference metrics, translating into cost savings and improved capabilities for customers.

At an event in San Jose, California, AMD unveiled the Instinct MI300X GPU and the Instinct MI300A data center APU, taking direct aim at Nvidia’s stronghold in AI computing. The MI300X will be featured in servers from Dell, HPE, Lenovo, and Supermicro, with other OEMs planning to release designs in the near future. Microsoft Azure and Oracle Cloud Infrastructure will also offer the MI300X, and other cloud service providers, including Aligned, Arkon Energy, and Cirrascale, plan to support it.

The Instinct MI300X is built on AMD’s CDNA 3 architecture and offers higher memory capacity and bandwidth than Nvidia’s H100. The platform supports PCIe Gen 5 and includes optimizations for large language models and popular AI frameworks. AMD also introduced ROCm 6, the latest version of its GPU programming platform, positioned as an open alternative to Nvidia’s proprietary CUDA.

The MI300X platform, which combines eight MI300X GPUs, delivers 10.4 petaflops of peak performance and 1.5 TB of memory. Compared with Nvidia’s H100 HGX platform, it provides greater memory capacity and compute throughput; AMD emphasized that the larger memory allows more models, and larger models, to run efficiently on a single system.

The Instinct MI300A, billed as the world’s first data center APU for HPC and AI, combines x86-based Zen 4 CPU cores with GPU cores built on the CDNA 3 architecture in a single package. AMD highlighted the chip’s power efficiency, unified memory shared between CPU and GPU, and programmable GPU platform, claiming better performance than the H100 across both HPC and AI workloads.

AMD’s MI300 chips present a formidable challenge to Nvidia’s AI computing dominance, pairing competitive performance on large language models and other AI workloads with support from leading OEMs and cloud service providers. The MI300X and MI300A are positioned to disrupt the AI computing space as cost-effective options for data centers and enterprise applications.

Article Source
https://www.crn.com/news/components-peripherals/amd-launches-instinct-mi300-ai-chips-to-challenge-nvidia-with-backing-from-microsoft-dell-and-hpe