By TechPowerUp
Publication Date: 2025-11-18 02:53:00
Together, these platforms deliver data-center-class performance across diverse AI workloads, from large-scale model training and simulation to edge inference and desktop AI development. MSI's AI servers are purpose-built for large language models (LLMs), deep learning, and NVIDIA Omniverse workloads, offering flexible configurations with either Intel Xeon or AMD EPYC processors and support for GPUs of up to 600 W for maximum performance and scalability.
CG481-S6053 (4U, AMD Platform)
Dual AMD EPYC 9005 CPUs, eight FHFL PCIe 5.0 dual-width GPU slots, 24 DDR5 DIMM slots, eight U.2 NVMe bays, and eight 400 GbE ports driven by NVIDIA ConnectX-8 SuperNICs, enabling high-bandwidth AI clusters.
CG480-S5063 (4U, Intel Platform)
Dual Intel Xeon 6 CPUs, eight FHFL dual-width GPU slots, 32 DDR5 DIMM slots, and twenty PCIe 5.0 E1.S NVMe bays, optimized for deep learning training and fine-tuning workloads.
CG290-S3063 (2U)
Single Intel Xeon 6 CPU, 16 DDR5 DIMM slots, and four dual-width GPU slots (up to 600 W each), ideal for compact edge computing and…