Mixture of Experts Powers the Most Intelligent Frontier AI Models, Runs 10x Faster on NVIDIA Blackwell NVL72

By Shruti Koparkar
Publication Date: 2025-12-03 16:00:00

  • The top 10 most intelligent open-source models all use a mixture-of-experts architecture.
  • Kimi K2 Thinking, DeepSeek-R1, Mistral Large 3 and others run 10x faster on NVIDIA GB200 NVL72.

A look under the hood of virtually any frontier model today will reveal a mixture-of-experts (MoE) model architecture that mimics the efficiency of the human brain.

Just as the brain activates specific regions based on the task at hand, MoE models divide work among specialized “experts,” activating only the relevant experts for each token. The result is faster, more efficient token generation without a proportional increase in compute.
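The routing idea described above can be sketched in a few lines. This is a minimal toy example of top-k expert routing, assuming made-up dimensions and random weights rather than any real model's configuration:

```python
import numpy as np

# Toy mixture-of-experts routing sketch (hypothetical sizes, random weights).
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is just a small weight matrix here.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    """Route one token vector to its top-k experts and combine their outputs."""
    logits = x @ router_w                 # one router score per expert
    top = np.argsort(logits)[-top_k:]     # indices of the k highest-scoring experts
    gates = np.exp(logits[top])           # softmax over the selected experts only
    gates /= gates.sum()
    # Only top_k of n_experts run per token, so per-token compute stays flat
    # even as the total number of experts (and parameters) grows.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (8,)
```

Production MoE layers add load balancing and expert parallelism across GPUs, but the core pattern is the same: a lightweight router picks a few experts, and only those experts do work.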

The industry has already recognized this advantage. On the independent Artificial Analysis (AA) leaderboard, the top 10 most intelligent open-source models use an MoE architecture, including DeepSeek AI’s DeepSeek-R1, Moonshot AI’s Kimi K2 Thinking, OpenAI’s gpt-oss-120B and Mistral AI’s Mistral Large 3.

However, scaling MoE models in production while delivering high performance is notoriously difficult. The extreme co-design of NVIDIA GB200 NVL72 systems combines hardware and software optimizations for maximum performance and efficiency, making it practical and straightforward to scale MoE models.

The Kimi K2 Thinking MoE model — ranked as the most intelligent open-source model on the AA leaderboard — sees a 10x performance leap on the NVIDIA GB200 NVL72 rack-scale system compared with NVIDIA HGX H200. Building on the performance delivered…