A closer look at Nvidia’s Groq-powered LPX rack systems

By Tobias Mann
Publication Date: 2026-03-19 23:41:00

GTC DEEP DIVE At Nvidia’s GTC conference this week, CEO Jensen Huang finally addressed a $20 billion question he’s dodged for months: Why spend so much to license AI chip startup Groq’s tech and hire away its engineers rather than build it themselves?

As we’ve said before, if Nvidia wanted to build an SRAM-heavy inference accelerator, it didn’t need to buy Groq to do it. The company’s newly announced Groq 3 LPX racks, which pack 256 LP30 language processing units (LPUs) into a single system, show time-to-market was the reason Nvidia bought rather than built.

We’re told the chip is based on Groq’s second-gen LPU tech, with a handful of last-minute tweaks made just before taping out at Samsung’s fabs.

The chip doesn’t use Nvidia’s proprietary NVLink interconnect, lacks NVFP4 hardware support, and isn’t CUDA-compatible at launch.

We can therefore conclude that the $20 billion paid for Groq’s intellectual property and engineering staff was, in effect, the price of getting the chips out the door and into customers’ hands this year.

Why the rush?

One of the defining characteristics of SRAM-heavy architectures from Groq and its rival…