By Rebecca Bellan
Publication Date: 2026-01-05 20:27:00
At CES 2026, Nvidia launched Alpamayo, a new family of open source AI models, simulation tools, and datasets for training physical robots and vehicles, designed to help autonomous vehicles reason through complex driving situations.
“The ChatGPT moment for physical AI is here – when machines begin to understand, reason, and act in the real world,” Nvidia CEO Jensen Huang said in a statement. “Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments, and explain their driving decisions.”
At the core of Nvidia’s new family is Alpamayo 1, a 10-billion-parameter, chain-of-thought, reasoning-based vision-language-action (VLA) model that lets an AV think more like a human, so it can solve complex edge cases, such as navigating a traffic light outage at a busy intersection, without prior experience.
“It does this by breaking down problems into steps, reasoning through every possibility, and then selecting the safest path,” Ali Kani, Nvidia’s vice president of automotive, said Monday during a press briefing.
Or as Huang put it during his keynote on Monday: “Not only does [Alpamayo] take sensor input and activate steering wheel, brakes, and acceleration, it also reasons about what action it’s about to take. It tells you what action it’s going to take, the reasons by which it came about that action. And then, of course, the trajectory.”
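The perceive, reason, act, and explain loop that Kani and Huang describe can be sketched in miniature. This is purely an illustrative toy, not Nvidia's actual model or API: every name, score, and waypoint below is a hypothetical stand-in for what a chain-of-thought VLA model would produce.

```python
from dataclasses import dataclass

# Hypothetical sketch of the loop Huang describes: take sensor input,
# reason through candidate maneuvers, pick the safest, and emit the
# chosen action, the reasoning trace, and a trajectory.
# None of these names or values come from Nvidia's Alpamayo release.

@dataclass
class DrivingDecision:
    action: str          # chosen maneuver, e.g. "slow_and_yield"
    reasoning: list      # ordered chain-of-thought steps, as text
    trajectory: list     # planned (x, y) waypoints in meters

def decide(sensor_input: dict) -> DrivingDecision:
    """Toy stand-in for a reasoning VLA model: enumerate candidate
    maneuvers, score each for safety, select the safest, and keep
    the reasoning trace alongside the chosen action."""
    reasoning = []
    # Hypothetical candidate maneuvers with made-up safety scores.
    candidates = {
        "proceed": 0.2,          # risky: cross traffic may not stop
        "slow_and_yield": 0.9,   # treat the dark signal as an all-way stop
        "hard_stop": 0.6,        # safe, but may surprise traffic behind
    }
    if sensor_input.get("traffic_light") == "out":
        reasoning.append("Signal is dark: treat intersection as uncontrolled.")
    for action, safety in candidates.items():
        reasoning.append(f"Considered '{action}' (safety score {safety}).")
    best = max(candidates, key=candidates.get)
    reasoning.append(f"Selected '{best}' as the safest option.")
    # Toy trajectory: creep slowly toward the stop line.
    trajectory = [(0.0, 0.0), (0.0, 2.0), (0.0, 3.5)]
    return DrivingDecision(best, reasoning, trajectory)
```

A caller would pass in the current scene (here, a dark traffic light) and get back not just a command but a human-readable explanation of how the model arrived at it, which is the "explain its driving decisions" property the announcement emphasizes.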
Alpamayo…