[Digital Today reporter Chi-gyu Hwang] AI computing company Nvidia announced on Tuesday that it had unveiled the Nvidia Alpamayo product suite at CES 2026, the IT and consumer electronics show in Las Vegas, to accelerate the development of next-generation reasoning-based autonomous vehicles (AVs).
Alpamayo consists of open AI models, simulation tools and datasets. According to the company, autonomous vehicles must operate safely across a wide range of driving conditions, and rare, complex situations, commonly called long-tail events, remain among the hardest for autonomous driving systems to handle safely. Existing autonomous vehicle architectures have processed perception and planning separately, an approach that scales poorly to new or unusual situations. Recent advances in end-to-end learning have delivered significant progress, but fully handling extreme long-tail cases requires models that can safely reason about cause and effect, especially in situations outside the range of their training data.
The Alpamayo suite provides vision-language-action (VLA) models that apply step-by-step, human-like reasoning to autonomous driving decisions.
Nvidia CEO Jensen Huang said, "The ChatGPT moment for physical AI has arrived. Machines have begun to directly understand the real world and to reason and act on their own, and robotaxis will be among the first to benefit. With Alpamayo, autonomous vehicles will gain reasoning capabilities, properly understand very rare situations, drive safely even in complex environments, and be able to explain the driving decisions they make. This will be the foundation for safe and scalable autonomous driving technology."