Nvidia CEO Jensen Huang spent much of his CES speech discussing artificial-intelligence opportunities in self-driving cars and robotics. But roughly an hour into his presentation, investors got what they wanted.
“Vera Rubin is in full production,” Huang said Monday afternoon.
Nvidia is currently on its Blackwell Ultra line, but the company moves quickly, rolling out new models annually to keep up with ramping demand for computing power. Nvidia is expected to start shipping Vera Rubin later this year.
The “fundamental challenge” in AI is skyrocketing demand for computational power, and, by extension, Nvidia hardware, Huang said during his presentation. “The race is so intense” as customers seek to complete computing tasks faster.
“Rubin arrives at exactly the right moment, as AI computing demand for both training and inference is going through the roof,” he added in a statement.
The Vera Rubin “platform,” as Nvidia calls it, consists of six chips, including the Vera central processing unit, which Huang said delivers twice the performance per watt “of the world’s most advanced CPUs.” The platform also contains the Rubin graphics processing unit, designed for inference applications, and a Spectrum-6 Ethernet switch, networking technology that Huang said will contribute to an “AI factory.”
The Vera CPU and Rubin GPU were co-designed from the start to share data faster and with lower latency, Huang said during his CES keynote, referring to delays in data transfer.
Nvidia’s press release featured a laundry list of endorsements. Rubin “will be a rocket engine for AI” and “is the infrastructure you use” to deploy models at scale, according to Tesla CEO Elon Musk.
“The efficiency gains in the Nvidia Rubin platform represent the kind of infrastructure progress that enables longer memory, better reasoning and more reliable outputs,” Anthropic CEO Dario Amodei added.
Product updates are top of mind for investors. But Huang often likes to muse about the big-picture opportunities in AI, and he devoted a good portion of Monday’s presentation to physical AI, in which robots and other machines take on tasks autonomously.
“The ChatGPT moment for physical AI is here — when machines begin to understand, reason and act in the real world,” Huang said in a statement.
Physical AI has been a tougher challenge for industry players because of the need to simulate real-world situations with adequate data. It nonetheless represents an exciting opportunity for companies like Nvidia, which are working to automate cars and factories while also pushing humanoid robots into the mainstream.
The Nvidia CEO showcased new developments involving self-driving cars, humanlike robots and even AI agents that can help design chips.
He discussed a new lineup of open-source autonomous-driving models called Alpamayo, which he said will help bring about a world in which “every single car” is AI-powered.
“There’s no question in my mind now that this is going to be one of the largest robotics industries,” Huang said of autonomous driving.
The future of self-driving cars hinges on reasoning, because it’s “impossible” to account for “every possible scenario” that could happen on the road. But those scenarios can be “decomposed into a bunch of other smaller scenarios” that can be addressed through reasoning, he said.

