NVIDIA CEO's GTC Keynote Highlights: Trillion-Dollar Revenue, LPU Architecture, Space Chips, and One-Click AI Agents

Deep News · 07:22

NVIDIA CEO Jensen Huang projected during his GTC keynote that the company's flagship computing chips could generate $1 trillion in revenue by 2027.

Huang also showcased the Vera Rubin AI factory platform, the LPU inference architecture, CPO switches, and a space data center module, while introducing NemoClaw as an intelligent agent infrastructure. The presentation aimed to outline a full-stack AI ecosystem spanning from edge devices and data centers to orbital computing. Delivered early Tuesday Beijing time, the two-and-a-half-hour speech covered a wide range of hardware and software concepts shaping the AI industry.

For investors, the event proved highly rewarding—nearly all anticipated market themes were addressed, with the added surprise of Huang's bold financial forecast for computing chip revenue.

Key Takeaway: $1 Trillion Revenue

Huang confirmed that NVIDIA’s flagship chips are expected to help the company achieve $1 trillion in revenue by 2027.

How much weight this projection deserves is up to each investor. Previously, Huang had said data center equipment would bring in $500 billion in sales by the end of 2026; the latest forecast extends that timeline by one year and doubles the cumulative amount.

This announcement marked one of the most exciting moments for shareholders during the speech. NVIDIA's stock rose as much as 4% intraday before settling with a 1.6% gain at the close.

GPU (✗) AI Factory Platform (✓)

NVIDIA emphasized that Vera Rubin is not a single chip but a complete AI supercomputing platform composed of 7 types of chips and 5 rack systems.

Beyond the well-known Rubin GPU and Vera CPU combination (the Vera Rubin NVL72 GPU rack), the event introduced two additional rack products as notable new entrants.

The Vera CPU rack integrates 256 Vera CPUs per unit, delivering twice the computational efficiency and a 50% increase in operating speed compared to traditional CPUs.

The Groq 3 LPX rack incorporates 256 LPU processors, offering 128GB of on-chip SRAM and 640 TB/s of expanded bandwidth. When combined with the Vera Rubin platform, the LPX rack improves inference throughput per watt by 35 times. Huang noted that LPU chips will be manufactured by Samsung, with racks expected to begin shipping in the second half of this year.
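The article does not say whether the 128 GB of SRAM and 640 TB/s of bandwidth are per processor or for the rack as a whole. A quick back-of-envelope sketch, under the assumption that the quoted figures are rack-level aggregates, shows what the per-chip numbers would then be:

```python
# Back-of-envelope arithmetic only. ASSUMPTION: the quoted 128 GB SRAM and
# 640 TB/s figures describe the full 256-processor rack in aggregate; the
# keynote summary does not specify per-chip vs. per-rack.
LPU_COUNT = 256
RACK_SRAM_GB = 128
RACK_BW_TBPS = 640

sram_per_lpu_gb = RACK_SRAM_GB / LPU_COUNT   # 0.5 GB of SRAM per LPU
bw_per_lpu_tbps = RACK_BW_TBPS / LPU_COUNT   # 2.5 TB/s of bandwidth per LPU

print(sram_per_lpu_gb, bw_per_lpu_tbps)
```

If the figures were instead per chip, the rack aggregates would be 256 times larger, which is why the ambiguity matters when comparing against GPU racks.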

All three rack systems utilize liquid cooling architecture.

The highly anticipated Spectrum-6 SPX switch, as expected, adopted co-packaged optics (CPO) technology, delivering 5 times higher optical power efficiency and 10 times greater network reliability.

Looking ahead, the Rubin Ultra chip will use a vertical insertion arrangement in the Kyber rack, enabling 144 GPUs to connect within a single NVLink domain. The next-generation Feynman architecture GPU will incorporate stacked chips and custom HBM technology.

Space Data Chip

NVIDIA also introduced the Space-1 Vera Rubin module, designed to deploy data center-level AI computing capabilities on satellites and orbital data centers (ODCs). The company highlighted its applications for in-orbit inference, real-time geospatial intelligence, and autonomous space missions.

NVIDIA emphasized that its product portfolio—including Jetson Orin, IGX Thor, RTX PRO 6000 Blackwell GPU, and the upcoming Space-1 module—forms a comprehensive computing architecture spanning orbital edge computing, ground-based AI data centers, and cloud analytics.

One-Click “AI Shrimp Farming”

By venturing into what it metaphorically calls the “shrimp farming industry,” NVIDIA is positioning AI agent infrastructure as a new growth area.

NemoClaw serves as the infrastructure layer for the OpenClaw intelligent agent platform, allowing AI agents to be deployed with a single command. It integrates Nemotron models and the OpenShell runtime environment while enhancing security, privacy, and sandbox capabilities. The goal is not only simplified deployment but also “secure shrimp farming.”

NVIDIA highlighted that NemoClaw can run on devices ranging from RTX PCs and RTX PRO workstations to DGX Station and DGX Spark systems, underscoring that always-on AI assistants require specialized computing hardware.

The company also announced an expansion of its “open model ecosystem,” covering three major AI directions: agentic AI, physical AI, and healthcare AI.

DLSS 5: A “GPT Moment” for Graphics

At GTC, NVIDIA unveiled DLSS 5, calling it the most significant breakthrough in computer graphics since the introduction of real-time ray tracing in 2018.

Huang stated, “Twenty-five years after NVIDIA invented programmable shaders, we are once again redefining computer graphics. DLSS 5 is the ‘GPT moment’ for the graphics field.”

The new DLSS 5 system combines traditional 3D graphics data with a generative AI model that predicts and completes parts of an image. This allows NVIDIA GPUs to generate highly detailed scenes and realistic characters without rendering every element from scratch.
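The idea of rendering only part of a frame and letting a model complete the rest can be illustrated with a toy sketch. This is a conceptual illustration only, not NVIDIA's algorithm: the "completion model" here is a trivial neighbor average standing in for a generative network conditioned on 3D scene data.

```python
# Conceptual sketch of hybrid rendering (NOT NVIDIA's actual DLSS 5 pipeline).
# Step 1 renders only a checkerboard subset of pixels with the "expensive"
# traditional renderer; step 2 fills the gaps with a placeholder model.

def render_sample(x, y):
    """Stand-in for expensive per-pixel traditional shading."""
    return (x * 7 + y * 13) % 256  # arbitrary deterministic "shade"

def hybrid_render(width, height):
    frame = [[None] * width for _ in range(height)]
    # Step 1: traditionally render only every other pixel (checkerboard).
    for y in range(height):
        for x in range(width):
            if (x + y) % 2 == 0:
                frame[y][x] = render_sample(x, y)
    # Step 2: "complete" the missing pixels. A real system would use a
    # generative model; here we average rendered horizontal neighbors.
    for y in range(height):
        for x in range(width):
            if frame[y][x] is None:
                neighbors = [frame[y][nx] for nx in (x - 1, x + 1)
                             if 0 <= nx < width and frame[y][nx] is not None]
                frame[y][x] = sum(neighbors) // len(neighbors) if neighbors else 0
    return frame

frame = hybrid_render(8, 8)
```

Half the pixels never go through the traditional renderer, which is the claimed source of the efficiency gain: the model fills in detail instead of the shader computing it from scratch.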

