AI Bull Market Narrative Gains Momentum as NVIDIA's Jensen Huang Unveils Trillion-Dollar AI Vision

Stock News · 12:59

NVIDIA CEO Jensen Huang presented an unprecedented AI computing revenue blueprint during the GTC event, highlighting the company's leadership in AI infrastructure. He told global investors that, driven by robust demand for Blackwell-architecture GPUs and the upcoming mass production of the Vera Rubin AI computing system, NVIDIA's revenue from AI chips could reach at least $1 trillion by 2027. This projection far exceeds the $500 billion-by-2026 target outlined at earlier GTC events.

Analysts at firms such as Goldman Sachs, Wedbush, and Morgan Stanley, who maintain a positive outlook on NVIDIA's stock, believe stronger-than-expected revenue growth could push the company's market capitalization back above the $5 trillion mark, a level it first reached last October, and carry the stock to new all-time highs. The trillion-dollar AI computing vision is expected to reinforce the "AI bull market narrative" as a central theme in capital markets, potentially driving NVIDIA's shares to record levels and lifting the global AI computing supply chain.

According to Wall Street analysts' average price target, NVIDIA's market capitalization could surpass $6 trillion within the next 12 months, with the most optimistic projections reaching as high as $8.8 trillion. As model scale, inference chains, and multimodal/agentic AI workloads drive exponential growth in computing power consumption, major technology companies are increasingly directing capital expenditures toward AI infrastructure. Global investors continue to view the "AI bull market narrative"—centered on NVIDIA, Google's TPU clusters, and AMD's product iterations—as one of the most reliable investment themes in global equity markets.

This trend also highlights growing interest in adjacent sectors closely tied to AI training and inference, such as power supply, liquid-cooling systems, and the optical interconnect supply chain. Despite geopolitical uncertainties in the Middle East, industry leaders such as NVIDIA, AMD, Broadcom, TSMC, and Micron remain at the forefront of investor attention.

At the annual GTC developer conference in San Jose, California, CEO Jensen Huang introduced a new central processing unit designed for data center servers and an LPU AI inference infrastructure system based on technology licensed in December from AI chip startup Groq for $17 billion. These initiatives are part of Huang's strategy to strengthen NVIDIA's position in inference computing, the processing of user queries for both enterprise and consumer applications. In this segment, NVIDIA's AI GPU ecosystem faces mounting competition from central processors and custom AI ASICs developed by companies such as Google.

While NVIDIA has long dominated AI model training, the focus is now shifting toward inference, which emphasizes cost per token, latency, and energy efficiency as AI technologies scale. "The era of AI inference has arrived," Huang stated during his keynote address. "And inference demand continues to rise," he added. Dressed in his signature black leather jacket, Huang addressed an audience of over 18,000 people in a hockey arena, underscoring the event's significance as a premier global AI technology showcase.

Huang's presentation emphasized NVIDIA's transition from selling AI GPUs to providing comprehensive AI factory solutions. He reframed the industry’s focus from training to inference and agentic AI, raising the AI infrastructure revenue opportunity from $500 billion to at least $1 trillion for 2025–2027. This adjustment signals that future computing competition will prioritize efficient token production—balancing cost, throughput, and latency—rather than peak training performance.

Huang articulated a clear business rationale: data centers are evolving into "AI factories." Under fixed power budgets, key metrics include tokens per watt, cost per token, and time to first production. This approach underscores the importance of extreme co-design—integrating computing, networking, storage, software, power, and cooling into a unified system.

Official data indicate that the Vera Rubin NVL72 platform offers up to 10 times the inference throughput per watt compared to Blackwell, reduces token costs by 90%, and cuts the number of GPUs required for training large Mixture of Experts models by 75%. These advances represent a fundamental shift in AI infrastructure economics rather than mere chip upgrades.
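The claimed 10x inference throughput per watt and 90% token-cost reduction are two views of the same ratio: under a fixed power budget, electricity cost per token scales inversely with tokens per watt. A minimal sketch with purely illustrative numbers (the facility size and power price are assumptions, not NVIDIA figures):

```python
# Sketch of the "AI factory" economics described above: under a fixed power
# budget, cost per token scales inversely with tokens per watt.
# All numbers below are illustrative assumptions, not NVIDIA figures.

def cost_per_token(power_watts, power_cost_per_wh, tokens_per_watt_per_s):
    """Electricity cost per token for a facility at a fixed power draw."""
    tokens_per_hour = tokens_per_watt_per_s * power_watts * 3600
    energy_cost_per_hour = power_watts * power_cost_per_wh
    return energy_cost_per_hour / tokens_per_hour

POWER = 1_000_000      # 1 MW facility (assumed)
COST_PER_WH = 0.0001   # $0.10 per kWh (assumed)

baseline = cost_per_token(POWER, COST_PER_WH, tokens_per_watt_per_s=1.0)
improved = cost_per_token(POWER, COST_PER_WH, tokens_per_watt_per_s=10.0)

reduction = 1 - improved / baseline
print(f"cost-per-token reduction: {reduction:.0%}")  # 10x tokens/watt -> 90% cheaper
```

Note that the 90% figure falls directly out of the 10x throughput-per-watt claim; it holds regardless of the assumed facility size or power price.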

At the hardware level, NVIDIA has integrated CPUs, GPUs, LPUs, DPUs, SuperNICs, switch chips, and storage architectures into a platform-level system. The Vera Rubin platform includes the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, Spectrum-6 Ethernet switch, and the newly integrated Groq 3 LPU. The Vera Rubin NVL72 rack combines 72 Rubin GPUs with 36 Vera CPUs, while the Groq 3 LPX rack specializes in low-latency inference.

Huang redefined AI inference by splitting it into two phases: prefill handled by Vera Rubin and decode managed by Groq AI chips. This heterogeneous computing approach separates high-throughput tasks from ultra-low-latency demands, moving beyond a GPU-centric solution.
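The split described above can be sketched conceptually: prefill is compute-bound and batch-friendly, while decode is latency-bound and generates one token at a time against the cached prompt state. The class and pool names below are illustrative, not NVIDIA or Groq APIs:

```python
# Conceptual sketch of disaggregated inference serving: prefill (high-throughput)
# and decode (ultra-low-latency) run on separate hardware pools, as the article
# describes for Vera Rubin vs. the Groq LPUs. Names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class Request:
    prompt_tokens: int
    kv_cache: list = field(default_factory=list)
    output: list = field(default_factory=list)

class PrefillPool:
    """Throughput-oriented pool: processes the whole prompt in one batched pass."""
    def run(self, req: Request) -> Request:
        req.kv_cache = [f"kv{i}" for i in range(req.prompt_tokens)]
        return req

class DecodePool:
    """Latency-oriented pool: emits output tokens one at a time from the cache."""
    def run(self, req: Request, max_new_tokens: int) -> Request:
        for i in range(max_new_tokens):
            req.output.append(f"tok{i}")  # each step reads the full KV cache
        return req

req = DecodePool().run(PrefillPool().run(Request(prompt_tokens=4)), max_new_tokens=3)
print(len(req.kv_cache), len(req.output))  # 4 3
```

The design point is that the two phases have opposite hardware profiles, so handing the prompt's cached state from one pool to the other lets each run on silicon tuned for its bottleneck.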

On the software and ecosystem front, NVIDIA introduced Dynamo 1.0 as an inference operating system for AI factories, claiming it can improve inference performance on Blackwell by up to seven times. For agentic AI, the company launched Agent Toolkit, OpenShell, and NemoClaw, positioning OpenClaw as a platform for personal AI operating systems while enhancing enterprise capabilities in policy control, privacy routing, and security.

NVIDIA also expanded its open-model family, including Nemotron, Cosmos, Isaac GR00T, Alpaymayo, BioNeMo, and Earth-2, and previewed the Feynman architecture roadmap. Future platforms will feature Rosa CPU, LP40 LPU, BlueField-5, CX10, and Kyber, advancing copper interconnects and co-packaged optics for next-generation AI factories.

Beyond data centers, GTC highlighted "physical AI" and spatial computing. The IGX Thor platform is now generally available for industrial, medical, robotics, and orbital edge computing. The Open Physical AI Data Factory Blueprint accelerates data generation and evaluation for robotics and autonomous systems, while the Space-1 Vera Rubin Module extends Vera Rubin architecture to orbital data centers, offering up to 25 times the AI computing power of H100 for space-based inference.

These developments illustrate NVIDIA's expansion of the "AI factory" concept into a unified infrastructure paradigm spanning cloud, edge, devices, vehicles, robotics, and even space. The core theme of GTC 2026 is not a single product launch but NVIDIA's consolidation of GeForce, data center infrastructure, networking, storage, inference systems, agent platforms, robotics, and spatial computing into a cohesive narrative—transitioning from a GPU supplier to a full-scale AI infrastructure provider.

The key takeaway is not individual chip specifications but NVIDIA's strategic effort to lock in future token economics, inference monetization, and infrastructure pricing power through system-level products.

NVIDIA's reinforced dominance in AI computing infrastructure is fueling expectations for new stock price highs. According to Emarketer analyst Jacob Bourne, "Investors had concerns about the sustainability of massive AI infrastructure spending by tech giants, but Huang's outline of a $1 trillion revenue opportunity by 2027 has bolstered confidence in long-term demand for NVIDIA's AI solutions." He added, "As the AI industry transitions from early experimentation to large-scale deployment, NVIDIA continues to lead the AI computing market."

Huang's upward revision of the AI chip and infrastructure opportunity to $1 trillion by 2027 positions NVIDIA not merely as a supplier of powerful GPUs but as a defining force in the next-generation AI factory ecosystem. The company is shifting from training dominance to inference infrastructure leadership, competing at the system level with integrated hardware and software stacks.

Huang's announcement that the "inflection point for inference has arrived" signals to capital markets that AI capital expenditure is far from peaking, with large-scale deployment just beginning. By integrating CPUs, GPUs, LPUs, networking, agent software, and data center economics into a unified strategy, NVIDIA is not only launching a new product cycle but also steering toward a $5 trillion market capitalization milestone.

According to TipRanks, Wall Street analysts' average price target for NVIDIA is $273, roughly 49% above Monday's close, with the most optimistic target reaching $360. A $273 share price would correspond to a market capitalization of approximately $6.6 trillion. NVIDIA stock settled Monday at $183.22, for a market cap of around $4.45 trillion.
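The valuation arithmetic here can be checked directly: the share count implied by the quoted close and market cap, multiplied by each price target, gives the implied capitalization. All inputs are the figures quoted in the article:

```python
# Sanity check of the valuation arithmetic: share count is implied from the
# quoted close and market cap; all input figures come from the article.

close_price = 183.22   # Monday's close ($)
market_cap = 4.45e12   # market cap at that close ($)
avg_target = 273.0     # Wall Street average price target ($)
high_target = 360.0    # most optimistic price target ($)

shares = market_cap / close_price        # implied shares outstanding, ~24.3B
implied_cap_avg = shares * avg_target
implied_cap_high = shares * high_target

print(f"implied cap at $273: ${implied_cap_avg / 1e12:.1f}T")
print(f"implied cap at $360: ${implied_cap_high / 1e12:.1f}T")
```

The average target lands at about $6.6 trillion, matching the figure quoted above, and the $360 target implies roughly the $8.8 trillion ceiling mentioned earlier in the piece.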

Goldman Sachs noted that the $1 trillion revenue outlook presented at GTC provides longer-term demand validation, alleviating investor concerns about a potential peak in AI capital spending by 2026. The firm emphasized that NVIDIA is not just launching another high-performance GPU but commercializing inference in a unique way, upgrading its AI infrastructure to become essential in the global AI arms race.

As highlighted, Huang's split of inference into prefill and decode phases—handled by Vera Rubin and Groq 3 LPX/LPU, respectively—underscores NVIDIA's expansion from a training leader to a comprehensive AI inference infrastructure provider. Official data suggest that Vera Rubin combined with LPX can achieve up to 35 times the inference throughput per megawatt and unlock 10 times the revenue opportunity for trillion-parameter models.

Goldman Sachs maintains a bullish stance, citing NVIDIA's ability to address two key investor concerns: sustained demand and competitive threats in the inference era from CPUs, custom ASICs, or other chips. The $1 trillion guidance exceeds market expectations, confirming strong and lasting demand from hyperscale cloud providers. Based on optimistic near-term catalysts, Goldman Sachs reaffirmed its "Buy" rating for NVIDIA with a 12-month price target of $250, highlighting continued performance leadership supported by capital expenditure plans from major cloud providers and new models based on Blackwell and Rubin architectures.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is provided for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy or completeness of the information; investors should do their own research and may wish to seek professional advice before investing.
