AI Bull Market Narrative Gains $21 Billion Boost as Meta Expands CoreWeave Partnership for NVIDIA's Rubin Era

Stock News · 04-09 20:53

Cloud AI computing-power leasing giant CoreWeave (CRWV.US), often referred to as "NVIDIA's favored partner," has expanded its latest artificial intelligence infrastructure supply agreement with Facebook's parent company Meta Platforms (META.US) to a total value of $21 billion. The new agreement builds on the $14.2 billion cloud computing arrangement the two companies reached in September. CoreWeave will provide computing capacity through 2032, significantly deepening its ties with the social media giant, which is striving to catch up with leaders such as Anthropic and OpenAI in the race to build cutting-edge large language models.

For NVIDIA's newly launched Vera Rubin flagship AI computing platform, Meta's additional spending marks a crucial step in turning Vera Rubin from a technology roadmap into a system-level AI infrastructure business with real customers, long-term contracts, and commercial viability. The development also reinforces the "AI bull market narrative" that has underpinned global stock market gains in recent years: as model parameter scales, reasoning chains, and multimodal/agentic AI workloads drive exponential growth in computing consumption, technology giants' capital expenditure priorities remain firmly focused on AI computing infrastructure.

According to a statement released by CoreWeave on Thursday, under the new terms Meta has committed an additional $21 billion to purchase AI cloud computing capacity from the leasing giant. CoreWeave will deliver that capacity across multiple large AI data centers through December 2032, with some of the centers powered by NVIDIA's latest Vera Rubin platform. The companies' previous agreement was originally set to run through December 2031, with an option to extend to 2032 with additional capacity; the new roughly $21 billion commitment therefore comprises two parts: fresh orders for new AI cloud capacity, and the exercise of the capacity expansion option in the earlier agreement.

Following the announcement of the latest developments, CoreWeave's stock surged over 8% in Thursday's pre-market trading. The stock has already risen 24% year-to-date, significantly outperforming both the S&P 500 and Nasdaq 100 indices. Meta's stock gained approximately 2% in pre-market trading.

CoreWeave is part of an emerging group of "neocloud" providers that lease access to leading AI computing infrastructure built around NVIDIA AI GPUs; its core competitors include Nebius Group NV and Nscale. CoreWeave has been one of the primary beneficiaries of the AI computing supply chain as major technology companies race to build the most advanced AI models, a competition that has pushed computing demand to unprecedented levels.

Undoubtedly, Meta has become one of the highest-spending technology companies in the AI computing infrastructure sector. The tech giant's CEO Mark Zuckerberg plans to invest hundreds of billions of dollars over the coming years in the enormous energy requirements, computing infrastructure, and top global talent needed to build, train, and operate AI models.

CoreWeave separately announced plans to issue $3 billion in convertible senior notes due 2032 and $1.25 billion in senior notes due 2031 for general corporate purposes, including repaying outstanding debt. Media reports citing informed sources indicated that in February, the company was seeking to raise approximately $8.5 billion from multiple major investment banks including Morgan Stanley and Mitsubishi UFJ Financial Group to help finance its cloud capacity expansion for Meta.

As an early adopter of NVIDIA graphics processing units (GPUs) for data center cloud leasing, CoreWeave anticipated the wave of demand for data center AI computing resources and won backing from NVIDIA's venture capital arm, even securing priority access to highly sought-after NVIDIA H100/H200 and Blackwell series AI GPUs on multiple occasions. That advantage at times led cloud giants like Microsoft to lease AI computing capacity from CoreWeave, earning it the "NVIDIA's favored partner" designation.

Global demand for AI computing resources continues to expand explosively, which explains why cloud AI leasing leaders such as Fluidstack, Nebius Group NV, Nscale, and CoreWeave have seen their valuations climb throughout the year. Demand tied to AI training and inference has pushed underlying computing clusters to their limits, with even recently expanded large AI data centers unable to meet the extraordinarily strong global demand.

Looking back over the past year, CoreWeave's publicly disclosed major new cloud orders have come predominantly from top-tier generative AI buyers. In March 2025, the company secured a five-year training and inference computing contract with OpenAI worth up to $11.9 billion; in May, an expansion order from OpenAI worth up to $4 billion; in September, an additional agreement with OpenAI worth up to $6.5 billion, bringing their total contract value to approximately $22.4 billion; that same month, it signed the $14.2 billion long-term computing agreement with Meta; and now it has secured another roughly $21 billion new and expanded agreement with Meta.

The most significant aspect of this deal for the "AI bull market narrative" that has underpinned global stock markets in recent years is its reinforcement of long-term contracted cloud AI capacity expansion driven by massive-scale AI inference workloads. It suggests that AI computing demand has not peaked, as the "AI bubble" skeptics contend, but is instead transitioning from a training peak into a long-term expansion phase built on long-context inference, agentic AI, and production-level deployment. Against a backdrop of significantly cooler geopolitical tensions, the "AI bull market narrative" is poised to make waves once again.

Furthermore, by locking in early Rubin architecture deployment through CoreWeave, Meta effectively confirms an important market trend: the next round of AI capital expenditure is not just continued purchases of the Blackwell architecture, but also large budget allocations for NVIDIA's newly launched Rubin generation of rack-scale and factory-scale AI computing systems. That is a very strong demand anchor for NVIDIA itself, as well as for the entire AI computing supply chain, including switch chips, high-performance networking, liquid cooling, OCS switches and optical interconnects, optical modules/silicon photonics, HBM/memory, 2.5D/3D advanced packaging, and data center power chains.

At the March GTC conference, NVIDIA CEO Jensen Huang unveiled what he called an unprecedented AI computing revenue blueprint. He told global investors that, driven by strong demand for Blackwell architecture GPU computing and even stronger demand for the soon-to-be-mass-produced Vera Rubin AI computing systems, the company's AI chip revenue could reach at least $1 trillion cumulatively over 2025-2027, far exceeding the $500 billion AI infrastructure blueprint through 2026 presented at the previous GTC conference.

As model scales, reasoning chains, and multimodal/agentic AI workloads drive exponential growth in computing consumption, technology giants' capital expenditure priorities are increasingly concentrated on AI computing infrastructure. Global investors continue to treat the "AI bull market narrative" around NVIDIA, Google's TPU clusters, and AMD's product iterations and AI cluster delivery expectations as one of the most reliable prosperity themes in global stock markets. That means investment themes closely tied to AI training and inference, such as power supply, liquid cooling systems, and optical interconnect supply chains, will continue to rank among the market's hottest camps alongside AI computing leaders like NVIDIA, AMD, Broadcom, TSMC, and Micron, even as Middle East geopolitical tensions remain uncertain.

According to the latest analyst expectations compiled by institutions, Amazon, Google's parent Alphabet, Meta Platforms, Oracle, and Microsoft are projected to spend a combined approximately $650 billion on artificial intelligence-related capital expenditure in 2026, with some analysts believing total spending could exceed $700 billion, implying year-over-year AI capex growth of potentially more than 70%. Notably, these five US super technology giants are expected to invest approximately $1.5 trillion cumulatively between 2023 and 2026 on AI computing infrastructure; by comparison, they invested about $600 billion over the entire period before 2022.
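As a rough sanity check on the figures above (a sketch using only the numbers quoted in this article; the 2025 base is inferred from the growth figure, not reported), the implied prior-year spending and the cumulative comparison work out as follows:

```python
# All values in billions of USD, taken from the article's quoted projections.
capex_2026 = 650            # projected combined 2026 AI capex for the five giants
implied_growth = 0.70       # reported implied year-over-year growth

# Back out the 2025 base consistent with ~70% growth (inferred, not reported)
implied_2025 = capex_2026 / (1 + implied_growth)
print(f"Implied 2025 base: ~${implied_2025:.0f}B")

# Compare cumulative 2023-2026 spending with the pre-2022 historical total
cumulative_2023_2026 = 1500
pre_2022_total = 600
print(f"2023-2026 spend vs. pre-2022 total: {cumulative_2023_2026 / pre_2022_total:.1f}x")
```

By this arithmetic, the five companies' projected 2026 outlay alone would exceed the implied 2025 base by roughly $270 billion, and their four-year build-out would be about 2.5 times everything they spent before 2022.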

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may seek professional advice before investing.
