NVIDIA has reported an adjusted gross margin of 75.2%, a recent high. However, this level of profitability, which leaves rivals far behind, faces multiple challenges: tightening supply bottlenecks and rising component costs, the accelerated rise of custom chips, and the fact that returns on customers' AI investments have yet to materialize.
NVIDIA's latest quarterly report shows that for the quarter ending in January, the adjusted gross margin hit 75.2%, the highest level since the second half of 2024. The company also anticipates maintaining a similar level in the current quarter. On the demand side, hyperscale AI companies are projected to have a combined capital expenditure of approximately $650 billion this year, an increase of about 60% compared to 2025. NVIDIA is expected to benefit significantly from this trend—a factor largely anticipated by the market before the earnings release. Consequently, the true highlight of this report was the profit margin rather than demand itself.
The competitive landscape is shifting. AMD announced this week a data center processor supply agreement with Meta valued at "tens of billions of dollars," directly challenging NVIDIA's core GPU business. Alphabet's TPU chips and Amazon's custom chips are also rapidly gaining market share, with pricing significantly lower than NVIDIA's products. Increasingly attractive cost-performance ratios are driving more customers to explore diversified procurement options.
NVIDIA CEO Jensen Huang identified "continuous generational technological leaps" as the core lever for sustaining high profit margins and expressed optimism about agentic AI driving growth in computing demand. However, whether massive spending on AI hardware will yield corresponding commercial returns for customers remains an unanswered question, and this represents the biggest variable for the sustainability of NVIDIA's high margins.
**Supply Constraints and Cost Pressures: Potential Risks to Margins**
NVIDIA's high profit margin does not come without costs. Rising memory costs are an unavoidable reality, even though NVIDIA holds a priority position in the supply queue for key components.
According to Bloomberg, NVIDIA's CFO Colette Kress stated that the company has "strategically secured inventory and capacity to meet demand for several quarters ahead" but simultaneously expects supply "tightness" to persist.
Leading manufacturers of core components have warned that shortages could continue into 2027 or even longer. The current growth rate of demand for AI hardware continues to far outpace the expansion speed of the corresponding infrastructure capacity.
This means that while facing cost pressures, NVIDIA must still provide sufficient supply to its customers. Squeezed from both sides, the company faces considerable uncertainty about whether the 75% margin can hold in the coming quarters.
**The Rise of Alternatives: Competitors Compete on Price**
The price difference is particularly striking. Data from Bloomberg Intelligence indicates that the average selling price per unit for Google's TPU is between $8,000 and $10,000, while NVIDIA's H100 chip sells for over $23,000, with the newer Blackwell system starting as high as $27,000.
A price gap of more than twofold makes diversifying compute procurement economically very attractive.
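As a rough sanity check on the "more than twofold" gap, the ratios implied by the figures above can be sketched in a few lines. These are the Bloomberg Intelligence estimates quoted here, not official list prices:

```python
# Illustrative arithmetic only, using the price figures cited above.
tpu_low, tpu_high = 8_000, 10_000      # Google TPU average selling price range
h100 = 23_000                          # NVIDIA H100, lower bound ("over $23,000")
blackwell = 27_000                     # Blackwell system starting price

tpu_mid = (tpu_low + tpu_high) / 2     # midpoint of the TPU range: 9,000

ratio_h100 = h100 / tpu_mid            # roughly 2.6x
ratio_blackwell = blackwell / tpu_mid  # 3.0x

print(f"H100 vs. TPU midpoint: {ratio_h100:.1f}x")
print(f"Blackwell vs. TPU midpoint: {ratio_blackwell:.1f}x")
```

Even against the high end of the TPU range, the H100's floor price is still more than double, which is the gap driving customers' interest in diversified sourcing.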
Competitive dynamics are also becoming clearer at the transaction level. Meta signed a "tens of billions of dollars" data center processor supply agreement with AMD; last October, OpenAI reached a similar arrangement with AMD. In both deals, AMD included additional stock options to enhance the offer's attractiveness.
As for Alphabet, its TPUs already handle significant computational loads for Google Cloud customers and for its own AI services such as Gemini, and news of that momentum has helped lift its stock price. Amazon, leveraging its custom chips, has secured Anthropic as a major customer.
**The Return-on-Investment Puzzle: Billions in Spending Yet to Translate into Revenue Growth**
NVIDIA's data center business recorded revenue of $62.3 billion, with slightly more than half coming from hyperscale cloud companies. This means NVIDIA's high margins depend considerably on those customers' continued willingness to purchase at large scale.
Huang remains optimistic. "I have great confidence in their cash flow growth, for a simple reason," he said. "We have seen the inflection point for agentic AI, and the practical value of AI agents is emerging across global enterprises. The massive computing demand you see stems precisely from this." He further stated, "In the new era of AI, computing power is revenue."
However, a clear gap remains between expectation and reality: the massive computing investments by hyperscale cloud firms have not yet translated into visible revenue returns substantial enough to justify the expenditure.
If these returns fail to materialize, the market's willingness to continuously pay a premium for chips will be tested. At that point, NVIDIA's industry-leading profit margin would be the first to come under pressure.
**NVIDIA's Moat: The Dual Arguments of Versatility and Efficiency**
Huang reiterated this logic during Wednesday's analyst call: compared with the custom chips from Google and Amazon, NVIDIA's GPUs can handle a broader range of AI-related tasks, rather than being limited to specific workloads such as model training or "inference" (running already-trained AI models).
Against the backdrop of increasingly tight energy supplies, NVIDIA's progress in energy efficiency also constitutes a differentiating advantage.
On the margin issue, Huang provided his core rationale: "The most important lever for our gross margin is actually the continuous delivery of generational leaps to our customers."
Huang also noted, "We love CPUs as well," emphasizing that NVIDIA's CPU products will surpass competitors in data center scenarios and potentially become "one of the world's largest CPU manufacturers."