
Nvidia looks poised to lead AI - but there's one big question about its growth

Dow Jones2023-12-03

By Ryan Shrout

Company's AI-focused business is soaring, but rivals are lining up

Nvidia should keep its edge as AI spreads into daily life.

Nvidia $(NVDA)$ has been raising eyebrows with investors and analysts for some time, and its most recent earnings results are no exception. The company made a lot of money in the third quarter - $18.1 billion in revenue, an increase of 206% from a year ago. Earnings per share were up sixfold or twelvefold, depending on whether you look at the GAAP or non-GAAP numbers.

The driver of this growth is its data center segment, responsible for the GPU (graphics processing units) and chips that have been powering high-performance computing and the AI boom.

As recently as a year ago, Nvidia posted a year-on-year revenue decrease in this segment (Q3 to Q4 FY23). But since that report, segment revenue has quadrupled over four quarters, and more than tripled in just the past six months - accounting for $14.5 billion of that total $18.1 billion in quarterly revenue.

For comparison, Nvidia's gaming division - which as recently as fiscal 2022 generated more revenue than the data center - posted a revenue increase of 55% year-on-year in the most recent quarter. But it represents just $2.8 billion of gross revenue.

Nvidia's stunningly fast ascent to a $1 trillion market valuation is due to the AI boom and the need for processing chips to handle it. But such rapid growth raises questions - chiefly, can the growth of the data center business continue? This is the most important question that Nvidia investors and analysts must consider.

The quick answer is yes. Data center revenue soared to $14.5 billion from $4.2 billion in just two quarters. On Nvidia's recent quarterly earnings call, CEO Jensen Huang was bullish, as expected, saying that he sees growth in this space through 2025 at least.

There seems to be a growing sense that a saturation point in the computing power needed for AI is near, or that we might be approaching the end of the AI training cycle (the computing-intensive process of building an AI model). Such thinking is incredibly shortsighted. For the foreseeable future, AI training will never be "done," and the need for more expansive, more accurate and more customized AI models will continue to expand. ChatGPT and current generative AI applications are just the start of this revolution, and tools such as Microsoft's $(MSFT)$ Copilot are beginning to unveil the potential for AI usage.

Huang described what he calls the creation of "AI factories" that enable enterprises, governments and infrastructure developers to develop their own AIs, tailored to and specific to different needs. These will provide some of the safety and security needed for inclusion of proprietary and personal data. This vision paints a future where the need for additional AI processing continues to ramp, not one that stabilizes or decreases.

One piece of this puzzle comes from the recently enacted restrictions on shipping GPUs to China. Nvidia reported that its sales for the current quarter and into calendar year 2024 will be impacted, as China and the other restricted regions represented 20%-25% of its total sales. Offsetting that is growth in other parts of the world, Nvidia says, and that makes sense considering the company has been "sold out" of chips for some time. If a customer in China can't buy Nvidia's AI chips, a buyer is surely waiting in Europe or the U.S.

Nvidia is apparently working on custom designs of its GPUs for the China market that will comply with the performance restrictions imposed by the U.S. government, but it will take a few months for those to start to filter out and represent notable revenue.

Positive trends

Does Nvidia lose its dominant position as the AI world moves from training to inference? No. AI training, or the use of high-performance supercomputers to build complex AI models like Llama and GPT, has been the primary driver of the market for AI computing.

AI inference is the application of those AI models. Once GPT has been trained, a company like OpenAI can create a user-facing application like ChatGPT that uses the model to "infer" answers based on it and input from the user. The question has been asked whether Nvidia's growth in AI could be hurt by the shift from a training-focused to an inference-focused market as AI gets integrated into applications.

Perhaps the best examples of AI inference at the corporate or consumer level today are ChatGPT and Adobe's $(ADBE)$ Firefly. Both are generative AI solutions, creating new content based on inputs from the user: text and analysis from ChatGPT, and text-to-image from Firefly. And both utilize massive numbers of GPUs for AI processing in the cloud.

I expect that GPU usage to continue going forward. Nvidia's AI chips perform well in inference workloads, not just training, and the company has a huge advantage since basically all AI developers are writing and testing code on Nvidia GPUs and its CUDA software development stack. Any competitor in this arena has to compete not only at the hardware level but with a software layer that can offer efficient development and reliability - no easy task.

Competitors take aim

One potential area to watch is on-device AI processing. As users start demanding more AI applications on their laptops, PCs and smartphones, Intel $(INTC)$, Qualcomm $(QCOM)$ and AMD $(AMD)$ are ramping up the performance of their own consumer chips for AI. Qualcomm recently showed off its Snapdragon X-series processors, AMD is touting its Ryzen AI technology, and Intel is set to launch its Core Ultra processors next month - all of which include some form of dedicated AI processing on-chip. If this model of AI processing catches on, it might diminish the need for Nvidia's AI chips in the data center.

Are there competitive AI chips to worry about? Maybe. If anything should worry Nvidia and its investors, it is competition in the AI chip space.

For its part, Nvidia says it plans to increase its product launch cadence, going from a 24-month release cycle to a 12-month one. That means the company will be bringing new chips to market faster, with more performance and more features with each launch. Clearly Nvidia understands it cannot sit idly and let competitors sneak up.

AMD represents the biggest competitive threat in the short term. Its GPUs have been the second choice for many years, both in the PC market and in the data center. They are based on similar designs and architecture, though they are not as closely aligned as Intel and AMD PC processors; there is still considerable work that has to be done on the software side to migrate from a CUDA development stack. AMD's recent MI300 family of AI chips looks to be ramping well, with a strong announcement of support from Microsoft for Azure cloud implementations. AMD CEO Lisa Su is confident that the company can add $1 billion in revenue quickly.

The custom AI accelerator market was recently made more interesting with the announcement of the Microsoft Maia 100 chip, but it also includes chips built by Meta Platforms $(META)$, Amazon's $(AMZN)$ AWS, and startups like Groq. These options have the potential to offer compelling advantages over Nvidia chips, including higher energy efficiency and better performance, thanks to the ability to customize the silicon for specific workloads and algorithms. They can also offer a cost advantage in the long run.

The challenge presented by Intel is an interesting one. While its GPU development work has struggled to gain market share in the years after it hired (and then lost) Raja Koduri, it is focused on its Gaudi branded family of chips that are dedicated for AI processing, acquired with the purchase of Habana Labs in 2019. These chips are proving to be competitive in several areas of the AI segment, but displacing Nvidia in the data center continues to be a struggle.

Intel could turn out to be a partner for Nvidia in the AI race if it can get its foundry services ramped up, providing an alternative to TSMC $(TSM)$ for the manufacturing of these massive chips. This would give Nvidia leverage in pricing negotiations and provide the additional capacity whose absence has constrained it so far.

Ryan Shrout is the founder and lead analyst at Shrout Research. Shrout has provided consulting services for AMD, Qualcomm, Intel, Arm Holdings, Micron Technology, Nvidia and others. Shrout owns shares of Intel. Follow him on X @ryanshrout.

More: Here's why Nvidia is a compelling stock when compared with the rest of the 'Magnificent Seven'

Plus: Nvidia is pushing to stay ahead of Intel, AMD in a high-stakes, high-performance computing race

-Ryan Shrout

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

 

(END) Dow Jones Newswires

December 02, 2023 13:35 ET (18:35 GMT)

Copyright (c) 2023 Dow Jones & Company, Inc.
