NVIDIA Q4 FY 2023 Q&A Session Transcript

The Q&A (question-and-answer) session follows the company's prepared remarks; in it, institutional investors and analysts ask management questions. This dialogue can contain valuable information that may affect the stock price in the following weeks.

Now let's look at some key points from the NVIDIA Q4 FY 2023 Q&A Session Transcript. $NVIDIA Corp(NVDA)$

Q:Clearly, on this call, a key focal point is going to be the monetization effect of your software and cloud strategy. As we look at it, the enterprise AI software suite is priced at around $6,000 per CPU socket, and pricing runs a bit higher for the cloud consumption model. I'm just curious, Colette: how do we start to think about that monetization contribution to the company's business model over the next couple of quarters, relative to the couple of hundred million or so you've talked about in the past? Just curious if you can unpack that a little bit.

A:So I'll start and turn it over to Jensen to talk more, because I believe this will also be a great topic of discussion at our GTC.

On the software side, we continue to see growth, even in our Q4 results. We're making quite good progress both in working with our partners and onboarding more of them, and in expanding our software. You are correct: we've talked about our software revenues being in the hundreds of millions, and we're getting even stronger each day; Q4 was probably a record level for our software. But there's more to unpack there, and I'm going to turn it over to Jensen.

Yes, first of all, taking a step back, NVIDIA AI is essentially the operating system of AI systems today. It spans everything from data processing to learning and training, to validation, to inference. This body of software is completely accelerated. It runs in every cloud. It runs on-prem. And it supports every framework and every model that we know of, and it's accelerated everywhere.

By using NVIDIA AI, your entire machine learning operation is more efficient and more cost-effective; you save money by using accelerated software. Our announcement today, putting NVIDIA's infrastructure in place and having it hosted within the world's leading cloud service providers, accelerates the enterprise's ability to utilize NVIDIA AI Enterprise. It accelerates people's adoption of this machine learning pipeline, which is not for the faint of heart. It is a very extensive body of software, and it is not broadly deployed in enterprises today. But we believe that by hosting everything in the cloud, from the infrastructure through the operating system software, all the way through pretrained models, we can accelerate the adoption of generative AI in enterprises. And so we're excited about this new, extended part of our business model. We really believe it will accelerate the adoption of software.

Q:I guess, Jensen, you talked about ChatGPT as an inflection point, kind of like the iPhone. So I'm curious: first, how have your conversations with hyperscale and large-scale enterprises evolved post ChatGPT? And second, as you think about Hopper with the transformer engine and Grace with high-bandwidth memory, how has your outlook for growth for those 2 product cycles evolved over the last few months?

A:ChatGPT is a wonderful piece of work, and the team at OpenAI did a great job with it. They stuck with it, and the accumulation of all of those breakthroughs led to a service with a model inside that surprised everybody with its versatility and its capability.

What surprised people, and this is well understood within and close to the industry, is the capability of a single AI model to perform tasks and skills that it was never trained to do. This language model doesn't just speak English, or translate between languages, of course; it can be prompted in human language and output Python, output COBOL, a language that very few people even remember, or output Python for Blender, a 3D program. So it's a program that writes a program for another program.

We now realize, the world now realizes, that maybe human language is a perfectly good computer programming language, and that we've democratized computer programming for everyone, for almost anyone who can explain in human language a particular task to be performed. This new computer, when I say new era of computing, this new computing platform, can take whatever your prompt is, whatever your human-explained request is, and translate it into a sequence of instructions that it processes directly, or it waits for you to decide whether you want to process it or not.

And so this type of computer is utterly revolutionary in its application, because it has democratized programming for so many people, and it has really excited enterprises all over the world: every single CSP, every single Internet service provider, and, frankly, every single software company. Because of what I just explained, this is an AI model that can write a program for any program, so everybody who develops software is either alerted, or shocked into alert, or actively working on something like ChatGPT to be integrated into their application or integrated into their service. And so this is, as you can imagine, utterly worldwide.

The activity around the AI infrastructure that we built with Hopper, and the activity around inferencing large language models using Hopper and Ampere, has just gone through the roof in the last 60 days. And so there's no question that whatever views we had of this year as we entered it have been fairly dramatically changed as a result of the last 60 to 90 days.

Q:I wanted to ask a couple of questions on the DGX Cloud. We're all talking about the drivers of the services and the compute that you're going to host on top of these services with the different hyperscalers. But we've been watching and wondering when your data center business might transition to more of a systems-level business, meaning pairing and [indiscernible] InfiniBand with your Hopper product, with your Grace product, and selling things more at a systems level. I wonder if you could step back and tell us: over the next 2 or 3 years, how do you think the mix of business in your data center segment evolves, from maybe selling cards to selling systems and software? And what could that mean for the margins of that business over time?

A:Yes, I appreciate the question. First of all, as you know, our Data Center business is a GPU business only in the context of a conceptual GPU, because what we actually sell to the cloud service providers is a panel, a fairly large computing panel, of 8 Hoppers or 8 Amperes connected with NVLink switches over NVLink. And so this board represents essentially 1 GPU: it's 8 chips connected together into 1 GPU with a very high-speed chip-to-chip interconnect. We've been working on, if you will, multi-die computers for quite some time. And that is 1 GPU.

So when we think about a GPU, we actually think about an HGX GPU, and that's 8 GPUs. We're going to continue to do that. And the thing that the cloud service providers are really excited about in hosting our infrastructure for NVIDIA to offer is that we work directly with so many companies. We're working directly with 10,000 AI start-ups around the world and with enterprises in every industry. And all of those relationships today would really love to be able to deploy into the cloud at least, or into the cloud and on-prem, and oftentimes multi-cloud.

And so by having NVIDIA DGX and NVIDIA's full-stack infrastructure in their cloud, we're effectively attracting customers to the CSPs. This is a very, very exciting model for them, and they welcomed us with open arms. We're going to be the best AI salespeople for the world's clouds. And for the customers, they now have an instantaneous infrastructure that is the most advanced. They have a team of people who are extremely good, from the infrastructure to the acceleration software, the NVIDIA AI open operating system, all the way up to AI models. Within 1 entity, they have access to expertise across that entire span. And so this is a great model for customers, it's a great model for CSPs, and it's a great model for us. It lets us really run like the wind. As much as we will continue to advance DGX AI supercomputers, it takes time to build AI supercomputers on-prem; it's hard no matter how you look at it. And so now we have the ability to really prefetch a lot of that and get customers up and running as fast as possible.

The above Q&A highlights have been edited for brevity. Click here for the full NVIDIA Q4 FY 2023 Earnings Call Transcript.

If you want more details, you can click here to re-watch the NVIDIA Q4 FY 2023 Earnings Conference Call.

# Q4 Earnings Season

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is for general information purposes only and does not take into account your own investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may seek professional advice before investing.
