Nvidia's CUDA: Unleashing The Power Of Parallel Computing For AI Dominance

Summary

  • Nvidia Corporation's parallel computing platform, CUDA, is a key factor in the company's competitive advantage, with exponential growth showcased at COMPUTEX 2023, boasting over four million developers and 40 million downloads.

  • CUDA's synergy with Nvidia's GPUs has solidified the company's dominance in the AI industry, making CUDA the go-to platform for GPU acceleration in deep learning and AI applications.

  • Despite high valuation multiples, Nvidia Corporation's strong financial performance, low short interest, and sustainable competitive advantage make it an attractive investment in the AI and GPU computing space.

Semiconductor Maker Nvidia Reports Quarterly Earnings

Most recently, in an article posted in April, I emphasized to readers that Nvidia Corporation (NASDAQ:NVDA) stands to gain substantially from the artificial intelligence, or AI, boom. Moreover, I've been advocating this thesis since December 2021, almost a year prior to the launch of ChatGPT, with an article titled Nvidia's Trillion-Dollar AI Opportunity.

By now, we believe the market recognizes the enormous potential AI holds for Nvidia. However, I think the prevailing skepticism on the stock, which has a Hold rating on Seeking Alpha, arises from a lack of appreciation of the firm's sustainable competitive edge.

While Nvidia's competitive moat is multifaceted, we believe the primary element is CUDA, the company's parallel computing and programming platform.

We recently attended COMPUTEX 2023 and listened to the two-hour keynote speech given by Jensen Huang, Nvidia's founder and CEO. One of the key takeaways for us is the stunning growth of CUDA. We would like to take this opportunity to share what we learned as well as delve deeper into CUDA to help investors comprehend the extent of Nvidia's competitive advantage.

COMPUTEX 2023 & Deep Dive Into CUDA

During the COMPUTEX 2023 conference, we were astounded by Jensen's revelation about the recent exponential growth of CUDA. Currently, CUDA boasts over four million developers, in excess of 3,000 applications, and a staggering 40 million CUDA downloads historically, with a phenomenal 25 million just in the previous year. Furthermore, 15,000 startups have been established on Nvidia's platform, and a massive 40,000 large enterprises worldwide are utilizing accelerated computing. We agree with Jensen's assessment: we have undoubtedly arrived at a crucial turning point, ushering in a new epoch of computing.

Understanding CUDA's history and enduring presence in the AI ecosystem, in our view, is crucial to understanding the sustainability of Nvidia's dominance in GPUs for AI training and inference.

CUDA, developed by Nvidia, is a parallel computing platform and programming model that allows general computing on Nvidia's GPUs (graphics processing units). This platform equips developers to accelerate compute-demanding applications by utilizing the strength of GPUs for the computations that can be executed in parallel.
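To make the parallel model concrete, below is a minimal sketch of the canonical CUDA "vector add": one lightweight GPU thread per array element. It assumes the CUDA Toolkit and an NVIDIA GPU are available (compiled with `nvcc`); the names and sizes are illustrative, not drawn from any Nvidia material.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes exactly one output element -- the classic
// data-parallel pattern that CUDA exposes to the programmer.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory: accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);  // launch ~1M threads at once
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A CPU would walk this loop one element at a time per core; the GPU dispatches roughly a million threads across thousands of cores, which is why "embarrassingly parallel" workloads like this see such large speedups.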

While there have been alternatives to CUDA, such as OpenCL, and GPU competitors like Advanced Micro Devices, Inc. (AMD), the synergy of CUDA and Nvidia's GPUs dominates multiple application fields, including deep learning. It's even the cornerstone for some of the fastest supercomputers worldwide.

Graphics cards are nearly as old as the PC itself, beginning with the 1981 IBM Monochrome Display Adapter. Nvidia entered the market with its first graphics chip in 1995. Originally, GPUs were primarily used for gaming, but their applications broadened over time into mathematics, science, and engineering.

Nvidia launched CUDA in 2006, making it the first commercially viable solution for general-purpose computing on GPUs. OpenCL, a competitor to CUDA, was introduced in 2009. Both aimed to offer a standard for heterogeneous computing that wasn't confined to Intel Corporation (INTC) and AMD CPUs. However, even though OpenCL's versatility seemed appealing, it didn't perform as well as CUDA on Nvidia GPUs, which were gaining in popularity. Today, most deep learning frameworks either lack OpenCL support entirely or treat it as an afterthought, adding it only after CUDA support has shipped.

Over the years, CUDA has evolved and expanded its capabilities in tandem with advanced Nvidia GPUs. With multiple P100 server GPUs, performance improvements can go up to 50x over CPUs. Other GPUs like the V100 and A100 offer even higher performance boosts, although this varies among users. There have also been software enhancements for model training on CPUs and improvements in CPUs themselves, particularly in offering more cores.

The advent of GPUs has been a game-changer for high-performance computing. The yearly CPU performance improvement, which in the Moore's Law era roughly doubled every 18 to 24 months, has slowed down due to physical limits. These include constraints on chip mask resolution, chip yield during manufacturing, and heat limits on clock frequencies at runtime. In such a scenario, GPUs' speed boost has been a timely intervention. Huang's law, named after Nvidia's founder Jensen Huang, is the observation that the performance of graphics processing units (GPUs) is progressing at a significantly faster pace than that of conventional central processing units (CPUs).

GPUs combined with CUDA have been a game-changer for the AI industry, which benefited from the massive improvements in GPU computation power and ease of programmability. Today, numerous deep learning frameworks, including Caffe2, Chainer, Databricks, H2O.ai, Keras, MATLAB, MXNet, PyTorch, Theano, and Torch, rely on CUDA for GPU support. These frameworks typically employ the cuDNN library for deep neural network computations. This library is so integral to deep learning framework training that any framework using a specific version of cuDNN achieves roughly the same performance for comparable use cases. When CUDA and cuDNN progress from version to version, all deep learning frameworks that update to the new version benefit from the performance improvements. However, performance differences between frameworks arise from their ability to scale to multiple GPUs and nodes.
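As a rough sketch of how deeply frameworks are tied to cuDNN versions, the snippet below initializes a cuDNN handle and reports the compiled-against vs. loaded library version, which is essentially the first thing a framework's GPU backend does at startup. It assumes the cuDNN headers and an NVIDIA GPU are installed; it is an illustration, not any framework's actual initialization code.

```cuda
#include <cudnn.h>
#include <cstdio>

int main() {
    // Version the binary was compiled against vs. the library loaded at runtime;
    // a mismatch here is a common source of framework install problems.
    printf("Compiled against cuDNN %d\n", CUDNN_VERSION);
    printf("Runtime cuDNN reports %zu\n", cudnnGetVersion());

    cudnnHandle_t handle;
    if (cudnnCreate(&handle) == CUDNN_STATUS_SUCCESS) {
        // A framework creates a handle like this once per GPU, then routes all
        // DNN primitives (convolutions, pooling, activations) through it.
        cudnnDestroy(handle);
    }
    return 0;
}
```

Because every major framework funnels its neural-network primitives through this same library, a cuDNN performance improvement lifts all of them at once, which is the dynamic the paragraph above describes.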

The CUDA Toolkit comprises libraries, debugging and optimization tools, a compiler, documentation, and a runtime library for deploying your applications. It includes components for deep learning, linear algebra, signal processing, and parallel algorithms.

Generally, CUDA libraries work with all NVIDIA GPU families, and using these libraries is the simplest way to leverage GPUs, given that the required algorithms are implemented in the corresponding library. Controlling CUDA gives Nvidia a massive competitive advantage vs. any current and potential GPU challengers.
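To illustrate the "libraries first" point, here is a hedged sketch using cuBLAS, the CUDA linear-algebra library, to compute y = αx + y (SAXPY) with a single library call instead of a hand-written kernel. It assumes the CUDA Toolkit with cuBLAS and an NVIDIA GPU; the small arrays are illustrative.

```cuda
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 4;
    const float alpha = 2.0f;
    float hx[n] = {1, 2, 3, 4}, hy[n] = {10, 20, 30, 40};

    // Allocate device buffers and copy the host vectors over.
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSetVector(n, sizeof(float), hx, 1, dx, 1);
    cublasSetVector(n, sizeof(float), hy, 1, dy, 1);

    // y = alpha * x + y, tuned by Nvidia for every GPU generation --
    // the application never writes or optimizes a kernel itself.
    cublasSaxpy(handle, n, &alpha, dx, 1, dy, 1);

    cublasGetVector(n, sizeof(float), dy, 1, hy, 1);
    printf("y[0] = %.1f\n", hy[0]);  // 2*1 + 10 = 12.0

    cublasDestroy(handle);
    cudaFree(dx); cudaFree(dy);
    return 0;
}
```

This is the moat in miniature: code written against these libraries runs only on Nvidia hardware, and each new GPU generation speeds it up with no changes from the developer.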

Financials & Valuation

Note: All historical data in this section comes from the company’s 10-K filings, and all consensus numbers come from FactSet.

On the financial trends front, NVDA's revenue has demonstrated impressive growth, expanding at a compound annual growth rate (CAGR) of 35.2% over the past three fiscal years. Analysts expect this trend to continue, forecasting revenues to soar by 51.4% to $40.8 billion this fiscal year, and by another 25.5% to $51.3 billion in the following year. However, NVDA's EBIT margin saw a slight contraction of 0.4 percentage points, from 33.9% to 33.4%, over the same period, signaling possible efficiency issues. Still, analysts remain optimistic, projecting a massive expansion of 1,431 basis points to 47.8% this fiscal year and a further 256 basis points to 50.3% the next fiscal year.

Over the past three years, NVDA's stock-based compensation (SBC) accounted for 8.7% of its revenue, leading to a minor 0.4% increase in diluted outstanding common shares. Despite these dynamics, NVDA's EPS still managed to grow at a CAGR of 32.1% during the past three fiscal years, albeit lagging its revenue growth. Looking ahead, the consensus predicts EPS to rise by a hefty 116.7% to $7.24 this fiscal year, and by 34.2% to $9.71 in the following year.

Consensus estimates paint an encouraging picture for NVDA's free cash flow, or FCF, in the current fiscal year, expecting it to reach $15,790 million, equating to a 38.7% FCF margin. This represents a significant increase in absolute terms from four fiscal years ago, when FCF stood at $4,157 million (a 38.1% margin). Over the past four completed fiscal years, NVDA has generated an average FCF margin of 32.5%, and capital expenditure has averaged 6.3% of revenue, suggesting moderate capital intensity.

NVDA's strong return on invested capital, standing at 12.3%, combined with a net cash of $4,366 million, underlines the strength of the company's balance sheet.

Trading at $401.11 per share, NVDA currently holds a market value of $990.7 billion and an enterprise value of $986.4 billion. The valuation multiples, at an EV/Sales multiple of 19.2, an EV/EBIT multiple of 38.2, a P/E multiple of 41.3, and an FCF multiple of 46.7, are significantly above the S&P 500's averages. However, NVDA's PEG ratio for FY2 is currently at a premium of less than 1% compared to the S&P 500, suggesting that the company is fairly valued on a growth-adjusted basis.

Finally, NVDA's current rolling forward 12-month P/E metric stands at 48.6, which is high compared to its 5-year mean of 37.7. This metric is currently within a 2-standard deviation range of 17.3 to 58.1, signaling that it's trading towards the higher end of its historical valuation range.

The current low short interest in Nvidia Corporation's stock, at just 1.2%, indicates that few investors are positioned for a price decline, further strengthening the bull case for the company.

Conclusion

The exponential growth of CUDA showcased at COMPUTEX 2023 reinforces Nvidia Corporation's sustainable competitive advantage in the AI industry. With over four million developers, 3,000 applications, and a staggering 40 million downloads, CUDA has become the go-to platform for GPU acceleration in deep learning and AI applications.

The strong synergy between CUDA and Nvidia Corporation's GPUs has solidified their dominance, making CUDA a cornerstone for some of the fastest supercomputers worldwide. By understanding the history and significance of CUDA, investors can appreciate the extent of Nvidia Corporation's competitive edge and its pivotal role in driving the company's continued success in GPU computing for AI.

Source: Seeking Alpha


Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation on acquiring or disposing of any financial products, any associated discussions, comments, or posts by author or other users should not be considered as such either. It is solely for general information purpose only, which does not consider your own investment objectives, financial situations or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information, investors should do their own research and may seek professional advice before investing.
