Why Nvidia Is Still the Undisputed King of AI

Dow Jones | 08-10

NVIDIA Corp is famous for building AI chips, but its most important construction is a business bulwark that keeps customers in and competitors out. This barrier is made as much of software as it is of silicon.

Over the past two decades, Nvidia has created what is known in tech as a "walled garden," not unlike the one created by Apple. While Apple's ecosystem of software and services is aimed at consumers, Nvidia's focus has long been the developers who build artificial-intelligence systems and other software with its chips.

Nvidia's walled garden explains why, despite competition from other chip makers and even tech giants like Google and Amazon, Nvidia is highly unlikely to lose meaningful AI market share in the next few years.

It also explains why, in the longer term, the battle over territory Nvidia now dominates is likely to focus on the company's coding prowess, not just its circuitry design -- and why its rivals are racing to develop software that can circumvent Nvidia's protective wall.

The key to understanding Nvidia's walled garden is a software platform called CUDA. When it launched in 2007, this platform was a solution to a problem no one had yet: how to run non-graphics software, such as encryption algorithms and cryptocurrency mining, using Nvidia's specialized chips, which were designed for computationally intensive applications like 3-D graphics and videogames.

CUDA enabled all kinds of other computing on those chips, known as graphics-processing units, or GPUs. Among the applications CUDA let Nvidia's chips run was AI software, whose booming growth in recent years has made Nvidia one of the most valuable companies in the world.
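What that general-purpose GPU programming looks like is easiest to see with a small example. The sketch below is purely illustrative, not drawn from the article: a minimal CUDA C++ program (all names and sizes are hypothetical) that adds two large arrays by spreading the work across thousands of GPU threads, the same basic model developers later applied to AI workloads.

    // Illustrative CUDA sketch; kernel name, sizes and values are arbitrary.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements.
    __global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                    // one million elements
        size_t bytes = n * sizeof(float);

        // Allocate and fill host (CPU) buffers.
        float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Allocate device (GPU) buffers and copy the inputs over.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256, blocks = (n + threads - 1) / threads;
        vectorAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and spot-check it.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);             // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }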

Also, and this is key, CUDA was just the beginning. Year after year, Nvidia responded to the needs of software developers by pumping out specialized libraries of code, allowing a huge array of tasks to be performed on its GPUs at speeds that were impossible with conventional, general-purpose processors like those made by Intel and AMD.
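To give a flavor of what those libraries offer, the hypothetical snippet below uses cuBLAS, Nvidia's GPU linear-algebra library, to multiply two matrices. The developer writes no GPU kernel at all; the tuned code ships with the platform. The matrix sizes and values here are arbitrary, chosen only for illustration.

    // Illustrative cuBLAS sketch; build with nvcc and link with -lcublas.
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    int main() {
        const int n = 512;                        // multiply two n x n matrices
        std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n, 0.0f);

        float *dA, *dB, *dC;
        cudaMalloc(&dA, n * n * sizeof(float));
        cudaMalloc(&dB, n * n * sizeof(float));
        cudaMalloc(&dC, n * n * sizeof(float));
        cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

        // The library call does the heavy lifting: C = alpha * A * B + beta * C,
        // using kernels Nvidia has already tuned for its own GPUs.
        cublasHandle_t handle;
        cublasCreate(&handle);
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

        cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("C[0] = %f\n", hC[0]);             // expect 2.0 * n = 1024

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }

A few lines like these replace what would otherwise be a hand-written, hardware-specific kernel, which is precisely the convenience that makes code built on CUDA hard to move elsewhere.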

The importance of Nvidia's software platforms explains why for years Nvidia has had more software engineers than hardware engineers on its staff. Nvidia Chief Executive Jensen Huang recently called his company's emphasis on the combination of hardware and software "full-stack computing," which means that Nvidia makes everything from the chips to the software for building AI.

Every time a rival announces AI chips meant to compete with Nvidia's, it is up against systems that Nvidia's customers have been using for more than 15 years to write mountains of code. That software can be difficult to shift to a competitor's system.

At its June shareholders meeting, Nvidia announced that CUDA now includes more than 300 code libraries and 600 AI models, and supports 3,700 GPU-accelerated applications used by more than five million developers at roughly 40,000 companies.

The enormous size of the market for AI computing has encouraged an array of companies to unite to take on Nvidia. Atif Malik, semiconductor and networking equipment analyst at Citi Research, projects that the market for AI-related chips will reach $400 billion annually by 2027. (Nvidia's revenue for the fiscal year that ended in January was about $61 billion.)

Much of this collaboration is focused on developing open-source alternatives to CUDA, says Bill Pearson, an Intel vice president focused on AI for cloud-computing customers. Intel engineers are contributing to two such projects, one of which includes ARM Holdings Ltd, Google, Samsung and Qualcomm. OpenAI, the company behind ChatGPT, is working on its own open-source effort.

Investors are piling into startups working to develop alternatives to CUDA. Those investments are driven in part by the possibility that engineers at many of the world's tech behemoths could collectively make it possible for companies to use whatever chips they like -- and stop paying what some in the industry call the "CUDA tax."

One startup that could take advantage of all this open-source software, Groq, recently announced a $640 million investment, at a $2.8 billion valuation, to build chips to compete with Nvidia's.

Tech giants are also investing in their own alternatives to Nvidia chips. Alphabet and Amazon.com each make their own custom chips for training and deploying AI, and Microsoft announced in 2023 it would follow suit.

Among the most successful challengers to Nvidia's AI-chip dominance is AMD. It is still a fraction of Nvidia's size in the market -- AMD has projected $4.5 billion in 2024 revenue from its Instinct line of AI chips -- but it is investing heavily to hire software engineers, says Andrew Dieckman, an AMD vice president.

"We have expanded our software resources tremendously," he says. AMD announced last month it would acquire Silo AI for $665 million, adding 300 AI engineers.

Microsoft and Meta Platforms, major Nvidia customers, both buy AMD's AI chips, reflecting a desire to encourage competition for one of the priciest items in tech giants' budgets.

Nonetheless, Malik, of Citi Research, says he expects Nvidia to maintain a market share of around 90% in AI-related chips for the next two to three years.

To understand the pluses and minuses of the alternatives, consider what it takes to build a ChatGPT-style AI without using any hardware or software from Nvidia.

Babak Pahlavan, CEO of startup NinjaTech AI, says he would have used Nvidia's hardware and software to launch his company -- if he could have afforded it. But shortages of Nvidia's powerful H100 chips have kept prices high and access challenging.

Pahlavan and his co-founders eventually turned to Amazon, which makes its own custom chips for training AI, the process by which such systems "learn" from huge troves of data. After months of effort, the team finally succeeded in training their AI on Amazon's chips, known as Trainium. It wasn't easy.

"There were lots of challenges and bugs," says Pahlavan, whose team at NinjaTech AI met four times a week, for months, with an Amazon software team. Finally, the two companies worked out the issues, and NinjaTech's AI "agents," which perform tasks for users, launched in May. The company claims more than one million monthly active users for its service, all of whom are served by models trained, and running, on Amazon's chips.

"In the beginning there were a few bugs on both sides," says Amazon Web Services executive Gadi Hutt, whose team worked with NinjaTech AI. But now, he says, "we're off to the races."

Customers that use Amazon's custom AI chips include Anthropic, Airbnb, Pinterest and Snap. Amazon offers its cloud-computing customers access to Nvidia chips, but they cost more to use than Amazon's own AI chips. Even so, it would take time for customers to make the switch, says Hutt.

NinjaTech AI's experience illustrates one big reason startups like it are enduring the pain and additional development time required to build AI outside of Nvidia's walled garden: cost.

To serve more than a million users a month, NinjaTech's cloud-services bill at Amazon is about $250,000 a month, says Pahlavan. If he were running the same AI on Nvidia chips, it would be between $750,000 and $1.2 million, he adds.

Nvidia is keenly aware of all this competitive pressure, and that its chips are costly to buy and operate. Huang, its CEO, has pledged that the company's next generation of AI-focused chips will bring down the costs of training AI on the company's hardware.

For the foreseeable future, Nvidia's fate is a question of inertia -- the same kind of inertia that's kept businesses and customers locked into various other walled gardens throughout history. Including Apple's.
