Broadcom and Marvell are a growing threat to Nvidia as AI-chip needs evolve
By Therese Poletti
Hyperscalers are looking at ASIC chips to lower their costs of building out big AI data centers
Nvidia Corp. could start seeing some competition for its graphics processors for AI from custom chips over the next few years, as cloud companies and others start finding lower-cost solutions to the AI processing problem.
Wall Street began to realize how big that potential opportunity was this month, after the earnings calls of both Broadcom Inc. (AVGO) and Marvell Technology Inc. (MRVL). Both companies talked about big potential opportunities for their custom ASIC (application-specific integrated circuit) chips, citing the future plans of their hyperscaler customers.
ASICs, which have been around since the 1960s, are semiconductors custom-designed for specific functions, which makes them more efficient than general-purpose processors. Technically, GPUs are themselves specialized chips, originally designed for rendering graphics, but ASICs can be even more specialized and therefore even faster at very specific tasks. In 2017, Susquehanna Financial Group analyst Chris Rolland wrote in a note that he believed ASICs could be long-term winners in the AI race.
Also read: A warning to Nvidia and AMD: GPUs may not hold the AI crown forever.
"While Nvidia GPUs are easier to program and are used for advanced AI development, hyperscalers are large enough that they want XPU ASICs that have lower total cost of ownership for internal and customer AI workloads," said Kevin Krewell, a principal analyst at Tirias Research, in an email. An XPU is traditionally a cross-architecture processor, but for Broadcom, XPU refers to its custom-designed high-performance accelerator chips.
One of Nvidia's key advantages is its CUDA software layer, which lets developers program Nvidia's GPUs for their specific needs. But as Broadcom Chief Executive Hock Tan said on the company's earnings call last week, three of its hyperscaler customers want to build clusters of 1 million custom XPUs across a single, open Ethernet network. "One million is a whole new game in terms of architecture," Tan said.
Nvidia recently began to support Ethernet for AI networking, but in the past it favored the more expensive InfiniBand networking architecture, which is largely used in the high-performance computing arena. Broadcom also makes high-speed Ethernet networking chips that could be used in the massive clusters Tan described. Tan said networking chips would grow to 15% to 20% of Broadcom's total AI silicon content, up from 5% to 10% today.
Broadcom appears to be seeing a bigger opportunity right now than Marvell, based on its whopping recent projection of a potential serviceable market of $60 billion to $90 billion in 2027. Tan based that projection on the plans of those three hyperscalers, plus the possibility of two others. Analysts believe the three current customers are Alphabet's (GOOG) (GOOGL) Google, Meta (META) and TikTok parent ByteDance.
"Overall, the AI story seems to really be coming into its own," Stacy Rasgon, a Bernstein Research analyst, wrote last week. "Perhaps Hock might think about shopping for a leather jacket," referencing Nvidia (NVDA) CEO Jensen Huang's iconic look.
When Rolland made his call on ASICs several years ago, he also pointed to the example of cryptocurrency mining, which started out using GPUs before many crypto miners switched to ASICs as their systems grew bigger. At the time, Rolland called Broadcom, Marvell and Microsemi, a subsidiary of Microchip Technology (MCHP), his best bets for 2018.
Wall Street is now seeing the first inklings of that prediction playing out.
"Our sense is that investors do not yet fully appreciate that the computing industry is evolving from a ubiquitous, standard-merchant model to a fragmented, mixed merchant/custom-IC (integrated circuit) model in the data center that could grow to $50 billion over the next five years," wrote Evercore ISI analyst Mark Lipacis, in a note to clients earlier this month. "Marvell is one of the two established vendors (along with Broadcom) that is positioned to capture a material piece of that business."
In early December, Marvell announced a five-year deal to provide Amazon.com Inc. (AMZN) with chips, including custom AI chips for its web-services business, AWS. Marvell now projects faster growth in its custom-silicon business than previously expected, and it sees its AI revenue surging to over $1.5 billion in fiscal 2025 and over $2.5 billion in fiscal 2026, up from over $550 million in fiscal 2024.
"There's already been a lot of noise in the system around these types of opportunities and applications," said Marvell Chief Executive Matt Murphy, adding that what actually ships and represents the bulk of the volume in custom silicon for AI accelerators will come "from scaled-up companies like Marvell" that have the intellectual property, the road map and other capabilities. Google's Tensor processors are also custom ASICs.
Investors have already been getting slightly nervous about Nvidia's position as the leader and the possibility that it could lose some of its massive market share in AI. In the past month, Nvidia's shares have fallen about 11% and are now in correction territory, while Broadcom's have soared 45% and Marvell's have jumped about 26%. Both of these new AI darlings, though, have other business segments that are not growing as fast as AI.
And of course there is always the possibility that customers will decide to slow down their massive build-outs of data centers if the return on investment looks weaker. But for now, both Broadcom and Marvell appear to be having their Nvidia moments.
This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.
(END) Dow Jones Newswires
December 18, 2024 08:00 ET (13:00 GMT)
Copyright (c) 2024 Dow Jones & Company, Inc.