
Google Seeks Partnership with Marvell to Develop AI Inference Chips and Reduce Dependence on Broadcom

Trading Random · 17:15

Google is exploring cooperation with chip designer Marvell Technology to develop two new customized chips for AI inference workloads. The move marks Google’s latest effort to systematically cut its reliance on long-time partner Broadcom, and reflects surging demand for inference chips across the AI industry.

According to a report by The Information on the 19th, people familiar with the matter said Google’s negotiations with Marvell cover two types of chips: a Memory Processing Unit (MPU) designed to work alongside Google’s Tensor Processing Units (TPUs), and a new TPU optimized for running AI models. Unlike Google’s previous purchases of off-the-shelf chips from Marvell, this cooperation aims to develop semiconductor products customized exclusively for Google.

The ongoing negotiations will exert a direct impact on the chip market landscape, putting potential pressure on Broadcom’s stock. Although Broadcom signed a new agreement with Google this month extending their partnership to 2031, Google’s strategic push for supplier diversification has become increasingly clear. Meanwhile, Marvell is poised to further expand its customized chip business, currently its fastest-growing segment.

Two New Chips with Divided Functions to Boost Inference Efficiency

Two insiders noted that the MPU jointly developed by Google and Marvell will collaborate with existing TPUs to dynamically distribute AI workloads based on varying computing and memory requirements. This design addresses the inherent heterogeneity of inference tasks: certain generative steps demand high computing power, while others are constrained by memory read-and-write speed, which a single processor cannot balance effectively.
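The compute-versus-memory split described above can be illustrated with a simple routing rule based on arithmetic intensity (FLOPs per byte of memory traffic): compute-bound steps such as prefill go to the matrix-math unit, while memory-bound steps such as token-by-token decode go to a memory-optimized unit. This is a hypothetical sketch for intuition only; the names, threshold, and figures below are illustrative assumptions, not details of Google’s or Marvell’s actual design.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    flops: float        # arithmetic work for this step
    bytes_moved: float  # memory traffic for this step

def route(step: Step, intensity_threshold: float = 50.0) -> str:
    """Send high-arithmetic-intensity steps to a compute-heavy unit
    ("TPU"-like) and low-intensity, memory-bound steps to a
    memory-optimized unit ("MPU"-like). Threshold is illustrative."""
    intensity = step.flops / step.bytes_moved
    return "TPU" if intensity >= intensity_threshold else "MPU"

# Prefill (large matrix multiplies) is compute-bound; decode
# (repeated KV-cache reads) is memory-bound. Numbers are made up.
prefill = Step("prefill", flops=1e12, bytes_moved=1e9)  # intensity 1000
decode = Step("decode", flops=1e9, bytes_moved=1e8)     # intensity 10

print(route(prefill))  # TPU
print(route(decode))   # MPU
```

A single processor sized for one regime wastes capacity in the other, which is why a heterogeneous pairing like the reported MPU/TPU combination can raise overall inference efficiency.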

Google plans to manufacture nearly 2 million MPUs. However, given the early stage of negotiations, this figure is subject to adjustment. For reference, Morgan Stanley estimates Google’s TPU output will reach approximately 6 million units in 2027. The two sides target finalizing the MPU design as early as next year before launching trial production.

The second chip is a new TPU built exclusively for inference; its design timeline and planned production volume remain undisclosed. Google currently manufactures its chips at TSMC, and it is not yet confirmed whether the new chips will use the same foundry.

Inference Chip Arms Race Accelerates, NVIDIA LPU Acts as a Key Catalyst

Intensifying competitive pressure from NVIDIA is a core factor driving Google to accelerate this partnership.

A Google insider stated that the company had long planned to develop dedicated inference chips, but significantly accelerated the effort after NVIDIA unveiled its Language Processing Unit (LPU) at the GTC conference this March. NVIDIA’s LPU is built on technology licensed from startup Groq for USD 20 billion. Notably, Marvell served as the chip design partner for Groq’s first-generation LPU, giving Marvell hands-on experience in inference chip development.

The explosive demand for inference chips stems from the evolution of AI products themselves. Complex applications such as autonomous AI agents require far greater computing resources than traditional chatbots. OpenAI recently signed an inference chip procurement deal worth more than USD 20 billion with Cerebras and is co-developing proprietary inference chips with Broadcom, as the entire industry ramps up its positioning in inference.

Decoupling from Broadcom: Clear Strategic Goals, Practical Constraints

Google’s collaboration negotiations with Marvell are part of its supplier diversification strategy launched in 2023. As the exclusive long-term design partner for Google’s TPU products, Broadcom collects royalties based on production volume. Surging TPU demand has substantially increased Google’s payment costs, becoming the core motivation for Google to seek alternative suppliers.

Google brought in MediaTek to participate in TPU design and production last year, and the new partnership with Marvell further expands its supplier ecosystem. Google has previously purchased CXL controller chips from Marvell for cross-server memory sharing in its data centers, so the two companies already have an established working relationship.

Even so, Broadcom’s core position cannot be shaken in the short term. The new contract signed this month enables Broadcom to supply customized TPUs and network components for Google’s next-generation AI data center racks through 2031. This indicates Google’s diversification strategy focuses on adding alternative suppliers rather than a complete switch away from Broadcom.

For Marvell, in-depth cooperation with Google will bring high-profile client endorsement for its customized chip business. While Marvell’s core businesses cover data center networking, storage and optical connectivity chips, custom chip development has become its fastest-growing business in recent years.

The commercialization of Google’s TPU business also widens the market potential of this cooperation. Google began leasing TPU resources to external clients outside its own data centers last year, challenging NVIDIA’s dominance of the AI chip market; Anthropic, Meta, and Apple are all current TPU clients. If development of the new inference chips goes smoothly, Google will be able to expand their use beyond its internal business needs.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial product; nor should any associated discussions, comments, or posts by the author or other users. It is provided for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy or completeness of the information; investors should do their own research and may seek professional advice before investing.
