Alphabet is in negotiations with semiconductor firm Marvell Technology over the customization of Tensor Processing Units (TPUs) and is exploring the development of a specialized chip optimized for large language model inference workloads.
Together with Marvell's recent $2 billion agreement with NVIDIA, the talks sharpen Marvell's strategic positioning in the market for custom AI data center chips.
According to reports, Alphabet is actively discussing a TPU development project with Marvell, in which Marvell would act as a design service provider. The discussions have also extended to a dedicated chip optimized specifically for the inference workloads of large language models (LLMs).
The negotiations come just days after Alphabet extended its long-term agreement with Broadcom for TPUs and networking through 2031, suggesting that Alphabet is deliberately diversifying its custom silicon suppliers and aims to leverage Marvell's expertise in high-speed interconnect technology to optimize both cost and performance.
**Strategic Rationale Behind Alphabet's Supplier Diversification**
Reports indicate that the talks between Alphabet and Marvell operate on two levels: first, the custom development of TPUs with Marvell acting as a design service partner; and second, the development of a specialized chip for LLM inference scenarios, sometimes referred to as a Language Processing Unit (LPU) architecture.
For Alphabet, which renewed its Broadcom agreement through 2031 only days earlier, engaging multiple custom silicon suppliers is standard practice as investment in rapidly expanding AI infrastructure intensifies: it mitigates technical risk and preserves bargaining power. Marvell's accumulated expertise in high-speed interconnect technology also addresses a specific optimization need for certain workloads within Alphabet's operations.
For Marvell, even though the discussions remain preliminary, the potential implications are significant: annualized revenue from its custom ASIC business currently runs at approximately $1.5 billion, and a contract with Alphabet would add a substantial, high-value revenue stream on top of its existing design orders.
**NVIDIA's $2 Billion Investment Integrates Marvell into Core AI Ecosystem**
The backdrop for the Alphabet negotiations includes a pivotal agreement announced by Marvell last month: a strategic investment from NVIDIA totaling $2 billion, coupled with the establishment of a deep collaboration framework utilizing NVLink Fusion.
Under this agreement, Marvell will design custom XPUs and NVLink-compatible scale-up networks that will be directly integrated into NVIDIA's rack-scale AI architecture, operating in conjunction with GPUs, Vera CPUs, network interface cards, and switches. The two companies will also collaborate on silicon photonics technology for optical interconnects.
NVIDIA has characterized the partnership as an expansion of its AI ecosystem, stating that it gives customers greater flexibility while keeping its own interconnect architecture central. For Marvell, the $2 billion cash injection directly strengthens its balance sheet and accelerates its expansion in the AI communications and AI-RAN markets.

