Broadcom reported record revenue for the first quarter of fiscal year 2026, driven by its AI chip business, which more than doubled year-over-year. The company also projected a remarkable milestone: surpassing $100 billion in AI chip revenue by 2027.
As the global generative AI race intensifies, key players in the underlying computing infrastructure are delivering results far exceeding market expectations. On the subsequent earnings call, Broadcom provided ambitious guidance: "We are now confident that in 2027, AI revenue from chip sales alone will exceed $100 billion."
Overall, Broadcom is building a scalable, replicable, and long-term infrastructure business around "custom AI chips + Ethernet networking," with commitments locked in through 2028. Key points from the call are summarized below.
**AI Chip Revenue >$100B by 2027, 6 Long-Term Strategic Customers** CEO Hock Tan delivered a striking forecast: in 2027, AI revenue from chips alone—encompassing XPUs, switch chips, and DSPs—will surpass $100 billion. It is crucial to note this figure refers specifically to chips, excluding racks and system integration.
When asked by an analyst, he confirmed that installed capacity is expected to approach 10 gigawatts by 2027. At industry value estimates of roughly $20 billion per gigawatt, 10 gigawatts implies a potential market of around $200 billion, so Broadcom capturing more than $100 billion of it does not appear exaggerated.
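The back-of-envelope math behind that claim can be checked directly. The figures below are the ones cited in the call coverage (the $20 billion-per-gigawatt value is an industry estimate, not a Broadcom-disclosed number):

```python
# Sanity check of the implied AI chip market size for 2027.
value_per_gw_usd_b = 20       # industry estimate: ~$20B of chip value per gigawatt
capacity_gw = 10              # ~10 GW installed capacity expected by 2027

implied_market_usd_b = value_per_gw_usd_b * capacity_gw   # total addressable chip value
broadcom_target_usd_b = 100   # Broadcom's >$100B AI chip revenue guidance
implied_share = broadcom_target_usd_b / implied_market_usd_b

print(f"Implied 2027 market: ${implied_market_usd_b}B")        # → $200B
print(f"Broadcom share at target: {implied_share:.0%}")        # → 50%
```

In other words, the $100 billion guidance assumes Broadcom captures roughly half of the implied chip value, which is why management framed it as aggressive but plausible.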
Another key revelation was the explicit identification of the customer base as six companies. Publicly disclosed or inferable clients include Google (TPU), Meta (MTIA), OpenAI, Anthropic, and two other undisclosed LLM platform customers. Specifically, Anthropic's demand for TPU compute capacity is projected to surge to over 3 gigawatts by 2027, while OpenAI is expected to deploy over 1 gigawatt of compute capacity in the same year.
Critically, all six are LLM platform companies building their own custom XPUs under multi-generation roadmap collaborations, with planning cycles spanning 2-4 years. Management emphasized these are not short-term transactions but multi-generational strategic partnerships.
**XPUs to Gradually Erode GPU Dominance** Hock Tan was explicit during Q&A: GPUs are general-purpose architectures for dense matrix multiplication, whereas XPUs can be customized for specific workloads like Mixture of Experts (MoE), inference, pre-fill, and decoding.
As AI models evolve, customized XPUs are expected to become the preferred choice for customers, as they allow for architectures tailored to specific workloads, offering lower cost and power consumption.
Broadcom observes that technically mature customers are moving towards developing two dedicated chips annually—one specialized for model training and another productized specifically for inference.
This indicates that demand for custom chips is not a one-time replacement for GPUs but a long-term, dual-track expansion.
**Networking: A Severely Underestimated Growth Engine** During the Q&A session, management heavily emphasized networking. Networking accounted for 33% of AI revenue in Q1, is projected to reach 40% in Q2, and is expected to remain in the 33%–40% range over the long term.
The growth logic for networking stems from two dimensions:
* **Scale-out:** Ethernet is the preferred solution. Broadcom's first-to-market 100 Tbps Tomahawk 6 switch is seeing immense demand, and the company plans to launch the performance-doubling Tomahawk 7 in 2027.
* **Scale-up:** Within rack clusters, Broadcom advocates connecting XPUs or GPUs over Direct Attached Copper (DAC) cables for as long as physically possible, since copper offers lower latency, lower power consumption, and cost advantages over optical solutions. Broadcom's technology already enables 400G transmission rates over copper.
This means Broadcom benefits simultaneously across three dimensions: switch chips, DSPs, and Ethernet scale-up.
**Supply Chain Capacity Locked Through 2028** Leveraging deep, multi-year partnerships with customers, Broadcom has proactively secured key component capacity—including leading-edge wafers, high-bandwidth memory, and substrates—from 2026 through 2028, making it one of the earliest companies to lock in 2028 capacity.
When asked about the rationale for extending supply visibility to 2028, Tan credited "early expectations" and "excellent partners," noting Broadcom's early mastery of techniques like reticle set locking. Charlie Kawwas, President of the Semiconductor Solutions Group, added that customers share 2-4 year forecasts, prompting the company to secure capacity and technology investments early. When an analyst asked if growth is achievable in 2028 given current supply, Kawwas affirmed, "Yes."
On inventory, CFO Kirsten Spears disclosed: "Inventory was $3.0 billion at the end of the first quarter, as we continue to procure components to support strong AI demand." Inventory days rose to 68 days from 58 days last quarter, primarily due to "our expectation of accelerating growth in AI semiconductor revenue."
**Infrastructure Software Unthreatened by AI, Poised to Benefit** Beyond hardware, Broadcom also highlighted the "certainty" of its software business during the call.
Tan stated, "Our infrastructure software is not disrupted by AI." He described VMware Cloud Foundation (VCF) as the "core software layer" of the data center, emphasizing its long-term value: "As the permanent abstraction layer between AI software and physical silicon, VCF is not being displaced or replaced."
The company disclosed that VMware revenue grew 13% year-over-year in the first quarter. Infrastructure software "orders remain strong, with total bookings in the first quarter exceeding $9.2 billion," and Annual Recurring Revenue (ARR) grew 19% year-over-year. Tan further asserted, "We believe the growth of generative AI and agentic AI will increase, not decrease, the demand for VMware."