Broadcom (AVGO.US), one of the major beneficiaries of the global AI boom, reported its fiscal first-quarter 2026 earnings and second-quarter guidance before the market open on March 5, Beijing time. Overall, both the latest results and management's outlook for the next quarter exceeded Wall Street analysts' expectations. In particular, the $100 billion AI chip revenue outlook further validates the Street's view that the AI boom remains in an early infrastructure build-out phase characterized by shortages of computing power. It also underscores that, as the AI inference era arrives, driven by surging demand for cloud-based AI inference and by the trend of embedding large AI models into enterprise operations via "micro-training," more cost-effective AI ASIC computing systems are mounting a serious challenge to NVIDIA's nearly 90% share of the AI chip market.
Broadcom is a key chip supplier for Apple and other major tech firms, a core provider of high-performance Ethernet switch chips for large global AI data centers, and a supplier of the custom AI ASICs crucial to AI training and inference at cloud computing giants. Following the release of its exceptionally strong results and outlook, Broadcom's stock surged more than 5% in after-hours trading, lifting shares of other AI computing supply chain players such as TSMC and Micron. The report single-handedly revived recently flagging confidence in the AI trade, demonstrating to the market that spending on AI computing infrastructure by tech giants like Google and Meta and AI leaders like OpenAI and Anthropic remains robust, and largely reflecting the explosive growth in computing demand from users of top-tier AI platforms such as Gemini, Claude, and ChatGPT.
Furthermore, Broadcom's management announced a new stock repurchase program of up to $10 billion, emphasizing that it will continue through year-end and signaling that its efforts to capitalize on unprecedented AI computing spending are paying off. The most significant element of the report came on the earnings call, where CEO Hock Tan said the company expects its cumulative AI chip revenue, driven by AI ASICs, to surpass the $100 billion milestone next year. This marks substantial market share gains and technological progress for Broadcom in an AI chip domain dominated by the world's highest-valued company, the "AI chip superpower" NVIDIA (NVDA.US).
On Wall Street, analysts are extremely optimistic about Broadcom's AI chip revenue prospects, with 12-month price targets concentrated between $450 and $535; by contrast, the stock closed at $317.53 on Wednesday. "We have very clear line of sight to achieving this milestone in 2027," Tan said on the call with Wall Street analysts. "We have also secured the supply chain needed to achieve this." The company expects AI-related chip revenue of $10.7 billion in the current quarter, implying that reaching a $100 billion annual revenue level would require a further significant leap in global AI computing demand.
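As a back-of-envelope illustration (simple arithmetic on the figures above, not a company disclosure), annualizing the current quarter's guided AI revenue shows the size of the gap to a $100 billion yearly pace:

```python
def annualized_run_rate(quarterly_revenue_bn: float) -> float:
    """Annualize a single quarter's revenue as a simple 4x run rate."""
    return quarterly_revenue_bn * 4


def multiple_to_target(run_rate_bn: float, target_bn: float) -> float:
    """How many times the current run rate must grow to reach the target."""
    return target_bn / run_rate_bn


# Guided AI-related revenue for the current quarter, in $bn (from the article):
current_quarter_ai_bn = 10.7
run_rate = annualized_run_rate(current_quarter_ai_bn)   # ~$42.8bn/year
gap = multiple_to_target(run_rate, 100.0)               # ~2.3x further growth
print(f"Annualized run rate: ${run_rate:.1f}bn; gap to $100bn: {gap:.2f}x")
```

A straight 4x run rate ignores sequential growth, so the actual gap would close faster if AI revenue keeps compounding quarter over quarter, which is why the article frames $100 billion as requiring "a further significant leap" rather than an impossibility.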
Under Tan's leadership, Broadcom is increasingly tying its fortunes to the unprecedented AI infrastructure boom. Although NVIDIA remains the largest supplier of AI chips—the latest core hardware for training and efficiently running large AI models—Broadcom has positioned itself as a more cost-effective and power-efficient alternative through its custom semiconductor business. Broadcom's latest $100 billion AI chip-related revenue target includes both revenue from "AI ASIC computing clusters" that compete fiercely with NVIDIA's dominant AI GPUs, and revenue from AI networking chips, specifically high-performance Ethernet switch chips.
Turning to the latest financial metrics: for the fiscal first quarter ended February 1, Broadcom's total revenue rose 29% year over year to $19.3 billion, and adjusted earnings per share, excluding certain items, were $2.05. Both figures exceeded analysts' average expectations of roughly $19.2 billion in revenue and $2.03 in EPS. Broadcom said AI-related revenue during the period doubled to $8.4 billion, growing much faster than previously anticipated. Tan stated in a release that this growth "was driven by strong demand for custom AI ASIC accelerators and high-performance AI networking equipment." Semiconductor solutions revenue, which includes AI ASICs and smartphone RF chips, reached $12.515 billion in Q1, a substantial 52% year-over-year increase.
Tan said on the call that he expects OpenAI to begin volume shipments next year of the AI ASIC computing chips it is developing in partnership with Broadcom, with computing capacity expected to exceed 1 gigawatt. He also noted that demand for Google's TPU is very strong and will accelerate further in 2027. In addition, Broadcom plans to ship AI ASIC chips co-developed with AI application leader Anthropic, which currently relies on Google's TPUs, targeting 1 gigawatt of capacity this year and over 3 gigawatts next year.
Regarding the closely watched guidance, the company expects total revenue of approximately $22 billion for the second quarter ending May 3, implying a potential year-over-year increase of about 47%, significantly higher than the Wall Street consensus of approximately $20.5 billion, though some analysts had forecasts above $22 billion.
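The roughly 47% figure can be sanity-checked by backing out the implied year-ago quarter. This is illustrative arithmetic only: the prior-year base below is inferred from the stated growth rate, not quoted in the article.

```python
def implied_prior_year_base(guided_bn: float, yoy_growth_rate: float) -> float:
    """Back out the year-ago quarter implied by guidance and stated YoY growth."""
    return guided_bn / (1 + yoy_growth_rate)


def yoy_growth(current_bn: float, prior_bn: float) -> float:
    """Year-over-year growth rate as a fraction."""
    return current_bn / prior_bn - 1


# ~$22bn guidance and ~47% stated growth imply a year-ago quarter near $15bn:
base = implied_prior_year_base(22.0, 0.47)
print(f"Implied year-ago quarter: ${base:.1f}bn")
print(f"Check: {yoy_growth(22.0, base):.0%} growth")
```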
So far this year, market skepticism toward the AI prospects of Broadcom, NVIDIA, and other AI computing supply chain leaders has persisted, fueled by concerns over the sustainability of trillion-dollar-scale AI computing expenditures. Year-to-date through Wednesday's close, Broadcom's stock had fallen 8.3%. Investors are growing increasingly worried about a potential bubble in this unprecedented AI spending; even NVIDIA's explosively growing earnings report last month failed to lift bullish sentiment, with its stock falling significantly post-earnings. The key question is whether the current AI wave will extend over the next decade or two, and whether global AI computing spending, which could reach trillions of dollars before 2030, will ultimately generate AI revenue that outstrips the expenditures themselves.
TPU at Full Throttle! The Golden Age of AI ASIC Arrives

In recent years, Broadcom's market capitalization has soared, now exceeding $1.5 trillion, thanks to massive orders for custom AI ASIC chips from leaders like Google, OpenAI, and Anthropic PBC. Growing global enterprise interest in deploying Google's TPU AI computing clusters also benefits Broadcom's prospects, as it has long partnered with the tech giant to develop the core TPU chips. Meanwhile, Broadcom has just shipped the first products of its next-generation computing processor and said that about six other hyperscale customers will adopt this generation of ASIC products this year.
Beyond its custom AI ASIC business, Broadcom is continuously upgrading its high-performance networking equipment to better connect the powerful computing resources required for running AI models. Tan has also built a substantial software business benefiting from the cloud AI training/inference boom through acquisitions. This strong earnings report sufficiently demonstrates that the unprecedented growth logic for AI ASICs is being rapidly confirmed by "earnings-level evidence."
The generative AI craze sweeping the globe has accelerated the AI chip development processes of cloud and chip giants, who are racing to design the fastest and most power-efficient AI computing infrastructure clusters for advanced large AI data centers. Broadcom and its primary competitor Marvell focus primarily on leveraging their absolute advantages in high-speed interconnects and chip IP to partner with cloud giants like Amazon, Google, and Microsoft to build custom AI ASIC computing clusters tailored to their specific AI data center needs. This ASIC business has grown into a very significant segment for both companies; for instance, the TPU AI computing cluster developed by Broadcom and Google is a classic example of the AI ASIC technology path.
Undoubtedly, major economic and power constraints are pushing Microsoft, Amazon, Google, and Facebook parent Meta toward internally developed AI chips based on the AI ASIC technology path for their cloud systems. The core goal is to make AI computing clusters more cost-effective and power-efficient. The high construction costs of super-sized AI data centers akin to "Stargate" mean tech giants increasingly demand more economical AI computing systems; under power constraints, they strive to push "cost per token" and "output per watt" to the extreme. The prosperous era of the AI ASIC technology path has truly arrived.
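The two metrics the giants are optimizing can be made concrete with a minimal sketch. All numbers below are hypothetical placeholders for illustration, not vendor figures:

```python
def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_second: float) -> float:
    """All-in hourly cost of a node divided by its token throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000


def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Inference throughput per watt of power draw ("output per watt")."""
    return tokens_per_second / power_watts


# Hypothetical comparison: a general-purpose GPU node vs. a custom ASIC node.
gpu_cost = cost_per_million_tokens(hourly_cost_usd=12.0, tokens_per_second=2500)
asic_cost = cost_per_million_tokens(hourly_cost_usd=7.0, tokens_per_second=2000)
print(f"GPU: ${gpu_cost:.2f}/Mtok  ASIC: ${asic_cost:.2f}/Mtok")
```

The point of the sketch is that an ASIC can win on cost per token even with lower raw throughput, as long as its all-in hourly cost (hardware amortization plus power) falls faster than its throughput does.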
Furthermore, compared to the long-term supply shortages, high costs, and constraints of supply chain bottlenecks and delivery schedules associated with advanced AI GPU computing clusters like those based on NVIDIA's Blackwell architecture, self-developed AI ASICs undoubtedly provide a "second curve" of production capacity. They also offer more leverage in procurement negotiations, product pricing, and cloud service margins. Additionally, cloud majors like Google and Microsoft can co-design the entire stack—"chip-interconnect-system-compiler/runtime-scheduling-observability/reliability"—improving computing infrastructure utilization and lowering Total Cost of Ownership.
While AI training, where NVIDIA's GPUs hold a near monopoly, demands greater generality from AI computing clusters and rapid iteration of the entire computing system, the AI inference side, once cutting-edge AI technologies are deployed at scale, places greater emphasis on cost per token, latency, and power efficiency. For example, Google explicitly positions its Ironwood as a TPU generation "born for the AI inference era," emphasizing performance, power efficiency, cost-effectiveness, and cluster scalability. However, Amazon's latest moves demonstrate that AI ASICs may also possess strong potential for training large models.
Over the medium to long term, the AI ASIC computing ecosystem will undoubtedly continue to erode NVIDIA's monopoly premium and some of its market share, rather than linearly replacing the GPU system. The fundamental reason is that core competition in the inference era is no longer just about "peak computing power" but about cost per token, power consumption, memory bandwidth utilization, interconnect efficiency, and the total cost of ownership after hardware-software co-design. On these metrics, ASICs customized for specific workloads, with tailored data flow, compilers, and interconnects, are inherently better positioned to achieve high cost-effectiveness than general-purpose GPUs.
What is more likely to happen in future AI data centers is: cutting-edge training and general-purpose cloud computing will continue to be dominated by GPUs, while hyperscale internal inference, Agent workflows, and fixed high-frequency loads will accelerate the shift towards ASICs. Data centers are entering a true era of heterogeneous computing.
Broadcom to Lead AI ASIC! Wall Street Bullish on Further Gains

Amazon AWS explicitly positions its AI ASIC computing clusters, Trainium/Inferentia, as dedicated accelerators for generative AI training and inference, with Trainium2 offering roughly 30-40% better price-performance than its AI GPU cloud instances. Google recently stated publicly that Gemini 2.0's training and inference run 100% on TPUs. This indicates that hyperscalers using self-developed ASICs for core model training and inference is no longer a proof of concept but is entering a replicable industrialization phase.
In the era of cutting-edge training, the AI field most needed generality, software maturity, and rapid adaptation to new model architectures, hence GPUs had a natural advantage. But as the industry shifts from "training scarcity" to "scaled inference, Agentification, long context, low latency," the core KPIs will comprehensively shift from "highest peak compute" to cost per token, throughput per watt, and system-level TCO. This is the fundamental reason for the collective acceleration of hyperscalers' ASICs. For instance, Google explicitly defines its Ironwood TPU as the optimal computing cluster for the "inference era," scalable to 9,216 chips; Microsoft positions its newly launched AI ASIC Maia 200 directly as an accelerator for cloud inference, claiming 30% better performance per dollar than its existing latest-generation hardware; AWS defines Trainium3 as a chip pursuing "optimal token economics," boasting over 4x power efficiency improvements. Together, they indicate that as cloud giants initiate an "AI computing cost revolution" to advance ASIC penetration, market concerns about NVIDIA's growth prospects are justified.
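Claims like "30% better performance per dollar" reduce to a simple ratio. The sketch below shows how such a comparison is computed, using made-up throughput and cost figures rather than any vendor's actual numbers:

```python
def perf_per_dollar(throughput: float, unit_cost: float) -> float:
    """Performance per dollar: workload throughput divided by cost."""
    return throughput / unit_cost


def relative_improvement(new: float, baseline: float) -> float:
    """Fractional improvement of `new` over `baseline`."""
    return new / baseline - 1


# Hypothetical figures: same cost, 30% more throughput on the target workload.
baseline = perf_per_dollar(throughput=100.0, unit_cost=10.0)
candidate = perf_per_dollar(throughput=130.0, unit_cost=10.0)
print(f"{relative_improvement(candidate, baseline):.0%} better perf/$")  # 30%
```

Note that such ratios are workload-dependent: an accelerator can post a large perf-per-dollar gain on the fixed, high-frequency inference loads it was designed for while losing to a general-purpose GPU elsewhere, which is exactly the heterogeneous split the article describes.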
According to a Counterpoint Research report, Broadcom will maintain its absolute leadership position in the AI data center server ASIC design partner space through 2027, with a market share reaching 60%. Counterpoint also forecasts that AI server ASIC shipments will exceed 15 million units by 2028, surpassing total data center AI GPU shipments. Counterpoint expects ASIC shipments to more than triple by 2027, as Google, Amazon, Apple, Microsoft, ByteDance, and OpenAI accelerate the deployment of massive-scale AI server computing clusters for training and inference workloads. Counterpoint attributes this rapid growth to demand for Google's TPU infrastructure (supporting the Gemini project), continued expansion of Amazon's Trainium clusters, and capacity increases from Meta's MTIA and Microsoft's Maia chips as their internal product lines expand.
Wall Street analysts are extremely optimistic about the revenue and profit growth prospects of Broadcom's AI-related businesses. Of the 55 analysts covering the stock, 96% assign the equivalent of a "Buy" rating, with an average price target of around $454.
Wall Street's long-term bull thesis for Broadcom rests on three core points: (1) explosive growth of the AI computing business: as the most critical technology partner for Google's TPU AI computing clusters, Broadcom directly benefits from the expanding AI capital expenditures of cloud giants such as Google, Meta, and OpenAI; (2) an increasingly large order backlog; and (3) stability from infrastructure software: the acquisition and integration of VMware Cloud Foundation is progressing smoothly, providing Broadcom with strong cash flow and a growth engine in infrastructure software closely tied to cloud AI training and inference.