Marvell Technology (MRVL.US), a key player in custom AI chips (AI ASICs) for large-scale data centers and a major partner for Amazon's AWS Trainium series, reported financial results on March 6 that exceeded Wall Street expectations across the board. The robust performance, coming just a day after larger rival Broadcom (AVGO.US) posted explosive growth figures, underscores a mounting challenge to NVIDIA's (NVDA.US) near-90% share of the AI chip market. Several forces are driving the shift: the full arrival of the AI inference era, persistently strong demand for storage chips, surging demand for cloud-based AI inference computing power, and a trend toward "micro-training" that embeds large AI models into business operations — all of which favor more cost-effective AI ASIC systems.
For its fiscal fourth quarter 2026 ended January 31, Marvell reported record revenue of approximately $2.22 billion, representing year-over-year growth of over 20% and slightly surpassing analyst estimates of around $2.21 billion. Non-GAAP earnings per share (EPS) were $0.80, beating the average Wall Street forecast of $0.79 and up from $0.60 a year ago. GAAP operating profit surged 72% to $404.4 million, exceeding expectations, while net income attributable to common shareholders jumped about 97.9% to approximately $396.1 million.
The data center segment, closely tied to AI training and inference systems, was a standout contributor, generating roughly $1.65 billion in revenue. This accounted for about 74% of total revenue, growing 21% year-over-year and increasing 9% sequentially from a strong prior quarter. The company emphasized that order rates for its data center business are accelerating at a "record pace." Following the earnings release, Marvell's stock price surged over 15% in after-hours trading.
Looking ahead, Marvell's CEO anticipates "further acceleration" in year-over-year revenue growth for the current fiscal year. The company's revenue guidance for the first quarter of fiscal 2027 is approximately $2.4 billion at the midpoint, significantly higher than the average analyst estimate of about $2.27 billion. That the guidance clears estimates analysts had already revised upward repeatedly since late January, following strong reports from giants like Google, Amazon, and NVIDIA, underscores the explosive global demand for AI computing infrastructure built on the ASIC technology path. The company's Non-GAAP EPS guidance range is $0.74 to $0.84, with the midpoint above consensus, and its Non-GAAP gross margin forecast of 58.25% to 59.25% also exceeds average analyst expectations.
The previous day, AI ASIC leader Broadcom reported results showing total revenue grew 29% year-over-year to $19.3 billion. Revenue linked to AI more than doubled to $8.4 billion, far exceeding the company's prior expectations. Semiconductor solutions revenue, including AI ASIC and smartphone RF chips, reached $12.515 billion, a significant 52% increase. Notably, Broadcom's CEO stated that next year's AI chip-related revenue, encompassing AI ASIC compute clusters and AI networking chips, is projected to surpass $100 billion.
The strong results from both ASIC leaders reinforce an "AI ASIC bull narrative." As cloud giants like Google, Amazon, and Microsoft push an "AI computing cost revolution" to accelerate ASIC adoption, the core competition in the inference era is shifting from peak computing power to metrics like cost per token, power consumption, memory bandwidth utilization, interconnect efficiency, and total cost of ownership. On these metrics, ASICs customized for specific workloads inherently offer higher cost-effectiveness than general-purpose GPUs.
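To make the cost-per-token framing concrete, the sketch below amortizes hardware and electricity cost over a node's lifetime token throughput. All dollar, power, and throughput figures are purely illustrative assumptions, not vendor or article data; the point is only that a slower but cheaper, lower-power ASIC node can win on cost per token even when it loses on peak throughput.

```python
# Hypothetical sketch: comparing inference cost per million tokens for a
# general-purpose GPU node vs. a workload-specific ASIC node.
# All input figures are illustrative assumptions.

def cost_per_million_tokens(capex_usd, power_kw, tokens_per_sec,
                            lifetime_years=4, usd_per_kwh=0.08):
    """Amortized hardware cost plus energy cost per 1M tokens served."""
    seconds = lifetime_years * 365 * 24 * 3600
    total_tokens = tokens_per_sec * seconds
    energy_cost = power_kw * (seconds / 3600) * usd_per_kwh
    return (capex_usd + energy_cost) / total_tokens * 1_000_000

# Assumed inputs: the GPU node has higher raw throughput, but also higher
# purchase price and power draw; the ASIC node is tuned for one workload.
gpu_cost = cost_per_million_tokens(capex_usd=250_000, power_kw=10.0,
                                   tokens_per_sec=40_000)
asic_cost = cost_per_million_tokens(capex_usd=120_000, power_kw=5.0,
                                    tokens_per_sec=30_000)

print(f"GPU : ${gpu_cost:.4f} per 1M tokens")
print(f"ASIC: ${asic_cost:.4f} per 1M tokens")
```

Under these assumed numbers the ASIC node serves tokens more cheaply despite 25% lower throughput, which is the basic economic logic behind the hyperscalers' ASIC push.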
According to TipRanks, Wall Street analysts are highly optimistic about Marvell's prospects in AI chips and SSD storage controllers, with a consensus "Strong Buy" rating and a 12-month average price target of $118, implying potential upside of about 56%.
From the perspective of the global AI infrastructure build-out, Marvell's strong performance is primarily fueled by exploding demand for data center infrastructure semiconductors, especially custom AI ASICs, high-performance communication/control chips, and data-center eSSD storage controllers. The company's recent growth trajectory leans heavily on its data center business, which supplies custom AI ASICs, high-bandwidth networking chips, interconnect solutions, and SSD controllers to cloud providers and supercomputing platforms. Custom silicon design for hyperscale data center clients has become its core growth engine.
Amazon AWS positions its Trainium/Inferentia clusters, developed with Marvell, as specialized accelerators for generative AI training and inference, claiming 30-40% better price-performance versus its AI GPU cloud instances. Google has also stated that Gemini 2.0 training and inference run 100% on its TPUs. This indicates that the model of hyperscaler-developed commercial ASICs handling core model training and inference is moving from proof-of-concept to replicable industrialization.
Marvell's results, combined with strong reports from memory chip makers Samsung, SK Hynix, and Micron, also highlight the critical role of high-performance SSD storage controllers as a core driver of "implicit computing power." In large model training/inference systems, I/O bandwidth, persistent storage access efficiency, and memory pool interconnect efficiency constrain overall training cost and performance. Marvell's SSD controllers, NVMe/CXL cache controllers, and high-bandwidth storage interconnect products are key components experiencing growing demand.
SSD storage chips are perfectly positioned in the AI super-cycle, benefiting from both training and inference expansion while serving as a "universal toll gate" across platforms and architectures. As the AI era shifts from training to inference, agents, long context, and retrieval-augmented generation, system demands for capacity, bandwidth, power efficiency, and data persistence will only intensify. Driven by robust demand from AI data centers, DRAM and NAND prices are expected to continue soaring. BNP Paribas forecasts DRAM contract prices to surge 90% quarter-over-quarter in Q1 2026, with NAND prices rising 55%, continuing an upward trajectory from the second half of 2025. TrendForce has similarly revised its price increase forecasts upward.
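The forecast price jumps compound quickly at the system level. A minimal sketch, using the article's forecast percentages (BNP Paribas: DRAM contract prices +90% quarter-over-quarter, NAND +55% for Q1 2026) but a purely hypothetical baseline memory bill of materials for an AI server:

```python
# Forecast QoQ increases from the article (BNP Paribas, Q1 2026).
dram_qoq = 0.90   # DRAM contract prices +90%
nand_qoq = 0.55   # NAND prices +55%

# Hypothetical baseline memory spend per AI server (USD) -- illustrative only.
baseline = {"dram": 20_000, "nand": 8_000}

after = {
    "dram": baseline["dram"] * (1 + dram_qoq),
    "nand": baseline["nand"] * (1 + nand_qoq),
}

total_before = sum(baseline.values())
total_after = sum(after.values())
blended_increase = total_after / total_before - 1

print(f"Memory BOM: ${total_before:,} -> ${total_after:,.0f} "
      f"(+{blended_increase:.0%})")
```

With this assumed DRAM-heavy mix, the blended memory bill rises about 80% in a single quarter, illustrating why memory pricing has become a first-order input to AI server cost models.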
Marvell's CEO, Matt Murphy, stated that the company secured a record number of custom chip design wins in fiscal 2026 and expects this trend to continue. He projected "further acceleration" in overall year-over-year revenue growth for the current fiscal year due to "continued strength in the data center business," adding that data center bookings are accelerating at a "record pace."
While NVIDIA's AI GPUs dominate the training side, which demands generality and rapid iteration, the inference side prioritizes cost per token, latency, and energy efficiency once AI deployments reach scale. Google, for instance, positions its Ironwood TPU as built for the "AI inference era," emphasizing performance, energy efficiency, price-performance, and scalability. Amazon's recent moves, meanwhile, demonstrate that AI ASICs also have potential for training large models.
The AI ASIC compute ecosystem will likely continue eroding NVIDIA's monopoly premium and market share over the medium to long term, not through linear replacement but by changing competitive dynamics. The future AI data center will likely see GPUs dominating cutting-edge training and general cloud computing, while hyperscale internal inference, agent workflows, and fixed high-frequency loads increasingly shift to ASICs, heralding a true heterogeneous computing era. As the industry moves from "training scarcity" to "scaled inference, agentification, long context, and low latency," key performance indicators are shifting from peak compute power to cost per token, throughput per watt, and system-level TCO. This is the fundamental reason for the collective acceleration of hyperscaler ASIC development, as evidenced by initiatives from Google, Microsoft, and AWS, validating market concerns about NVIDIA's growth prospects.