The NVIDIA GTC 2026 conference served not only as a platform for product launches but also as a catalyst for a significant reshuffle in the supply chain. As details of NVIDIA's next-generation AI platform, Vera Rubin, come into focus, the pivotal roles of three chip giants—Samsung, Micron, and Intel—have been clarified.
According to TrendForce, the most closely watched supply chain development was NVIDIA CEO Jensen Huang publicly confirming for the first time that the Groq 3 LPU will be manufactured by Samsung. Concurrently, Micron announced that its HBM4 memory entered mass production in the first quarter of 2026, dispelling prior rumors of its exclusion from the Vera Rubin supply chain. These announcements directly influence the competitive landscape of the HBM market and the bargaining power of suppliers.
Simultaneously, Intel solidified its partnership with NVIDIA at the event, confirming that its Xeon 6 processors will provide computational support for the DGX Rubin NVL8 system. Looking further ahead, reports from Wccftech suggest Intel is poised to participate as a foundry partner in the packaging of NVIDIA's next-generation Feynman GPU, scheduled for launch in 2028.
**Samsung Secures LPU Foundry Order, Confirmed by Huang** The Groq 3 was one of the most anticipated releases at GTC. This LPU, designed for high-speed AI inference, will be integrated into the Vera Rubin platform, with shipments planned to begin in the second half of 2026. Reports indicate that Jensen Huang confirmed Samsung Foundry as the manufacturer, continuing a pre-existing foundry agreement between Groq and Samsung that predated NVIDIA's acquisition of Groq last year.
Technologically, the Groq 3's design philosophy differs significantly from mainstream AI accelerators. Each Groq 3 LPU incorporates 500MB of SRAM—an ultra-fast memory type typically used for CPU and GPU caches. While this capacity is substantially smaller than the 288GB of HBM4 equipped on the Rubin GPU, its bandwidth reaches approximately 150 TB/s, vastly exceeding the 22 TB/s provided by HBM4. This design is expected to substantially boost performance for bandwidth-intensive AI inference tasks.
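The bandwidth gap quoted above can be made concrete with some back-of-envelope arithmetic. The sketch below uses only the figures from the article; the 1 GB transfer size is an illustrative assumption, not a Groq or NVIDIA specification, and it assumes a purely bandwidth-bound transfer.

```python
# Rough, illustrative arithmetic on the bandwidth figures quoted above.
# This is not an NVIDIA/Groq performance model.

SRAM_BW = 150e12  # ~150 TB/s, Groq 3 LPU on-chip SRAM (quoted)
HBM4_BW = 22e12   # ~22 TB/s, Rubin GPU HBM4 (quoted)

def stream_time_us(nbytes: float, bw_bytes_per_s: float) -> float:
    """Microseconds to stream `nbytes` once over a memory interface,
    assuming the transfer is purely bandwidth-bound."""
    return nbytes / bw_bytes_per_s * 1e6

one_gb = 1e9  # 1 GB of weights/activations to move per step (illustrative)

print(f"SRAM: {stream_time_us(one_gb, SRAM_BW):.1f} us per GB")  # ~6.7 us
print(f"HBM4: {stream_time_us(one_gb, HBM4_BW):.1f} us per GB")  # ~45.5 us
print(f"Ratio: {SRAM_BW / HBM4_BW:.1f}x")                        # ~6.8x
```

The roughly 6.8x per-chip bandwidth advantage is why an SRAM-fed design can outrun a far larger HBM pool on inference workloads that repeatedly stream the same working set.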
Winning this foundry order expands Samsung's role within NVIDIA's supply chain, moving it beyond HBM4 memory supply into logic chip manufacturing and strengthening its strategic position on the Vera Rubin platform.
**Micron's HBM4 Reaches Mass Production, Pressuring SK Hynix's Premium** Micron formally announced at the conference that its 36GB 12-layer stack HBM4 has commenced mass production for the NVIDIA Vera Rubin platform in Q1 2026. This product features a pin speed exceeding 11 Gb/s and a bandwidth surpassing 2.8 TB/s, representing a 2.3x improvement over HBM3E while also achieving over 20% better power efficiency. Furthermore, Micron has begun shipping samples of a 48GB 16-layer stack HBM4, which offers a 33% increase in capacity per stack compared to the 12-layer version.
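Micron's quoted figures are internally consistent and can be sanity-checked against the interface arithmetic. In the sketch below, the 2048-bit HBM4 and 1024-bit HBM3E per-stack interface widths are JEDEC figures assumed for illustration, as is the 9.6 Gb/s HBM3E pin speed; the rest comes from the article.

```python
# Sanity-check the quoted HBM4 figures: per-stack bandwidth is
# pin speed x interface width, converted from bits to bytes.
# Interface widths and the HBM3E pin speed are assumptions (JEDEC-typical),
# not Micron-confirmed numbers.

HBM4_PIN_GBPS = 11.0     # per-pin speed, Gb/s (quoted: "exceeding 11 Gb/s")
HBM4_WIDTH_BITS = 2048   # HBM4 interface width per stack (JEDEC)
HBM3E_PIN_GBPS = 9.6     # a common HBM3E per-pin speed (assumption)
HBM3E_WIDTH_BITS = 1024  # HBM3E interface width per stack (JEDEC)

def stack_bw_tbps(pin_gbps: float, width_bits: int) -> float:
    """Per-stack bandwidth in TB/s: pin speed x pin count, bits -> bytes."""
    return pin_gbps * width_bits / 8 / 1000

hbm4 = stack_bw_tbps(HBM4_PIN_GBPS, HBM4_WIDTH_BITS)
hbm3e = stack_bw_tbps(HBM3E_PIN_GBPS, HBM3E_WIDTH_BITS)

print(f"HBM4 per stack:  {hbm4:.2f} TB/s")        # ~2.82, matches ">2.8 TB/s"
print(f"Generational gain: {hbm4 / hbm3e:.1f}x")  # ~2.3x, matches the claim
print(f"16-hi vs 12-hi capacity: {48/36 - 1:.0%}")  # 33%, matches the claim
```

Under these assumptions, all three quoted numbers (>2.8 TB/s, 2.3x over HBM3E, and the 33% capacity gain from 12-high to 16-high stacks) fall out of the arithmetic.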
The market significance of this development extends beyond Micron's own technological achievement. Analysis suggests that Micron's accelerated production timeline will reduce concentration among HBM suppliers, increasing competitive pressure on existing players regarding shipment allocations and pricing negotiations. The core impact is not necessarily an immediate erosion of SK Hynix's market share, but rather a potential weakening of the monopoly premium that formed during periods of peak HBM demand.
Samsung also faces more direct competitive pressure. While Samsung has officially advanced its own HBM4 production to demonstrate technical capability, Micron's large-scale supply commitment for the Vera Rubin platform may shift the industry's competitive benchmark from "ability to mass produce" to "scale of actual adoption," presenting a new challenge for Samsung.
**Intel's Dual-Pronged Strategy, Feynman Packaging Collaboration Emerges** Intel's presence at GTC was equally notable: the company officially confirmed that its Xeon 6 processors will power the NVIDIA DGX Rubin NVL8 system. Reports indicate the system delivers a 2.3x increase in memory bandwidth over its predecessor, providing scalable, high-performance AI computing for next-generation GPU-accelerated workloads.
In a longer-term strategic move, reports suggest NVIDIA intends to collaborate with Intel in the foundry space. This partnership would leverage Intel's advanced packaging technologies, including EMIB, to provide packaging support for the Feynman GPU slated for release in 2028. Notably, the Feynman GPU's core die is expected to be manufactured using TSMC's 1.6nm process, with Intel's involvement focused primarily on the packaging stage.
The Feynman platform is also set to introduce 3D chip stacking technology, potentially marking NVIDIA's first use of such a design in a GPU product. For memory, NVIDIA plans to equip Feynman with customized HBM, rather than using standard next-generation HBM products, further enhancing the differentiated competitive advantage of its AI data center platform.