Samsung's HBM Strategy: HBM4 Leads 2024 Shipments, HBM5 Substrate Advances to 2nm

Deep News · 03-18

Samsung Electronics is accelerating its next-generation High Bandwidth Memory (HBM) deployment. While HBM4 officially enters mass production this year, the company is already planning further ahead: the HBM5 base-die (substrate) process will advance from 4nm to 2nm, and HBM5E will adopt 1d DRAM as its core stacked memory. Concurrently, HBM4 is projected to constitute over half of Samsung's total HBM shipments for the year, with overall HBM output more than tripling compared to 2023.

According to reports from ETNews and Yonhap News Agency, Samsung Electronics' Vice President and Head of Memory Development, Hwang Sang-joon, disclosed these plans during Nvidia's GTC conference. He stated that the base die for HBM5 will utilize Samsung Foundry's 2nm process, representing a generational leap over the 4nm process used for HBM4 and HBM4E, aiming to meet the higher memory performance demands of next-generation AI workloads.

Regarding production targets, Hwang Sang-joon indicated that Samsung aims for HBM4 to account for more than 50% of its total HBM shipments this year, while overall HBM production volume is set to increase by over three times compared to the previous year. The targets underscore Samsung's commitment to expanding its presence in the AI memory market, a push that will directly affect the supply dynamics of high-end DRAM and the downstream AI accelerator supply chain.

Beyond the memory roadmap, Hwang Sang-joon also revealed that the Groq 3 inference chip is currently being produced at Samsung's Pyeongtaek campus, with mass production targeted for the end of the third quarter to the beginning of the fourth quarter of this year and order volumes already exceeding initial expectations. This move further extends Samsung's role from a pure-play memory supplier toward becoming a full-stack AI accelerator partner.

**HBM5 Substrate Process: Leap from 4nm to 2nm**

As reported by ETNews, Hwang Sang-joon explicitly stated at Nvidia's GTC that the base die for HBM5 will be manufactured using Samsung Foundry's 2nm process, marking a significant upgrade from the 4nm process used for HBM4 and HBM4E. Enhancements in the base die process typically contribute to improved memory bandwidth and energy efficiency.

Hwang Sang-joon noted that while leading-edge processes raise costs, adopting advanced technology is essential to achieving HBM's target performance. The remark underscores Samsung's technical strategy in high-end AI memory: driving performance leaps through process upgrades.

For HBM5E, ETNews reported that Hwang Sang-joon stated the product will use 1d DRAM as the core stacked memory, representing another upgrade compared to the 1c DRAM used in HBM4 and HBM4E.

The 1d DRAM intended for HBM5E is still in Samsung's internal development phase and has not yet been commercialized. However, citing informed sources, ETNews reported that Samsung has achieved strong performance results and test yields for the technology, suggesting it is on track for mass production.

**HBM4 Dominates 2024 Shipments, Capacity Triples Year-over-Year**

According to Yonhap News Agency, Hwang Sang-joon stated that Samsung's goal for this year is for HBM4 to comprise over 50% of its total HBM shipments, while annual HBM production volume increases more than threefold compared to last year.

HBM4 only officially entered mass production this year. Samsung plans to aggressively scale up mass production while significantly expanding overall HBM capacity to match rising demand for high-bandwidth memory from the AI chip market. If these capacity expansion plans are realized, they will have a substantive impact on the supply landscape of the high-end DRAM market.
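Taken together, the two targets imply a sharp absolute jump for HBM4 specifically. A minimal arithmetic sketch, normalizing last year's total HBM output to 1.0 (only the ratios, not any absolute volumes, come from the report):

```python
# Normalize last year's total HBM output to 1.0 (illustrative units only).
last_year_total = 1.0
this_year_total = 3.0 * last_year_total  # "more than threefold" growth target
hbm4_share = 0.5                         # HBM4 to be >= 50% of this year's shipments

hbm4_volume = hbm4_share * this_year_total
print(hbm4_volume)  # 1.5 -> HBM4 alone would exceed last year's ENTIRE HBM output by 50%
```

In other words, hitting both targets means this year's HBM4 shipments alone would be at least 1.5 times last year's total HBM output across all generations.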

**Groq 3 Foundry Work: Samsung Expands Role in Nvidia Ecosystem**

Beyond its memory business, Samsung is further extending its position in the AI accelerator supply chain by handling the foundry production for the Groq 3 inference chip.

Yonhap News Agency reported that Hwang Sang-joon mentioned Nvidia CEO Jensen Huang has publicly acknowledged Samsung's contribution to Groq 3. The chip is being produced at Samsung's Pyeongtaek campus, with mass production targeted for the end of Q3 to early Q4 this year, and current order volumes have surpassed expectations.

The Groq 3 die reportedly exceeds 700 square millimeters, yielding only about 64 chips per wafer, far fewer than the 400 to 600 typical of smaller dies. Roughly 70% to 80% of the die area consists of SRAM, enabling fast on-chip inference computation without relying on external HBM. Hwang Sang-joon also revealed that Groq had been a Samsung Foundry customer even before signing its licensing agreement with Nvidia.
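The reported figures line up roughly with the standard gross-die-per-wafer approximation. As a sketch (the ~64 figure presumably reflects additional edge exclusion and defect loss beyond this purely geometric estimate), for a 300mm wafer:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic gross-die estimate: wafer area divided by die area,
    minus an edge-loss term proportional to the wafer circumference."""
    d = wafer_diameter_mm
    area_ratio = math.pi * d * d / (4.0 * die_area_mm2)
    edge_loss = math.pi * d / math.sqrt(2.0 * die_area_mm2)  # partial dies at the edge
    return math.floor(area_ratio - edge_loss)

# A >700 mm^2 die gives on the order of 75 gross die candidates per 300 mm wafer,
# in the same ballpark as the ~64 good chips reported once edge exclusion and
# defects are accounted for.
print(gross_dies_per_wafer(700.0))  # 75 gross dies
# A more typical ~120 mm^2 die lands in the hundreds, matching the 400-600 range.
print(gross_dies_per_wafer(120.0))  # 528 gross dies
```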

As reported by SEDaily, Samsung's role as the foundry for the Groq 3 LPU chip is widely seen as a significant marker of its emergence as a core full-stack platform partner for next-generation AI accelerators. With Samsung Foundry's entry into Nvidia's supply chain, the company's role has expanded from supplying memory alone to also manufacturing LPUs, deepening its integration within the Nvidia ecosystem.

