SK Hynix and Sandisk have launched a consortium to standardize High Bandwidth Flash (HBF), positioning the technology as a next-generation memory layer for AI inference. The move heightens competition in the post-HBM era, as Micron accelerates investments to strengthen its HBM4 and NAND capabilities.
Sandisk stock rose 5% in premarket trading on the news. Pure Storage gained nearly 4%, while Western Digital and Micron were up more than 1%.
CSOP SK Hynix Daily 2x Leveraged Product soared about 15%.
In a press release issued in Seoul on February 26, 2026, SK Hynix said it held an HBF specification standardization consortium kick-off event with Sandisk at Sandisk's headquarters in Milpitas, California. The companies announced plans to pursue global standardization of HBF under a dedicated workstream within the Open Compute Project (OCP), the world's largest open data center technology initiative.
The companies said the AI industry is shifting from training large language models to inference, where AI services are delivered directly to users. During this phase, fast and efficient memory is critical to handle rising user demand while balancing power efficiency and capacity.
HBF is designed as an intermediate layer between high-bandwidth memory (HBM) and solid-state drives (SSDs). While HBM provides ultra-fast bandwidth for real-time computation, SSDs offer large storage capacity at a lower cost. HBF aims to bridge the performance-capacity gap, delivering greater scalability and improved total cost of ownership for AI inference workloads.
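The tiering idea can be sketched in code. The bandwidth, capacity, and cost figures below are rough illustrative assumptions for demonstration, not published specifications of any product:

```python
from dataclasses import dataclass

# Illustrative memory tiers in an AI inference server. All numbers are
# assumed placeholders, not vendor-published specifications.
@dataclass
class MemoryTier:
    name: str
    bandwidth_gbps: float   # sustained bandwidth, GB/s (assumed)
    capacity_gb: float      # capacity per stack/device, GB (assumed)
    cost_per_gb: float      # relative cost, HBM normalized to 1.0 (assumed)

HBM = MemoryTier("HBM", 1000.0,   36.0, 1.00)   # fast, small, expensive
HBF = MemoryTier("HBF",  400.0,  360.0, 0.20)   # ~10x capacity, cheaper
SSD = MemoryTier("SSD",   10.0, 4000.0, 0.01)   # vast, slow, cheapest

def place(working_set_gb: float, tiers: list[MemoryTier]) -> MemoryTier:
    """Pick the fastest tier whose capacity fits the working set."""
    for tier in sorted(tiers, key=lambda t: -t.bandwidth_gbps):
        if working_set_gb <= tier.capacity_gb:
            return tier
    return tiers[-1]  # fall back to the last (largest) tier

# A hot KV cache fits in HBM; bulk model weights spill to the HBF tier.
print(place(20.0, [HBM, HBF, SSD]).name)    # -> HBM
print(place(200.0, [HBM, HBF, SSD]).name)   # -> HBF
```

Under this toy model, the intermediate tier exists precisely for working sets too large for HBM but too latency-sensitive for an SSD.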
SK Hynix said system-level optimization across CPUs, GPUs, and memory will determine competitiveness in the AI inference market. Companies capable of delivering both HBM and HBF solutions are therefore expected to gain strategic advantages.
Industry forecasts suggest demand for complex memory solutions including HBF could begin rising around 2030, as inference workloads expand and AI services scale globally.
Micron steps on the gas as HBM4 competition builds
The move comes as competition in advanced memory intensifies ahead of Nvidia's expected adoption of sixth-generation HBM4 in its next-generation platforms.
Micron has recently accelerated capital investment despite previously taking a relatively cautious stance on HBM4 supply timing compared with Samsung Electronics and SK Hynix. Although earlier reports indicated that Micron's HBM4 performance did not fully meet Nvidia's requirements, industry sources in South Korea said the company has largely resolved those technical issues and is competitive in 16-layer stacking.
Micron announced plans to invest approximately US$24 billion in Singapore over 10 years, with NAND wafer production scheduled to begin in the second half of 2028. An advanced HBM packaging facility in the same manufacturing complex is expected to contribute to HBM supply from 2027. The company has also acquired a DRAM facility in Taiwan to expand production capacity.
In the US, Micron's large-scale manufacturing plan in New York state could receive up to US$25 billion in subsidies under supply chain localization policies. Industry observers noted that potential tariffs on non-US memory suppliers could provide Micron with a relative advantage if implemented.
The architecture shift beyond HBM
While HBM currently dominates AI memory markets, industry analysts expect a division of labor to emerge between HBM and HBF as AI workloads expand. HBM would continue handling real-time, high-speed access for compute-intensive tasks, while HBF would manage larger volumes of historical data and model content at lower cost.
Academic and industry observers expect HBF to be integrated into products from major AI chipmakers as early as late 2027 or early 2028. HBF is often described as analogous to a NAND flash-based version of HBM, using vertical stacking and high-speed interconnects similar to through-silicon via technology.
Although HBF is expected to offer lower speeds than HBM, its capacity could be roughly 10 times greater. However, write endurance remains limited compared with DRAM, meaning software architectures may need to emphasize read-intensive workloads.
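Why endurance pushes software toward read-intensive designs can be shown with back-of-envelope arithmetic. Every figure below (capacity, program/erase cycles, write amplification) is an assumption chosen for illustration, not an HBF specification:

```python
# Back-of-envelope endurance estimate for a flash-based memory tier.
# All constants are illustrative assumptions, not HBF specifications.
CAPACITY_GB = 360          # assumed capacity of one flash stack
PE_CYCLES = 3000           # assumed program/erase cycles per NAND cell
WRITE_AMP = 2.0            # assumed write-amplification factor

def lifetime_years(host_writes_gb_per_day: float) -> float:
    """Years until the assumed P/E-cycle budget is exhausted."""
    total_writes_gb = CAPACITY_GB * PE_CYCLES / WRITE_AMP
    return total_writes_gb / host_writes_gb_per_day / 365

# A read-mostly tier (e.g. model weights loaded once and rarely
# rewritten) lasts far longer than one absorbing constant writes:
print(f"{lifetime_years(100):.1f} years at 100 GB/day of writes")
print(f"{lifetime_years(10_000):.2f} years at 10 TB/day of writes")
```

Under these assumed numbers, steady heavy writing exhausts the cell budget in months, while a read-dominated access pattern stretches it to years, which is why analysts expect software stacks to steer writes toward DRAM and reads toward the flash tier.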
Looking further ahead, some analysts project that by 2038, the HBF market could match or exceed HBM in size. Over the longer term, memory development is expected to evolve toward integrated architectures that combine multiple HBM stacks and HBF layers to shorten data paths and enhance system-level efficiency.