By Jiyoung Sohn
SEOUL -- The notoriously volatile memory-chip industry is entering an extended boom period thanks to artificial-intelligence business with the likes of Nvidia and OpenAI.
Memory-chip makers, led by South Korea's Samsung Electronics and SK Hynix as well as U.S.-based Micron Technology, are enjoying surging demand for a range of products used both to train and to run artificial-intelligence models.
Research firm TrendForce estimates that DRAM, a major type of memory chip, will bring in more than four times as much industrywide revenue next year as it did at the trough of the cycle in 2023, rising to a record of around $231 billion.
Samsung said Thursday that net profit for the July-September quarter rose 21% from a year earlier to the equivalent of about $8.6 billion. Amid a favorable pricing environment, its chip division logged record quarterly revenue, and operating profit rose nearly 80% to the equivalent of $4.9 billion.
A day earlier, SK Hynix reported record earnings, with net profit for the third quarter more than doubling year-over-year to the equivalent of about $8.8 billion. The company said the memory market has entered a "super boom cycle" and its capacity through next year is already sold out.
In September, Micron reported net profit more than tripled to $3.2 billion for its most recent quarter.
Memory chips account for about a quarter of the world's chip sales, according to World Semiconductor Trade Statistics. The other major type is logic chips, a category that includes the specialized AI chips made by Nvidia known as graphics processing units, or GPUs, as well as central processing units that act as the brains of everyday computers and smartphones.
In the AI era, demand has soared for a specialized type of memory called high-bandwidth memory, or HBM, which is designed for training AI models. High-bandwidth memory stacks multiple layers of DRAM and, paired with GPUs, allows larger amounts of data to move between memory and processor at once.
This kind of memory-plus-processor combo is at the heart of AI servers that handle the myriad computations required when AI models are given enormous data sets and told to digest and learn from them.
In October, OpenAI, the maker of ChatGPT, signed letters of intent with Samsung and SK Hynix to bring them on as advanced memory chip and data-center partners in the Stargate infrastructure project. OpenAI's demand is set to be up to 900,000 DRAM wafers a month, which is more than double the industry's current HBM capacity, according to SK Hynix.
OpenAI CEO Sam Altman called the two companies "key contributors to global AI infrastructure."
Strong demand for high-bandwidth memory predates this year, but what is different now is that conventional memory chips are also hot. Major U.S. data-center companies such as Amazon, Alphabet's Google and Meta Platforms are buying a lot of these chips for traditional servers, and supply is tight because memory-chip makers have been expanding capacity mainly for HBM.
While HBM is the type of memory most associated with AI, conventional memory chips for regular servers are useful for some AI tasks too -- particularly AI inference, when a trained model is tapped to generate output such as a chatbot's answers to queries.
For some tasks in inference, such as storing and retrieving the large amounts of data generated in a large-language AI model, it can be more cost-efficient to deploy a traditional server that uses conventional memory, said Peter Lee, a Seoul-based semiconductor analyst at Citi. As trained AI models are being used more widely, data centers are expanding to handle the workload, Lee said.
"With the innovation of AI technology, the memory market has shifted to a new paradigm and demand has begun to spread to all product areas," said SK Hynix's chief financial officer, Kim Woo-hyun.
The HBM market alone is expected to grow by an average of more than 30% over the next five years, SK Hynix said, calling it a conservative estimate.
The memory-chip industry is known for its extreme boom-and-bust cycles. One of the worst busts happened just a few years ago, when an initial surge of pandemic-era demand faded and the AI boom hadn't fully kicked in. In 2023, SK Hynix reported a net loss equivalent at current rates to more than $6 billion.
TrendForce's senior research vice president, Avril Wu, said the current memory supply crunch would persist through 2026 and possibly early 2027.
Still, there is some skepticism about whether the huge long-term infrastructure spending plans announced by firms such as OpenAI will materialize as planned.
"OpenAI has come up with some crazy numbers. Demand for memory is going to be so huge that current capacity and the capacity planned won't be enough to meet the demand," said semiconductor analyst Sanjeev Rana at Hong Kong-based brokerage CLSA. "The question is whether OpenAI will live up to its expectations."
Write to Jiyoung Sohn at jiyoung.sohn@wsj.com