Driven by exceptionally strong demand from artificial intelligence data centers, prices for DRAM and NAND memory products are set to continue their sharp ascent. Two U.S. memory giants, Micron Technology (MU.US) and SanDisk Corp. (SNDK.US), returned to the center of global investor attention during Wednesday's strong rebound in U.S. tech stocks: both closed up nearly 6% for the day, leading a broad rally in the memory sector and the Nasdaq index.
BNP Paribas recently issued a research report forecasting that DRAM contract prices will surge by approximately 90% quarter-on-quarter in the first calendar quarter of 2026. NAND Flash prices, traditionally the more stable of the two, are expected to climb around 55%, extending the upward trajectory that began in the second half of 2025. This bullish outlook is not an isolated view: TrendForce recently revised its forecasts upward, now expecting standard DRAM contract prices to rise 90% to 95% QoQ in Q1 2026, up from a previous estimate of 55% to 60%, and raised its NAND Flash contract price expectation to a range of +55% to +60% QoQ. The firm pointed to surging demand for enterprise SSDs from North American cloud computing providers as a key driver, predicting enterprise SSD prices will rise a further 53% to 58% QoQ in the same quarter.
These developments underscore a critical fact: memory chips have become the "absolute center stage" in the AI super-cycle, rivaling the importance of Nvidia's AI chips, and remain one of the core supply bottlenecks where supply-demand imbalances and pricing power are most acutely felt first.
The relentless rise in memory prices shows no signs of stopping, and BNP Paribas is optimistic about continued strong performance from Micron and SanDisk. Senior analyst Karl Ackerman stated in a client report, "Our in-depth analysis of contract prices for over 50 DRAM SKUs and more than 75 NAND SKUs leads us to estimate that the overall DRAM average selling price could surge by roughly 90% QoQ in the first calendar quarter, followed by another significant sequential increase in the second. The core reason is that growing AI server demand is driving a broader supply-demand imbalance, creating sustained upward pressure on prices."
"For NAND products, we predict prices could rise by about 55% QoQ in the first calendar quarter, with another sequential increase expected in the second. This trend is primarily driven by imbalanced supply-side dynamics, as NAND suppliers continue shifting capacity toward high-performance enterprise NAND products while maintaining extreme caution on new capacity additions." Ackerman set a 12-month price target of $500 for Micron and $650 for SanDisk. At Wednesday's close, Micron rose 5.55% to $400.77, while SanDisk gained 5.95% to $599. BNP Paribas's targets suggest the strong bull run both stocks have enjoyed since 2025 is far from over.
Delving deeper, BNP's Ackerman noted that spot price performance in February was even stronger. Given the correlation between spot and future contract prices, this is a very optimistic sign for the entire memory chip industry as contract renewals approach. Ackerman explained, "During an upturn, DRAM and NAND spot prices often command a significant premium over contract prices. February market pricing showed consumer-grade DDR4 spot prices rose 11% MoM – implying a massive 1,284% year-on-year increase – to $21.93/GB, while contract prices increased 7% MoM to $12.17/GB. This implies an 80% spot price premium over contracts. Similarly, consumer-grade DDR5 spot prices rose 9% MoM to $19.13/GB, while contract prices increased 3% MoM to $11.04/GB, resulting in a 73% spot premium."
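The premium figures follow directly from the quoted per-GB prices; a quick back-of-envelope check, using only the numbers cited in the BNP Paribas note:

```python
# Sanity check of the spot-vs-contract premiums quoted above.
# All inputs are the per-GB prices cited in the BNP Paribas report.

def spot_premium(spot: float, contract: float) -> float:
    """Spot price premium over contract price, as a percentage."""
    return (spot / contract - 1) * 100

ddr4 = spot_premium(21.93, 12.17)   # consumer-grade DDR4
ddr5 = spot_premium(19.13, 11.04)   # consumer-grade DDR5

print(f"DDR4 spot premium: {ddr4:.0f}%")  # ~80%
print(f"DDR5 spot premium: {ddr5:.0f}%")  # ~73%
```

Both results match the 80% and 73% premiums stated in the report.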
Furthermore, multiple overseas media reports on Wednesday indicated that Samsung, the industry's largest memory maker, has roughly doubled its DRAM prices. According to The Elec in Korea, Samsung Electronics finalized Q1 DRAM supply price negotiations last month with major clients such as Apple. The average price of general-purpose DRAM used in servers, PCs, and mobile devices rose by approximately 100% from the previous quarter, effectively doubling from Q4 last year, with some customers and products seeing increases above 100%. The report cited industry sources saying negotiations have largely concluded, with some overseas clients having already completed payments. The hike represents an expansion of about 30 percentage points, within just one month, from the 70% level negotiated in January.
This rapid price escalation is reshaping long-term contract practices in the global memory industry. The growing reliance of GPU/TPU systems on HBM, DRAM, and enterprise SSDs is causing prolonged supply-demand imbalances. Supply negotiation cycles have compressed from traditional annual contracts to quarterly contracts, and now even require monthly adjustments, reflecting the severity of the market imbalance.
Both the "Google AI compute chain" and the "Nvidia GPU chain" are fundamentally dependent on memory. Whether it is Google's massive TPU clusters or vast arrays of Nvidia AI GPUs, all require fully integrated HBM memory systems alongside the AI chips. Beyond HBM, tech giants like Google and OpenAI are accelerating the construction or expansion of AI data centers, necessitating large-scale purchases of server-grade DDR5 memory and enterprise-grade high-performance SSD/HDD storage. Unlike Seagate and Western Digital, which focus on nearline high-capacity HDDs, or SanDisk, which focuses on high-performance eSSDs, the three major memory manufacturers – Samsung Electronics, SK Hynix, and Micron – are positioned across multiple core memory segments: HBM, server DRAM (including DDR5/LPDDR5X), and high-end enterprise datacenter SSDs (eSSD). They are the most direct beneficiaries within the "AI memory + storage stack," collectively capturing the "super dividends" of the AI infrastructure build-out.
From a fundamental hardware perspective, AI computing is constrained not only by compute power but also by "data movement capability." Whether for Nvidia GPUs or TPU systems, what truly determines the efficiency of large-model training and inference is not just the number of Tensor Cores or matrix units, but the bandwidth available to feed weights, KV cache, activations, and intermediate tensors into the compute cores every second. Viewed across both semiconductors and AI data center infrastructure, memory chips are "perfectly positioned" in the AI wave: they benefit from both training expansion and inference expansion, while also serving as a "universal toll gate" across platforms, architectures, and ecosystems. As the AI era shifts from a training-dominated phase to one dominated by inference, AI agents, long contexts, and retrieval-augmented generation (RAG), system demands on capacity, bandwidth, power efficiency, and the data persistence layer will only intensify.
An official Google document explicitly states that Cloud TPUs are equipped with HBM to support larger parameter models and batch sizes. Its "Ironwood" TPU, designed for the "inference era," further increases HBM capacity and bandwidth. The AI GPU ecosystem led by Nvidia is more direct: a single Blackwell Ultra architecture GPU can be equipped with up to 288GB of HBM3e, while the GB300 NVL72 rack-scale system is designed around massive HBM capacity to enhance long-context inference throughput. In other words, without HBM, the peak compute power of GPUs/TPUs cannot be effectively realized; memory bandwidth and capacity determine whether large models can "scale up, speed up, and run at full capacity."
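The claim that bandwidth, not just peak compute, gates inference can be made concrete with a standard back-of-envelope estimate: in a memory-bandwidth-bound decode step, each generated token requires streaming roughly the full set of model weights from HBM, so throughput is bounded by bandwidth divided by model size in bytes. A minimal sketch, in which the 70B-parameter model and the ~8 TB/s bandwidth figure are illustrative assumptions rather than vendor specifications:

```python
# Illustrative estimate of why HBM bandwidth caps decode throughput.
# Assumption: batch-1 decode is bandwidth-bound, i.e. every generated
# token requires reading all model weights from HBM once.

def max_tokens_per_sec(params_billion: float, bytes_per_param: float,
                       hbm_bandwidth_tb_s: float) -> float:
    """Upper bound on tokens/s for a bandwidth-bound decode loop."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    bandwidth_bytes = hbm_bandwidth_tb_s * 1e12
    return bandwidth_bytes / model_bytes

# e.g. a 70B-parameter model in FP16 (2 bytes/param) on an accelerator
# with ~8 TB/s of HBM bandwidth (illustrative figures):
print(f"{max_tokens_per_sec(70, 2, 8):.0f} tokens/s upper bound")
```

Doubling HBM bandwidth doubles this ceiling regardless of compute, which is exactly why each accelerator generation pushes HBM capacity and bandwidth alongside FLOPS.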
Moreover, the storage systems that AI data centers truly rely on extend far beyond HBM. The complete AI storage hierarchy consists of: HBM handling high-speed data feeding closest to the accelerator; DDR5/RDIMM/LPDRAM responsible for host memory expansion and data preprocessing; and enterprise SSDs managing persistent data pathways for training datasets, checkpoints, vector databases, RAG retrieval, and inference caching. For instance, Micron officially defines its AI data center solutions as a "complete storage portfolio" covering both training and inference, explicitly stating that its eSSD product line is designed to maintain efficient data supply throughout the AI pipeline during training and inference. TrendForce also notes that with the arrival of the AI inference era, North American cloud giants are rapidly increasing procurement of high-performance storage, with eSSD demand far exceeding expectations.
This means AI GPU clusters are inseparable from memory, and Google TPU clusters are equally dependent – the only difference lies in the accelerator brand, but the underlying data storage foundation must be built upon the complete pyramid of HBM, server DRAM, and NAND/SSD.
Analysts at Citigroup have adopted a more aggressive stance on a "memory super-cycle" compared to UBS, Nomura, and JPMorgan in their latest storage price outlook. Citi analysts believe that driven by the proliferation of AI Agents and surging memory demands from AI CPUs, memory chip prices will experience runaway increases in 2026. Consequently, they have sharply raised their 2026 DRAM ASP growth forecast from 53% to 88%, and their NAND ASP growth forecast from 44% to 74%. Fueled by both training and inference demand, server DRAM ASP is expected to skyrocket by 144% year-on-year in 2026. Using the mainstream 64GB DDR5 RDIMM as an example, Citi predicts its price will reach $620 in Q1 2026, a 38% sequential increase, significantly higher than a previous forecast of $518.
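Citi's RDIMM figures are internally consistent: a $620 Q1 2026 target combined with a 38% sequential increase implies a prior-quarter price of roughly $620 / 1.38 ≈ $449. A quick check using only the numbers quoted here:

```python
# Sanity check on the Citi 64GB DDR5 RDIMM figures quoted above.
q1_2026_price = 620.0          # Citi forecast for Q1 2026
sequential_increase = 0.38     # 38% QoQ rise

# Back out the price level implied for the preceding quarter.
implied_q4_2025 = q1_2026_price / (1 + sequential_increase)
print(f"Implied Q4 2025 price: ${implied_q4_2025:.0f}")  # ~$449
```

The implied ~$449 starting point underlines how far the $620 target sits above Citi's earlier $518 forecast for the same module.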
In the NAND sector, Citi's forecasts are equally aggressive: within the 74% overall ASP increase, enterprise SSD ASP is projected to rise 87% year-on-year. In the view of Citi analysts, the memory chip market is entering an extremely intense seller's market, with pricing power firmly in the hands of memory giants like Samsung, SK Hynix, Micron, and SanDisk.