Lisa Su's Campaign to Challenge NVIDIA's AI Dominance

Deep News · 03-17 21:44

Lisa Su has led Advanced Micro Devices for over a decade, growing the company's market capitalization more than a hundredfold from under $3 billion. Confronting an AI chip market dominated by NVIDIA, she has deeply aligned with major clients like OpenAI and Meta through equity-for-orders agreements, made early bets on the AI inference segment, and achieved significant performance leaps with the MI series chips. She has also traveled to South Korea in person to secure HBM production capacity, building a comprehensive strategy that spans from investment ecosystems to supply chain logistics and quietly advancing the "de-NVIDIAization" of the AI data center market.

When Lisa Su assumed the role of CEO at AMD in 2014, the company's market cap was less than $3 billion. Today, that figure exceeds $315 billion, a more than hundredfold increase. AMD's surge in market value began in 2018. The previous year, eight scientists from Google published the seminal paper "Attention Is All You Need," which introduced the Transformer architecture based on an attention mechanism. Its parallelism indirectly propelled GPU companies to prominence. Numerous new GPU startups were founded in China around this period, while major players like Google pursued an alternative path, developing their own ASIC chips to optimize the total cost of ownership of computing power.

Today, the competition in the AI chip market, as a fundamental computing infrastructure, has evolved into a multi-dimensional battle. This includes not only a frantic race for computing power, performance, and cost optimization but also considerations around dependency on sustainable energy. Crucially, within the supply chain, the production capacity that leading chip manufacturers like TSMC and memory suppliers such as Samsung and SK Hynix can commit significantly influences the intensity of the AI chip war. From performance and clients to production capacity, Lisa Su faces an unprecedented challenge.

Regarding clients, an industry insider revealed, "Lisa Su is very active in visiting clients in mainland China, far more frequently than Jensen Huang." Publicly, "Su Mama" (her affectionate nickname among Chinese fans) made two trips to China in 2025: the first for the AI PC Innovation Summit in March, the second to visit partners; both trips involved Lenovo.

For much of the past decade, AMD's primary competitor was Intel. AMD gradually eroded Intel's market share in desktops, servers, and consoles. However, the rules of the game have changed in the AI era. In 2018, AMD made a significant pivot towards cloud computing by launching the Instinct series of data center GPUs, its first chips designed for AI workloads, though it remained a follower on this path for years.

At a computer expo in Taipei two years ago, when asked about playing catch-up, Lisa Su did not shy away from acknowledging that AMD was behind. She stated, "It's obvious that the demand for AI is accelerating dramatically. We are really just at the beginning of a decade-long AI super-cycle." That year, a subtle detail in her remarks was easy to overlook: "Last year (2023), we launched the MI300X series, which has a leading advantage in inference." This indicates that Lisa Su recognized the value of inference early on, a foresight later validated—by 2025, inference was frequently discussed, and NVIDIA even acquired inference chip supplier Groq for $20 billion.

AMD's turning point arrived in 2025. In June 2025, at the "Advancing AI" event in San Jose, California, Lisa Su made a bold announcement: AMD's MI350 series (led by the MI355X) had begun shipping, offering inference performance "35 times faster" than the previous generation. At this event, she also revised her market size forecast. Previously predicting the global AI processor market would reach $500 billion by 2028, she now expected this threshold to be breached earlier. "People used to think $500 billion was a very large number," she said after her presentation, "Now, it seems within reach."

While her revised forecast was still modest next to the Silicon Valley giants' capital expenditures, projected to exceed $600 billion by 2026, Lisa Su's boldness was evident when she stated that the MI400 series, slated for 2026, would achieve a significant leap over NVIDIA. "When this series (MI400) debuts, it will position AMD clearly ahead of NVIDIA in existing technology," she said. At the time, OpenAI CEO Sam Altman shared the stage, confirming OpenAI's collaboration with AMD on developing the MI450 chip and noting that its initial specifications were so ambitious he at first thought them "impossible."

The partnership extended beyond chip development. Remarkably, AMD and OpenAI engaged in an "equity-for-orders" arrangement, the details of which will be addressed later. At CES 2026, Lisa Su officially presented the results, announcing the MI450 series. According to her, the MI450 built on the MI300X and MI350 to deliver a step-function improvement in performance—leveraging HBM memory to scale memory capacity, bandwidth, and compute together, breaking the "memory wall" that limits AI inference.

The true highlight was the MI455X. Lisa Su announced that the MI455X series offered a 10x performance improvement over the MI355X. This chip powered AMD's newly developed open 72-card server, "Helios." The Helios system comprises 18 compute trays, each configured with one Venice CPU and four MI455X GPUs. The Venice CPU utilizes a 2nm process with a total of 4,600 cores; the MI455X GPU uses a 3nm process with a total of 18,000 compute cores. The entire system is equipped with 31TB of HBM4 memory and 43TB/s of total bandwidth, delivering 2.9 exaflops of FP8 computing power. AMD emphasized that Helios is an open rack-scale platform paving the way towards yotta-scale computing expansion.
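As a sanity check on the quoted totals, the per-GPU figures implied by the Helios configuration can be derived by simple division (a back-of-envelope sketch using decimal units; AMD's official per-device specifications may differ):

```python
# Per-GPU figures derived from the Helios system totals quoted above.
# Inputs are the article's system-level numbers, not AMD per-device specs.
trays = 18
gpus_per_tray = 4
gpus = trays * gpus_per_tray                    # 72 GPUs per rack

hbm_per_gpu_gb = 31 * 1000 / gpus               # 31 TB HBM4 -> ~431 GB per GPU
fp8_per_gpu_pflops = 2.9 * 1000 / gpus          # 2.9 EF FP8 -> ~40 PF per GPU

print(gpus, round(hbm_per_gpu_gb), round(fp8_per_gpu_pflops))  # 72 431 40
```

The division confirms the internal consistency of the announcement: 18 trays of 4 GPUs gives the advertised 72-card system, with each GPU carrying on the order of 430 GB of HBM4 and roughly 40 petaflops of FP8 compute.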

In AMD's roadmap, the MI500 series, launching in 2027, will be driven by the CDNA 6 architecture, utilize an advanced 2nm process, and feature high-speed HBM4e memory. Lisa Su stated that the MI500 would deliver another major leap in AI performance, powering the next wave of large-scale multimodal models. "Over the next four years, we aim to achieve a 1000x improvement in AI performance," she said, though this time she did not specify whether the figure referred solely to inference performance.

"The future of AI will not be built by any single company or within a closed ecosystem," she predicted. "It will be shaped by open collaboration across the entire industry." This concept of open collaboration is most tangibly represented by forming industry alliances. The opportunity arises from the diversified computing procurement strategies of leading model developers—from OpenAI to Google to Meta. These companies not only purchase NVIDIA products but also incorporate AMD into their procurement systems, essentially buying all available effective computing power. If that still falls short, they supplement with in-house developed systems.

In October 2025, AMD and OpenAI signed a 6-gigawatt GPU supply agreement, utilizing multiple generations of AMD Instinct GPUs to support OpenAI's next-generation AI infrastructure. The initial 1-gigawatt deployment uses AMD Instinct MI450 GPUs and is expected to commence in the second half of 2026. The uniqueness of this deal lies in AMD granting OpenAI warrants to purchase up to 160 million shares of AMD common stock at an exercise price of $0.01 per share. The warrants vest based on specific milestones: the first batch upon completing the initial 1-gigawatt deployment, with subsequent batches vesting as procurement scales to 6 gigawatts. Vesting is also tied to AMD achieving certain stock price targets and OpenAI reaching technological and commercial milestones. This collaboration between a computing power supplier and a major model client was widely interpreted as "equity for orders"—a circular flow of financing between supplier and customer.

Five months later, AMD essentially replicated the OpenAI agreement with Meta. In February 2026, AMD and Meta announced a 6-gigawatt agreement to provide multi-generation AMD Instinct GPU support for Meta's next-generation AI infrastructure. The products for the first gigawatt-scale deployment are expected to begin shipping in the second half of 2026, utilizing AMD Instinct GPUs based on a custom MI450 architecture, sixth-generation AMD EPYC CPUs codenamed "Venice," ROCm software, and the AMD Helios rack-scale architecture. AMD granted Meta performance-based warrants comparable to those for OpenAI, allowing the purchase of up to 160 million AMD shares, representing approximately 10% of the company's stock.

Creative Strategies chip analyst Ben Bajarin estimated this agreement could be worth tens of billions of dollars over at least four years, as "deploying 6 gigawatts takes a considerable amount of time." Lisa Su later stated in an interview that this warrant structure is a "win-win" for shareholders, supporting a "very ambitious" plan and financial model. She described the agreement as one of the "most transformative transactions" for AMD as it expands its AI capabilities. Speaking at a Morgan Stanley conference on March 3, she further explained the logic behind issuing warrants. She noted that one value of warrants is accelerating purchasing behavior within the deal and simultaneously speeding up the development of AMD's ecosystem. Since warrant vesting is performance-based, both companies are incentivized to help each other achieve targets. Su emphasized that the value of such transactions for AMD lies in accelerating procurement, technological ecosystem development, and software ecosystem building, with benefits radiating across the entire AMD ecosystem.

If combinations like AMD+OpenAI and AMD+Meta are considered Plan B, then Lisa Su also holds a Plan C for the future—"AMD the investment bank." The most notable investment by AMD's venture arm, AMD Ventures, occurred in May 2025, co-investing in AI cloud startup TensorWave, mirroring NVIDIA's investment in AI cloud provider CoreWeave. Ecosystem development frequently features on AMD Ventures' agenda, including participation in consecutive funding rounds for Fei-Fei Li's world model project, World Labs, and investments in startups like "Transformer challenger" Liquid AI. Public records also show AMD participated in Series D and E funding rounds for optical interconnect AI chip companies.

AMD's investment portfolio, according to its website and public reports, also encompasses AI drug discovery company Absci, data annotation platform Scale AI, generative video AI company Runway, and multimodal model company Luma AI, among others. From the computing infrastructure layer to middleware and applications, and from foundation models to vertical models, Plan C, centered around Instinct GPUs, outlines the future landscape of the AMD ecosystem.

Production capacity is a perennial topic in the semiconductor industry, encompassing both chip manufacturing capacity and the supply of key components. Simply put, having products and signed clients means little without sufficient production capacity. Chip manufacturing capacity primarily refers to advanced packaging like CoWoS. Disclosures from January indicated that TSMC's total CoWoS advanced packaging capacity for 2026 was estimated at around 1.15 million wafers, with AMD securing about 8% of that, approximately 90,000 wafers.

At the March Morgan Stanley conference, when asked about CoWoS capacity sufficiency, Lisa Su responded, "We absolutely have enough CoWoS capacity. I know many people are checking multiple sources. The best answer I can give you is: we have the capacity, the technology, deep customer relationships, and data center providers have allocated space for this." Estimating based on the MI400's package size of 4800mm², one wafer might yield 8-10 chips. Using the upper estimate of 10 chips and assuming all capacity is for MI400 (though in reality, it won't be), AMD could produce around 900,000 MI400 chips in 2026, equivalent to approximately 12,500 72-card Helios racks.
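The capacity arithmetic above can be reproduced directly. The following sketch uses the article's own rounded estimates (not AMD or TSMC disclosures) and its stated upper bound of 10 chips per wafer:

```python
# Back-of-envelope check of the 2026 capacity estimates quoted above.
# All inputs are the article's figures, not official AMD/TSMC disclosures.
amd_cowos_wafers = 90_000       # ~8% of TSMC's ~1.15M-wafer CoWoS capacity
chips_per_wafer = 10            # upper estimate for a 4,800 mm^2 package
gpus_per_helios_rack = 72       # 18 trays x 4 MI455X GPUs

mi400_chips = amd_cowos_wafers * chips_per_wafer      # 900,000 chips
helios_racks = mi400_chips // gpus_per_helios_rack    # 12,500 racks

print(mi400_chips, helios_racks)  # 900000 12500
```

Note the caveat from the article itself: this assumes the entire allocation goes to the MI400, which in practice it will not, so the real figures would be lower.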

Producing 900,000 MI400 chips implies AMD would need at least 10.8 million HBM stacks this year—roughly twelve per GPU. With firm orders totaling 12 gigawatts from OpenAI and Meta, the stability of HBM supply is critical for fulfilling these orders. The challenge is that one key supplier, Samsung, is also a major HBM supplier for NVIDIA. Securing ideal capacity is more difficult than ever for AMD. In this context, Lisa Su prepared for a client visit to South Korea. Korean media reported she would visit on March 18, marking her first trip there in over a decade as CEO. Sources indicated she might meet with key partners like Samsung Electronics Chairman Lee Jae-yong and Naver CEO Choi Soo-yeon to discuss collaboration in areas like data centers. Industry observers expected her to request an expansion of HBM supply. Samsung had previously announced it began mass-producing HBM4, claiming a world first for shipping this advanced AI accelerator memory.

Beyond HBM, securing supply for DRAM and NAND flash memory is equally urgent. Against a backdrop of tightening global memory chip supply, locking in capacity for AMD's server product lines is a priority. Reports suggested AMD and Samsung were also exploring collaboration in chip foundry services, discussing production of 2nm EPYC Venice CPUs. At the same Morgan Stanley conference, Lisa Su also addressed the memory market: "We plan with suppliers years in advance. We have plans in place for the MI450 ramp and the transition to HBM4. We feel good about HBM supply." However, she acknowledged ripple effects elsewhere in the memory market, where pricing for DDR4, DDR5, and consumer-grade memory was driving up system costs.

While the meeting with Samsung's leadership focuses on capacity, the meeting with Naver's CEO concerns market access. Naver is executing a clear "multi-supplier" strategy, gradually reducing its reliance on NVIDIA. As a significant player in the East Asian server market, Naver's shifting demand presents an opportunity for AMD. Industry analysis suggests Naver seeks optimal infrastructure combinations through supply chain diversification, testing products from Intel and AMD internally while maintaining its partnership with NVIDIA. An insider revealed, "Naver wants to increase the proportion of validated AMD accelerators in its second data center, which aligns perfectly with AMD's goal of expanding in the Korean market."

The timing of Lisa Su's visit to South Korea coincides with NVIDIA's annual GTC 2026 conference. NVIDIA commands approximately 90% of the AI chip market, and Lisa Su and AMD are gradually chipping away at that dominance—an effort reliant on agile maneuvers on the "invisible battlefield" of the supply chain. Her approach is direct: fly there personally, arrange meetings, and secure the capacity.

On the Q4 2025 earnings call on February 4, Lisa Su summarized AMD's 2025: record highs in revenue, net income, and free cash flow. Data Center segment revenue grew 39% year-over-year to a record $5.4 billion. "2025 was an exceptional year for AMD, marking the beginning of a new growth trajectory for the company," she said. "We are entering a multi-year demand super-cycle for high-performance and AI computing, creating significant growth opportunities across our business." In November, at the Financial Analyst Day, she set targets: achieving a compound annual growth rate exceeding 35% over the next three to five years, significantly expanding profit margins, and generating earnings per share of over $20 within the strategic timeframe. "This is all driven by growth across all our businesses and the rapid expansion of our Data Center AI business," she stated.

Unsurprisingly, the primary engine for this growth and expansion will be the Data Center business. Achieving this requires AMD to execute several strategies simultaneously: deeply aligning with major Silicon Valley clients, steadfastly investing in the future, building a resilient supply chain ecosystem, and consistently prioritizing product performance and stability as the foundation. Only by doing so can Lisa Su lead the company in gradually advancing the "de-NVIDIAization" process within the data center market.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products; nor should any associated discussions, comments, or posts by the author or other users be considered as such. It is provided for general information purposes only and does not take into account your own investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy or completeness of the information; investors should do their own research and may wish to seek professional advice before investing.
