Advanced Micro Devices Q4 2025 Earnings Call: MI450 Series Development On Track for H2 Launch and Production

Stock News · 02-04 16:24

Advanced Micro Devices (AMD) recently held its fourth-quarter 2025 earnings conference call. Development of the MI450 series is proceeding very smoothly, with the second-half launch and start of production exactly on schedule. Customer collaborations are advancing steadily, and the partnership with OpenAI remains solid. The associated capacity ramp plan runs from the second half of the year through 2027, with all aspects currently progressing as planned. AMD is also working closely with many other customers who, impressed by the product's strengths, are keen to scale their adoption of MI450 quickly. The company sees significant opportunities in both inference and training. As a result, AMD is very pleased with the overall data center growth anticipated for 2026 and is highly confident of reaching tens of billions of dollars in data center AI revenue by 2027.

Q: Regarding the AI revenue outlook for 2027 and the demand situation for MI455 and the Helios platform in the second half, could you discuss the progress of customer collaborations? A: The development of the MI450 series is advancing very smoothly, with the plan for a second-half launch and production start firmly on track. Customer collaborations continue to progress well, and the partnership with OpenAI remains stable. The associated capacity expansion plan, running from the second half through 2027, is proceeding as scheduled. Additionally, we are working closely with many other customers who are very interested in quickly scaling their MI450 usage due to the product's strengths. We see opportunities in both inference and training. Therefore, we are very pleased with the overall data center growth projected for 2026 and are confident about achieving tens of billions in data center AI revenue by 2027.

Q: Concerning the details of the March quarter guidance and the full-year growth trajectory for data center GPUs, what is the outlook? A: We provide guidance for only one quarter at a time, but we can comment on Q1. While overall revenue is expected to decline sequentially by approximately 5%, the Data Center segment is actually projected to grow. The CPU business, which would typically see a high-single-digit percentage decline in a normal seasonal pattern, is instead forecast to deliver good sequential revenue growth in our current guidance. Data Center GPU revenue, including from the China market, is also expected to increase, so the overall Data Center guidance is quite positive. The Client, Embedded, and Gaming segments are experiencing seasonal declines. For the full year, we are very optimistic. The key theme is exceptionally strong data center growth, driven by two vectors. First, server CPU growth is very robust. As AI continues to expand, CPUs remain critical; we have seen CPU orders strengthen over recent quarters, especially in the last 60 days, and this will be a strong growth driver. Server CPUs grew from Q4 to Q1, a period that is typically seasonally weak, and we expect that trend to continue throughout the year. Second, in Data Center AI, this year represents a crucial inflection point. The MI355 is performing well, we are satisfied with its Q4 results, and we continue to ramp it in the first half. The true inflection point, however, arrives in the second half with the MI450: revenue begins in Q3, increases substantially in Q4, and carries momentum into 2027. That is the general growth trajectory for the Data Center business for the full year.

Q: What are the expectations for MI308 sales in China after Q1, and can Data Center revenue achieve the long-term growth target of over 60% in 2026? A: On the China business: we were pleased to record some MI308 sales in Q4, based on orders placed in early 2025 for which we obtained the necessary licenses. We anticipate approximately $100 million in revenue from this in Q1. Given the dynamic nature of the situation, we are not forecasting additional China revenue beyond that. We have submitted license applications for the MI325 and continue to communicate with customers to understand demand, but we believe it is prudent not to make predictions beyond the $100 million in the Q1 guidance. On the overall Data Center: as mentioned previously, we are very optimistic. The combination of growth drivers we have, including continued strong growth from the EPYC product line (Turin and Genoa), the second-half launch of Venice, which we believe will further extend our leadership, and the very significant MI450 capacity ramp in the second half of 2026, positions us well. While we don't provide specific segment guidance, achieving the long-term growth target of over 60% in 2026 is indeed a possibility.

Q: Concerning server CPU supply capacity, how is the ability to secure additional capacity from partners like TSMC? What is the lead time for wafer output, and what does this imply for the full-year 2026 growth trajectory? Also, how should we think about a pricing inflection point? A: On the server CPU market: we believe the total server CPU market will experience strong double-digit growth in 2026. On our supply capability: we have observed this demand trend over the past few quarters, leading us to enhance our server CPU supply capacity, which is one reason we raised our Q1 server business guidance. We see room for continued growth throughout the year. Demand is undoubtedly strong, and we are working closely with our supply chain partners to increase supply. Based on the current situation, overall server demand is robust, and we are increasing supply to meet it.

Q: What is the full-year gross margin framework, and how do you balance the strengthening server CPU business with the potentially accelerating GPU business in the second half? A: We are satisfied with the Q4 gross margin performance. The Q1 guidance is 55%, representing a 130 basis point year-over-year improvement, while our MI355 scale has expanded significantly compared to the previous year. We are benefiting from a very favorable product mix across all businesses. In the Data Center segment, we are ramping new-generation products, including the MI355, which contributes positively to margins. On the client side, we continue to shift towards premium products and are gaining momentum in the commercial business, where margins have been improving well. Additionally, we see a recovery in the Embedded business, which also contributes to margins. We expect all these tailwinds to persist in the coming quarters. When the MI450 ramp begins in Q4, our gross margin will be largely driven by the product mix. We will provide more details then, but overall, we are very pleased with the gross margin progress expected for the year.

Q: Regarding the MI455 capacity ramp, will the business be 100% in rack form? Will there be server business around an eight-GPU architecture? Is revenue recognized upon shipment to rack suppliers? A: Yes, we do have multiple variants of the MI450 series, including an eight-GPU form factor. However, for 2026, the vast majority will be rack-scale solutions. And yes, we will recognize revenue upon shipment to the rack integrators.

Q: What are the risks in converting chips into racks, and are pre-built racks or other measures being taken to ensure a smooth capacity ramp? A: Development is progressing very smoothly for both the MI450 series and the Helios rack, and both are on schedule. We have conducted extensive testing at both the silicon and rack levels. Everything is proceeding well. We are receiving significant test feedback from customers, allowing us to perform substantial testing in parallel. We anticipate the second-half launch will proceed as planned.

Q: Regarding the continued increase in operating expenses, especially as GPU revenue starts growing, how should we think about the upward path? Can we achieve operating leverage, or will operating expenses grow faster as AI revenue increases? A: Regarding operating expenses, we have high confidence in our existing roadmap. In 2025, as revenue grew, we appropriately increased operating expenses. Entering 2026, with the significant growth we anticipate, and according to our long-term model, operating expense growth should be slower than revenue growth. We expect 2026 to follow this pattern, especially as we see the revenue inflection point in the second half. Given the current free cash flow generation and overall revenue growth, investing in operating expenses is absolutely the right decision.

Q: Is the $100 million Q1 revenue from China also on a zero-cost basis like in Q4? Does this pressure gross margins? Also, can you provide the specific scale of the Instinct business in 2025? A: On the Q1 $100 million revenue: the $360 million inventory provision taken in Q4 was associated not only with the China revenue recognized in that quarter but also covered the $100 million in MI308 revenue we expected to ship to China in Q1. The Q1 gross margin guidance therefore already accounts for this. On the scale of the Instinct business: we do not provide guidance at the business unit level. However, to assist your modeling, if you look at the Q4 Data Center AI data, even excluding the non-recurring China revenue, you can still see growth. There was sequential growth from Q3 to Q4, which should be helpful for your models.

Q: The Client business performed strongly in Q4, but considering inflationary costs in the memory market, have order patterns changed? What is the outlook for Client market growth and health in 2026? A: The client market performed exceptionally well in 2025, with strong growth in both unit volumes and ASPs, the latter driven by a shift toward a premium product mix. Entering 2026, we are monitoring the business closely. Based on current observations, the total addressable market (TAM) for PCs may see a slight decline, considering commodity price inflation pressures, including for memory. Our model for the full year anticipates the second half to be slightly below seasonal patterns relative to the first half. Even in a declining PC market environment, we believe our PC business can achieve growth. Our focus area is the commercial market, where we made good progress in 2025 and expect to continue gaining share in the premium segment in 2026.

Q: Regarding competitors partnering with SRAM architecture suppliers, how does this impact the competitiveness of HBM-based Instinct in the inference market? How is Instinct addressing the demand for low-latency inference? A: This reflects the expected evolution as the AI market matures. As inference scales, efficiency—measured in tokens per dollar or inference efficiency—becomes increasingly important. Our chiplet architecture possesses strong optimization capabilities, allowing for tuning for different phases of inference, not just training. We expect to see more workload-optimized products, which could be delivered via GPUs or more ASIC-like architectures. We have a full compute stack to address all these needs. From this perspective, we will continue to focus on the inference segment, viewing it as a significant opportunity alongside the training capacity ramp.

Q: Regarding the partnership with OpenAI, is the plan for 6 gigawatts over three and a half years on track to start in the second half? Can you provide more details on the partnership? A: We are working closely with OpenAI and our Cloud Service Provider (CSP) partners to deliver the MI450 series and execute the capacity ramp plan. This plan is scheduled to commence in the second half. MI450 progress is smooth, and Helios is also performing well. We are engaged in deep co-development with all these partners. Looking ahead, we are optimistic about ramping MI450 capacity for OpenAI. However, it's important to note that we have numerous other customers who are very interested in the MI450 series. Therefore, alongside the OpenAI partnership, we are working with several other customers to ramp capacity during the same timeframe.

Q: Concerning the competition between x86 and ARM architectures in the server CPU market, does x86 have an advantage in agent workloads, and what is your view on NVIDIA's ARM CPUs? A: There is significant demand for high-performance CPUs currently, especially for agent workloads. These AI processes, or agents, generate substantial traditional CPU tasks, and the vast majority currently run on x86. The advantage of EPYC lies in our workload optimization; we have the best cloud processors and enterprise processors, as well as low-cost form factors for uses like storage. We believe all these factors come into play when building a complete AI infrastructure. CPUs will continue to be a critical part of the AI infrastructure build-out. As we mentioned at our November Analyst Day, this is a multi-year CPU cycle, and we continue to see that playing out. EPYC is optimized for all these workloads, and we will continue to work with customers to expand EPYC's market share.

Q: Regarding the procurement timeline for memory such as HBM, is it done a year in advance, or six months? A: Considering the lead times for supply chain components like HBM and wafers, we work closely with suppliers on a multi-year framework covering demand forecasting, capacity planning, and co-development. We are satisfied with our supply chain capabilities. We have been planning for this round of capacity expansion for several years; regardless of current market conditions, we have been preparing for a significant ramp in both the CPU and GPU businesses for the past few years. Therefore, we believe we are well-prepared for the substantial growth in 2026. Given supply chain tightness, we have also entered into multi-year agreements that extend beyond that planning horizon.

Q: Regarding architectural shifts like system accelerators, KV cache offload, and more discrete ASIC-style computing, will AMD follow these directions? What is your view on the evolution of your own system architecture? A: With our highly flexible chiplet and platform architecture, we have the capability to deliver different system solutions tailored to various needs. We fully recognize that there will be different solutions in the future; there is no one-size-fits-all approach. Rack-scale architecture is excellent for the highest-end applications, like distributed inference and training. Simultaneously, we see opportunities in enterprise AI for other form factors, so we are investing across the entire relevant spectrum.

Q: How will gross margins evolve from MI300 to MI400 to MI500? Will they increase, decrease, or fluctuate? A: At a high level, each generation offers more capability, more memory, and delivers more value to customers. Therefore, generally, gross margins should improve gradually with each generation. Typically, at the start of mass production for a new generation, margins tend to be lower. As scale is achieved, yields improve, testing improves, and overall performance increases, margins within each generation improve. It's a dynamic process, but long-term, one should expect each generation to have higher gross margins.

Q: What is the expected magnitude of the decline in the Gaming business in 2026, and what is the annual trajectory? A: 2026 is the seventh year of the current product cycle. Typically, at this stage, revenue tends to decline. We do expect a significant double-digit percentage decline in semi-custom revenue for 2026. The ramp of next-generation products, such as for Xbox, is expected to reverse this decline trend.

Q: Concerning the rack-scale system capacity ramp, could supply constraints in the second half impact or limit revenue growth, particularly the sequential growth from Q3 to Q4? A: We have planned at every component level. For our Data Center AI capacity ramp, we do not believe we will be supply-constrained based on the plans we have in place. We have an aggressive but achievable ramp plan. Given AMD's scale, our top priority is ensuring the smooth execution of the Data Center capacity ramp, encompassing both the GPU side (Data Center AI) and the CPU side.

Q: What were the biggest investment areas in 2025? What will be the largest incremental OpEx investment area in 2026? A: In 2025, the primary and largest investment focus was on Data Center AI, including accelerating the hardware roadmap, expanding software capabilities, and the acquisition of ZT Systems to enhance system-level solution capabilities. Additionally, significant investment was made in go-to-market efforts to support revenue growth and expand the commercial and enterprise markets for the CPU business. For 2026, we expect to continue investing aggressively, but revenue growth is anticipated to outpace the increase in operating expenses, thereby driving earnings per share growth.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may wish to seek professional advice before investing.
