South Korean artificial intelligence startup Upstage is reportedly in talks with Advanced Micro Devices (AMD) to purchase 10,000 of the chipmaker's latest AI accelerators, a procurement that forms a key part of Upstage's strategy to bring larger-scale AI computing infrastructure to the South Korean market. Coupled with AMD's recent collaborations with technology leaders including Celestica and Hewlett Packard Enterprise to scale production of its Helios rack-scale AI computing infrastructure, the planned purchase signals growing market acceptance of AMD's AI computing solutions. It also suggests AMD may gradually capture share in the trillion-dollar AI computing cluster sector, where NVIDIA currently holds roughly 90% of the market.
Upstage CEO Sung Kim revealed in a media interview that he discussed procuring AMD's MI355 AI accelerators during a meeting with AMD CEO Lisa Su in Seoul last week. "While we currently utilize numerous NVIDIA chips in the Korean market, we aim to diversify our computing power supply by incorporating alternative AI chip providers, including AMD," Kim stated. Upstage is one of four teams in a government-backed competition to select South Korea's premier national-level foundational AI model. Dubbed the "AI Squid Game" after the popular Netflix survival drama created by a Korean team, the initiative is a crucial component of South Korea's ambition to become a global AI powerhouse. A specialized jury supervised by the Ministry of Science and ICT will evaluate and eliminate teams every six months, with two finalists to be selected by early next year. The winner will receive additional NVIDIA AI GPU computing infrastructure.
Kim indicated that Upstage is developing a super-large language model with approximately 200 billion parameters for the pivotal competition round this summer. He emphasized that the company's competitive advantage lies in building high-performance AI models at relatively low cost by combining scale effects with efficient processing methods, a strategy aimed at competing with cost-effective AI model rivals from China and the United States. Upstage is a leading South Korean AI startup focused on large AI models and enterprise-grade AI software. The company holds two notable distinctions: it participates in the government-backed "sovereign AI foundational model" competition, and it reports cumulative fundraising exceeding $100 million as of 2024, which it says makes it the most-funded large AI model company in South Korean history. Beyond general-purpose AI models, the startup is investing in enterprise Document AI and international expansion of sovereign AI, targeting "AI+" applications in sectors including finance, insurance, healthcare, and advanced manufacturing. Kim also named markets such as Vietnam and the UAE as significant potential targets for deploying sovereign-grade AI training and inference systems within national borders.
AMD's push into rack-scale AI infrastructure represents a strategic expansion, with the company aggressively increasing production capacity for its Helios computing clusters. Upstage's negotiations for 10,000 MI355 units, combined with its CEO's explicit statement about diversifying from NVIDIA, indicate that AMD is transitioning from an alternative AI GPU option to a genuine contender for large-scale AI computing infrastructure deployments. Recent reports confirm AMD has deepened its collaboration with Celestica to bring its new Helios rack-scale platform, a direct competitor to NVIDIA's NVL72 rack-scale AI platform, to global data center markets. These developments collectively demonstrate strengthening market recognition of AMD's AI computing cluster solutions. More significantly, Helios is a rack-scale story rather than a single-card story: AMD has elevated the competition from individual GPUs to integrated platforms encompassing 72-GPU racks, network interconnects, and combined CPU+GPU+NIC architectures, partnering with Celestica to accelerate mass production.
AMD's partnership with Celestica to accelerate Helios' market entry coincides with broader collaborations with multiple technology leaders to challenge NVIDIA's dominance in vertically integrated AI computing infrastructure. Previous announcements revealed alliances with Hewlett Packard Enterprise and Broadcom to supply open, rack-scale AI computing infrastructure for high-performance computing clusters and large AI data centers, while accelerating global "sovereign AI" research. Helios is scheduled for customer availability in late 2026, and Meta has signed a multi-generational, multi-year agreement to deploy up to 6 gigawatts of AMD Instinct GPU computing clusters, with the initial gigawatt-scale deployment expected in the second half of 2026. OpenAI has also participated in optimizing the design of AMD's MI450. Combined with strong demand from entities like Upstage for sovereign AI and local large-scale computing, these developments reflect not isolated orders but a broader trend: customers are increasingly reluctant to concentrate AI infrastructure investments with a single supplier, and AMD is positioned to capture demand for secondary core alternatives, open standards, and reduced vendor lock-in.
These recent catalysts provide robust short-to-medium-term bullish drivers for AMD's stock. The company has progressed from an AI chip market follower to a competitive contender in AI training and inference system-level infrastructure. At its 2025 Analyst Day, AMD set aggressive targets, including $100 billion in annual data center chip revenue within five years and a compound annual growth rate above 80% for data center AI computing revenue. CEO Lisa Su projected the total addressable market for AI data centers, encompassing AI central processors, AI accelerators, and high-performance networking products, to surpass $1 trillion by 2030, far exceeding the estimated $200 billion in 2025 and implying a compound annual growth rate of roughly 40%. On overall profitability, Su forecast earnings per share reaching $20 within three to five years. Citigroup analysts designated AMD "king of the hill," assigning a 12-month price target of $260. Analyst targets compiled by TipRanks show a Wall Street average of $285, implying approximately 42% upside over the next twelve months. AMD shares closed at $201.33 last Friday.
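The growth and upside figures above follow from simple compounding. A minimal sketch checking the arithmetic, assuming a five-year 2025-to-2030 horizon for the TAM projection and using the $200 billion base, $285 average target, and $201.33 closing price stated in the article:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by growing from `start` to `end` over `years`."""
    return (end / start) ** (1 / years) - 1

# AI data center TAM: ~$200B in 2025 growing past $1T by 2030 (assumed 5-year span).
tam_growth = cagr(200e9, 1_000e9, 5)

# Implied upside from last Friday's close ($201.33) to the $285 average target.
upside = 285 / 201.33 - 1

print(f"Implied TAM CAGR, 2025-2030: {tam_growth:.1%}")  # ~38%, roughly 40%
print(f"Upside to $285 target: {upside:.1%}")            # ~42%
```

Note that a strict five-year compounding from $200 billion to $1 trillion works out to about 38% annually, which is why the projection is best read as "roughly 40%" rather than an exact figure.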
NVIDIA CEO Jensen Huang unveiled what he termed an "unprecedented AI computing revenue vision" at the March 17 GTC event, telling global investors that the company's future AI chip revenue could reach at least $1 trillion by 2027, driven by robust demand for Blackwell architecture GPU computing and even stronger anticipated demand for the upcoming Vera Rubin architecture AI computing systems. This projection significantly exceeds the $500 billion AI computing infrastructure roadmap through 2026 presented at the previous GTC conference. As growing model scale, longer inference chains, and multimodal and agentic AI workloads drive exponential growth in computing power consumption, technology giants' capital expenditure is increasingly concentrated on AI computing infrastructure. Global investors continue to rank the "AI bull market narrative," centered on product iterations and AI computing cluster delivery expectations for NVIDIA, Google TPU clusters, and AMD, among the most certain growth themes in global equity markets. The trend also points to sustained investor interest in power supply, liquid cooling, optical interconnect supply chains, and other AI training and inference themes, with industry leaders including NVIDIA, AMD, Broadcom, TSMC, and Micron maintaining prominent positions in equity markets despite geopolitical uncertainties in the Middle East.
According to analysis from Wall Street institutions including Morgan Stanley, Citigroup, Loop Capital, and Wedbush, the global artificial intelligence infrastructure investment wave centered on AI computing hardware remains in its early stages. Driven by unprecedented inference-side computing demand, this investment cycle could total $3 to $4 trillion through 2030.