Amid the wave of AI-driven technological change, the traditional logic of production and operations is shifting. We now stand at the convergence of multiple supercycles, in which technological iteration, supply-demand dynamics, and macroeconomic trends reinforce one another and amplify AI's transformative impact across nearly every facet of production and daily life.
In the AI 2.0 era characterized by large models, the triad of "large models + massive computing power + big data" has become the foundational paradigm for next-generation artificial intelligence. Computing power, now as essential as utilities, has emerged as critical infrastructure. IDC data projects China's intelligent computing capacity will reach 2,781.9 EFLOPS by 2028.
Spurred by surging demand, China's AI server market more than doubled year-over-year in the first half of 2025. The industry nonetheless faces challenges on several fronts, from cost swings in the GPU and memory supply chains to evolving enterprise deployment requirements, all of which demand that computing infrastructure adapt.
This transformation has ushered in a three-phase development cycle for AI infrastructure: "short-term acceleration, medium-term expansion, and long-term structural upgrades." Beyond simply accumulating hardware, the priority now is to build coordinated systems and end-to-end services that put AI within reach of a much broader range of enterprises.
China's intelligent computing sector is experiencing robust growth across industries. The "AI Plus" initiative, consecutively featured in government work reports, has spurred multi-departmental policy frameworks aimed at deep industrial integration. Current large model development exhibits three evolutionary trajectories:
1. Depth: Exponential scaling from billion- to trillion-parameter architectures for enhanced knowledge representation
2. Breadth: Transition from unimodal processing to multimodal sensory data fusion
3. Length: Breakthroughs in extended-context coherent reasoning
AI advancement now follows two complementary pathways: pushing the ceiling of model capability through ever-larger parameters, computing power, and data, and raising the efficiency floor through algorithm and infrastructure co-design. Together, these approaches drive toward more efficient, more accessible AI. Market penetration is projected to leap from 8% in 2021 to 60% next year, and China's distinctive computing ecosystem is expanding rapidly. By 2029, the accelerated server market may exceed $140 billion, with more than 2 million AI servers shipped.
As computing power becomes the decisive factor for large model implementation, the industry confronts a triple challenge:
1. Accelerating GPU and model iterations requiring stronger infrastructure
2. Geopolitical tensions forcing strategic balancing between global and domestic GPU solutions
3. AI servers' high-load operations demanding unprecedented reliability
Zhou Tao, head of LENOVO GROUP's infrastructure division, observes that China's computing sector has moved from competing on scale to collaborating as a system, spanning technical integration, scenario-specific applications, and open ecosystems.
Enterprise demand is now shifting toward turning computing power into a driver of business growth. The industry's hardware-centric approach has created inefficiencies: fragmented chip scheduling, communication bottlenecks, training interruptions during scaling, and crude quantization strategies. To address this, LENOVO GROUP launched its "AI Factory" solution, a standardized system that turns disjointed development efforts into streamlined production lines for deployable AI agents and vertical models.
"Like historical factories that powered industrial revolutions, our AI Factory processes client data as raw material through intelligent development platforms to deliver finished AI products," explained Chen Zhenkuan, VP of LENOVO GROUP China Infrastructure.
This transformation requires robust foundational support. LENOVO GROUP has introduced new AI servers and upgraded its Heterogeneous Computing Platform 4.0, enhancing capabilities across pretraining, post-training, inference, and hyper-converged scenarios. The platform already achieves 12,000+ tokens/s throughput for local model deployment.
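For context on what a tokens-per-second figure means, the sketch below shows one common way to measure decode throughput for a locally deployed model: time a generation pass and divide the number of tokens produced by the elapsed wall-clock time. The function names, prompt, and stand-in generator are hypothetical placeholders, not LENOVO GROUP's benchmarking methodology.

# Minimal sketch: measuring decode throughput (tokens/s) for a local model.
# The generator below is a hypothetical stand-in so the script runs without a real backend.
import time

def measure_throughput(generate, prompt: str, max_new_tokens: int) -> float:
    """Time one generation pass and return generated tokens per second."""
    start = time.perf_counter()
    output_tokens = generate(prompt, max_new_tokens)  # expected to return a list of token ids
    elapsed = time.perf_counter() - start
    return len(output_tokens) / elapsed

def fake_generate(prompt, max_new_tokens):
    # Placeholder that "emits" tokens after a short delay, purely for illustration.
    time.sleep(0.01)
    return list(range(max_new_tokens))

if __name__ == "__main__":
    tps = measure_throughput(fake_generate, "Hello", 256)
    print(f"throughput: {tps:,.0f} tokens/s")

A real benchmark would average over many requests and typically report prefill and decode rates separately, but the division above is the core of the metric.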
"With decades of infrastructure expertise, we've built comprehensive capabilities from consulting to AI lifecycle management—qualifying us to deliver this turnkey solution that lowers SMEs' AI adoption barriers," noted Huang Shan, LENOVO GROUP's strategy director.
Having passed from explosive growth through a period of rationalization and into renewed momentum, AI is evolving toward autonomous expert systems. The coming infrastructure window will be defined by whether today's fragmented AI applications can achieve scale across industries.