Nanjing-Born Founder Builds Computing-Power Chips, Forges a 280-Billion-Yuan Giant

Deep News · 02-28

Moore Threads Technology Co., Ltd., often called China's answer to NVIDIA, has released its first earnings report since going public. On February 27, the company announced that its 2025 revenue more than doubled to 1.5 billion yuan, while its losses narrowed by approximately 37% year-on-year, indicating it is still in a phase of rapid growth.

The driving force behind the company is 60-year-old Zhang Jianzhong, a Nanjing native who serves as Chairman and CEO. Zhang spent 15 years at NVIDIA, where he was a key figure in helping Jensen Huang expand the company's presence in China. He founded Moore Threads in 2020 and, within six years, has built it into a chip giant with a market capitalization exceeding 280 billion yuan, leading a top-tier, in-house GPU team that is intensively tackling technical challenges.

The company's flagship product, the MTT S5000, has already entered mass production, and Zhang has three new strategic products slated for release this year.

**Primary Weapon** The standout of the company's results is its flagship product, the MTT S5000, launched in 2024. Built on the self-developed "Pinghu" architecture, it is a full-featured GPU computing card designed for large-model training, inference, and high-performance computing. It has reached mass production and is the company's main shipping hardware product.

In mid-February, Moore Threads publicly disclosed the MTT S5000's hardware specifications for the first time. Key figures include single-card dense AI computing power of up to 1,000 TFLOPS, 80GB of video memory, and support for computing precisions from FP8 through FP64. Industry insiders note that in practical tests the MTT S5000's performance is comparable to NVIDIA's H100, even surpassing it on some metrics in multimodal large-model fine-tuning tasks. Notably, the MTT S5000 is one of the first domestic GPUs to natively support FP8-precision training. As large-model parameter counts continue to grow, support for FP8 computing precision has become a core requirement for cutting-edge model training and inference.
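To see why FP8 support matters at these model scales, a rough back-of-the-envelope sketch helps (generic memory arithmetic, not tied to any Moore Threads API; the trillion-parameter figure echoes the cluster claim below):

```python
# Why FP8 matters for large models: halving bytes per weight relative to
# FP16 halves the memory and bandwidth needed just to hold the weights.
BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "FP8": 1}

def weight_footprint_gb(num_params: int, dtype: str) -> float:
    """Memory needed for model weights alone, in gigabytes."""
    return num_params * BYTES_PER_PARAM[dtype] / 1e9

# A hypothetical 1-trillion-parameter model:
params = 1_000_000_000_000
for dtype in ("FP32", "FP16", "FP8"):
    print(f"{dtype}: {weight_footprint_gb(params, dtype):,.0f} GB")
# FP32: 4,000 GB / FP16: 2,000 GB / FP8: 1,000 GB
```

Even at FP8, such a model's weights far exceed a single card's 80GB of memory, which is why large clusters, rather than individual cards, are the unit of competition.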

Ultimately, a chip's value is determined by its practical engineering application. Moore Threads states that the "Kua E" ten-thousand-card intelligent computing cluster, built using the MTT S5000, can efficiently support the training of trillion-parameter large models, with computational efficiency reaching that of international GPU clusters of the same generation and scale.

In a speech delivered in December 2025, Zhang Jianzhong expressed strong confidence. He mentioned that previously, most large model developers were hesitant to use domestic cards for training, fearing suboptimal results. He asserted that if a model was previously trained on Hopper architecture, switching to the S5000 would yield training results for large language models that are "only better, not worse."

Since February, Moore Threads has announced multiple collaborations, confirming that the MTT S5000 has been successfully adapted to several new domestic models, including those from Zhipu, MiniMax, and Alibaba's Qwen. This rapid pace of integration matters because the AI computing business built around the S5000 is the primary engine of the company's growth. From January to June 2025, AI computing-related business contributed over 90% of total revenue, making it the main source of both income and profit.

According to the prospectus, products in this segment command relatively high average prices. The intelligent computing clusters, for instance, have secured major orders from large clients: in the first half of last year, Zhang Jianzhong's team sold five sets at an average unit price of 110 million yuan. Until new products are ready, the MTT S5000 will remain the primary driver of the company's growth.
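The figures above can be cross-checked with simple arithmetic (the numbers come from the article; the comparison mixes first-half cluster sales with full-year revenue, so it is only indicative):

```python
# Cross-check of the figures quoted from the prospectus (amounts in yuan).
clusters_sold_h1 = 5
avg_unit_price = 110_000_000            # 110 million yuan per set
cluster_revenue_h1 = clusters_sold_h1 * avg_unit_price
print(cluster_revenue_h1)               # 550000000, i.e. 550 million yuan

full_year_revenue = 1_500_000_000       # ~1.5 billion yuan reported for 2025
share = cluster_revenue_h1 / full_year_revenue
# Note: H1 sales vs. full-year revenue, so this understates nothing precise,
# it just shows the order of magnitude.
print(f"Cluster sales alone: {share:.0%} of full-year revenue")
```

Five cluster sales alone account for roughly a third of the year's revenue, consistent with the claim that AI computing dominates the business.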

**Three New Strategic Products** Zhang Jianzhong is moving at a rapid pace in his effort to catch up with NVIDIA. Since its founding, Moore Threads has maintained a strict annual iteration cadence for its GPU architecture, launching new chips every year to keep pace with market changes.

Over two months ago, at the inaugural MUSA Developer Conference, Zhang took the stage for a two-and-a-half-hour presentation, comprehensively showcasing new additions to the company's "arsenal." The most anticipated reveals were three key initiatives: the next-generation GPU architecture codenamed "Huagang," and two chips built on it, "Huashan" and "Lushan." These represent the next step in Moore Threads' hardware evolution.

The "Huagang" architecture utilizes a new-generation instruction set, increasing computational density by 50% and improving energy efficiency tenfold, directly addressing the power consumption challenges of intelligent computing centers. It supports intelligent computing clusters scaling beyond one hundred thousand cards. Of the two new chips, "Huashan" focuses on integrated AI training and inference alongside ultra-large-scale intelligent computing, aiming to become a solid foundation for next-generation "AI factories." "Lushan" specializes in high-performance graphics rendering, boasting a 16x improvement in geometry processing performance and a 64x increase in AI computing performance.

Based on the company's previous release cadence, these three products are expected to reach the market around 2026. Unlike some other domestic GPU companies, Zhang Jianzhong has positioned Moore Threads to develop "full-featured GPUs": a "one chip, many capabilities" design in which a single GPU integrates AI computing acceleration, graphics rendering, physical simulation, and scientific computing, while supporting multiple computational precisions. This path serves broader application demands and offers competitive advantages in emerging AI directions such as the metaverse, world models, and embodied intelligence, though it poses greater research and development challenges.

The technical foundation supporting these full-featured GPUs is the MUSA unified architecture, painstakingly developed in-house by Zhang's team. MUSA integrates GPU hardware and software functions, encompassing a unified chip architecture, instruction set, programming model, runtime libraries, and driver framework. Now at version 5.0, it is considered mature. MUSA is not sold as a standalone product; it serves as the core technological anchor for Moore Threads' successive GPU generations.

**Building an Ecosystem** For domestic computing power to truly succeed, merely manufacturing chips is insufficient; the ecosystem must be strengthened. It is akin to building a highway: paving the road is just the beginning; it needs users and traffic to thrive. The MUSA architecture developed by Zhang Jianzhong is designed to provide standardized "road signs" and "traffic rules" for this computing power highway, enabling developers to run their large models smoothly.

MUSA offers two main advantages: strong compatibility and a rich set of tools. It features a "universal interface" that can easily connect with mainstream large models like GLM-5, MiniMax M2.5, and Kimi K2.5. It is also compatible with the mainstream GPU ecosystem dominated by NVIDIA, allowing developers to port their work without rewriting code or significantly modifying products.

Moore Threads also provides "ready-to-use tools," such as native FP8 acceleration and the SGLang-MUSA engine optimization, which developers can utilize directly, saving time and effort while reducing the difficulty of migration and deployment. For example, on February 11, shortly after Zhipu's GLM-5 was released, Moore Threads announced a "Day-0 adaptation" achieved with support from the MTT S5000.

The next step for Zhang Jianzhong involves building the domestic computing power ecosystem from the ground up. In his speeches, he has pointed out the need to cultivate talent. Using the Moore Academy as a platform, the company aims to build a developer growth system that integrates industry and education, establishing MUSA as an ecosystem hub. It is reported that nearly 200,000 developers have been involved through the "Domestic Computing Ecology and AI Education Co-construction Initiative," which brings cutting-edge technology and industrial practice into over 200 universities across China.

Zhang concurrently launched the MUSA Developer Program, designed to provide AI learners with computing power support and technical empowerment. "We will iteratively improve the MUSA architecture ecosystem and accelerate the mass production scaling of the MTT S5000 and its rapid adaptation with the domestic large model ecosystem," Moore Threads stated.

The prospectus mentions that management anticipates achieving profitability on a consolidated basis as early as 2027. Having reported a net loss exceeding 1 billion yuan in 2025, Moore Threads has a gap to close. Reaching profitability and successfully building a computing power ecosystem present multiple challenges that lie ahead for Zhang Jianzhong.

