FOURTH PARADIGM recently announced that its "ModelHub XC" has completed adaptation and certification of 108 mainstream AI models on GPUs from Moore Threads Technology Co., Ltd. The models cover tasks including text generation, visual understanding, and multimodal Q&A, with plans to expand to thousands of models over the next six months, further strengthening the domestic computing ecosystem. Notably, Moore Threads hardware showed clear advantages on quantized models during this batch adaptation: its GPUs combine hardware-level support for low-precision data types with optimized instruction sets and cache mechanisms to reduce memory usage and accelerate inference, keeping practical deployments both efficient and energy-saving. Through careful calibration and optimization, the adapted models deliver higher performance while preserving the inference accuracy required for commercial applications.
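For readers unfamiliar with the calibration step mentioned above, the following is a minimal sketch of post-training static quantization using PyTorch's standard workflow. The toy model, the random calibration data, and the "fbgemm" backend are illustrative assumptions only; they are not part of the ModelHub XC or Moore Threads toolchain described in this article.

```python
# Sketch of post-training static quantization with a calibration pass (PyTorch).
# All names below are generic/illustrative, not vendor-specific.
import torch
import torch.nn as nn


class TinyNet(nn.Module):
    """Toy FP32 model standing in for an adapted inference model."""

    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()      # FP32 -> INT8 boundary
        self.fc1 = nn.Linear(128, 256)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(256, 10)
        self.dequant = torch.ao.quantization.DeQuantStub()  # INT8 -> FP32 boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)


model = TinyNet().eval()

# Pick a quantization config; "fbgemm" targets x86 CPUs and is only a stand-in
# for whatever low-precision backend a given GPU stack exposes.
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
prepared = torch.ao.quantization.prepare(model)

# Calibration: run representative inputs so observers record activation ranges,
# which determine the INT8 scale/zero-point values.
with torch.no_grad():
    for _ in range(32):
        prepared(torch.randn(8, 128))

# Convert to an INT8 model: weights shrink roughly 4x and inference uses
# integer kernels, which is the memory/speed trade-off described above.
quantized = torch.ao.quantization.convert(prepared)
print(quantized)
```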
Moore Threads Technology Co., Ltd. officially launched its STAR Market IPO on November 24, with an issue price of 114.28 yuan per share, a new high for A-share IPO prices in 2025. As AI inference efficiency becomes a critical challenge for industrial adoption, running models stably and efficiently on domestic chips has emerged as a key factor in advancing the computing ecosystem. In response, FOURTH PARADIGM has focused on improving model compatibility and operational efficiency on domestic hardware through its proprietary EngineX engine technology, significantly lowering deployment barriers for developers.
Currently, ModelHub XC has completed adaptation and validation of multiple model series, including Meta, Qwen, DeepSeek, Hunyuan, and Open Sora, on Moore Threads GPUs. The EngineX engine serves as the underlying framework, enabling "engine-driven, plug-and-play multi-model deployment" and addressing bottlenecks in model compatibility and scalability on domestic chips.
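To make "plug-and-play multi-model deployment" concrete, here is a hypothetical sketch of the general pattern: heterogeneous models registered behind a single serving interface. None of these names (InferenceBackend, ModelRegistry, EchoBackend) come from EngineX or ModelHub XC; they only illustrate the idea under stated assumptions.

```python
# Hypothetical pattern for multi-model serving behind one engine-style entry point.
from typing import Dict, Protocol


class InferenceBackend(Protocol):
    """Common interface every registered model backend must satisfy."""

    def generate(self, prompt: str) -> str: ...


class EchoBackend:
    """Stand-in backend; a real one would wrap a GPU runtime for a specific model."""

    def __init__(self, name: str):
        self.name = name

    def generate(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"


class ModelRegistry:
    """Maps model identifiers to loaded backends, so new models plug in uniformly."""

    def __init__(self):
        self._models: Dict[str, InferenceBackend] = {}

    def register(self, model_id: str, backend: InferenceBackend) -> None:
        self._models[model_id] = backend

    def generate(self, model_id: str, prompt: str) -> str:
        if model_id not in self._models:
            raise KeyError(f"model '{model_id}' is not registered")
        return self._models[model_id].generate(prompt)


registry = ModelRegistry()
for model_id in ("qwen-7b", "deepseek-chat", "open-sora"):
    registry.register(model_id, EchoBackend(model_id))

print(registry.generate("qwen-7b", "Summarize this report."))
```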
About ModelHub XC: ModelHub XC is an AI model and tool platform designed for the domestic computing ecosystem, integrating community and service functions. It aims to drive AI innovation and adoption on local hardware, offering end-to-end solutions spanning model training, inference, and deployment.