On April 23, Tencent officially launched and open-sourced Hy3 Preview (Hunyuan 3.0 Preview). This model is a Mixture-of-Experts (MoE) language model that integrates fast and slow thinking. It features a total of 295 billion parameters, with 21 billion activated parameters, and supports a context length of up to 256K tokens.
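The gap between 295B total and 21B activated parameters is the defining trait of a Mixture-of-Experts model: a router selects only a few experts per token, so most parameters sit idle on any given forward pass. The sketch below is a generic, toy illustration of that routing pattern, not Tencent's actual implementation; the expert count, top-k value, and dimensions are all hypothetical.

```python
import numpy as np

# Generic Mixture-of-Experts routing sketch (illustrative only, NOT Hy3's
# real architecture): a router scores all experts per token, but only the
# top-k experts run, so only a fraction of total parameters is "activated".

rng = np.random.default_rng(0)

NUM_EXPERTS = 16   # hypothetical expert count
TOP_K = 2          # experts activated per token
D_MODEL = 64       # hidden size (toy value)

# Each expert is a small feed-forward layer (toy: one weight matrix).
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02
           for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only."""
    logits = x @ router_w                # score every expert
    top = np.argsort(logits)[-TOP_K:]    # pick the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the selected experts
    # Weighted sum of the selected experts' outputs; the remaining
    # NUM_EXPERTS - TOP_K experts are never evaluated for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape)                                         # (64,)
print(f"activated fraction: {TOP_K / NUM_EXPERTS:.2%}")  # 12.50%
```

In this toy setup 2 of 16 experts fire per token (12.5% of expert parameters); Hy3 Preview's reported 21B-of-295B split implies an activation ratio of roughly 7%, which is what lets a very large model keep per-token inference cost modest.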
Hy3 Preview's strategy is not to chase an ever-larger parameter count but to position the model as a balance of performance and cost-effectiveness, aiming to be an optimal choice for practical deployment across most business scenarios. Tencent believes the roughly 300-billion-parameter scale is a sweet spot between capability and efficiency: at this level, abilities such as complex reasoning, long-context understanding, and instruction following are fully realized, while the marginal gains from further scaling diminish sharply, with a doubled investment often yielding only single-digit percentage improvements.
Beyond capabilities for everyday conversation, Hy3 Preview has focused on enhancing its performance in coding, agent intelligence, instruction following, and contextual understanding. It is already deployed in numerous internal Tencent products. The launch of Hy3 represents a strategic recalibration for Tencent in the evolving AI landscape.
Over recent months, Tencent has undertaken significant organizational upgrades and workflow restructuring for its Hunyuan large model team. In February, the company re-established its core large model research infrastructure, encompassing pre-training and reinforcement learning, while also focusing on improving data quality. Tencent established three guiding principles for model development: emphasizing well-rounded capabilities over specialized strengths, pursuing authentic evaluation metrics beyond easily manipulated public leaderboards, and prioritizing cost-effectiveness.
Hy3 Preview is not only the first major model released after a complete rebuild of the Hunyuan development pipeline but also marks the first major achievement since Yao Shunyu joined Tencent as Chief AI Scientist and head of the AI Infrastructure and Large Language Model departments. Training for Hy3 Preview began in late January 2026, with the entire process from training to launch taking less than three months. Internally, this is viewed as the beginning of the Hunyuan model's journey to solve real-world problems.
Yao Shunyu stated that Hy3 Preview is the first step in the Hunyuan model's rebuild. Tencent hopes that this open-source release will gather genuine feedback from the community and users to enhance the practicality of the final Hy3 version. The company is also scaling up pre-training and reinforcement learning to push the model's intelligence ceiling higher. Through deep co-design with various Tencent products, the aim is to continuously improve the model's overall performance in real-world scenarios and begin exploring specialized capabilities. During development, the Hunyuan team collaborated closely with the Yuanbao product team on co-design.
The Hunyuan team believes model evaluation should not rely solely on leaderboard scores but on adaptability within complex capability systems and performance in actual business scenarios. Consequently, the team developed over 50 internal benchmarks to assess the model's practical utility and deployability, while also ensuring the model evolves through tight integration with Tencent's internal business applications.
The release of Hy3 Preview signals an acceleration in Hunyuan's development. With new infrastructure and technical philosophies in place, even larger-scale models are already in the pipeline. As AI competition intensifies, the focus has shifted to how effectively models perform within complete workflows, that is, their ability to "execute tasks." This explains the emphasis on enhancing coding, agent intelligence, instruction following, and contextual understanding in Hy3 Preview.
To validate Hy3 Preview's practical utility, the Hunyuan team conducted internal user evaluations covering coding and general workflow scenarios. Data provided by Tencent indicates that Hy3 Preview achieved an overall win rate of approximately 55% to 56% in user blind tests. The model has been integrated into internal AI Agent products like CodeBuddy and WorkBuddy. Tencent's data shows that on these platforms, Hy3 Preview reduced first-token latency by 54%, decreased end-to-end response time by 47%, and increased success rates to over 99.99%.
In real user environments, Hy3 Preview has stably powered complex Agent workflows of up to 495 steps, covering diverse office scenarios such as document processing, data analysis, knowledge retrieval, and MCP toolchain orchestration. Tang Daosheng, Tencent's Senior Executive Vice President and CEO of the Cloud and Smart Industries Group, stated publicly in March that the application paradigm for AI is transitioning from "Chatbot" to "AI Agent." He emphasized that AI implementation is not just an algorithmic challenge but an engineering one. As the capability gap between leading models narrows, the key differentiator for companies is no longer merely "which model is stronger," but who can better engineer the application of these models.
Tencent is evidently trying to demonstrate that even without the single strongest model, a stable foundation, extensive interfaces, and superior engineering capability can win in the era of AI Agents. The release of Hy3 Preview signals that Tencent is focused not on the myth of ever-larger parameter counts but on leveraging its vast social and tooling ecosystem to refine capability at the 300B-parameter scale. How far this strategic tempo carries Tencent in the next phase of the AI race will depend on whether the final Hy3 version can convert this groundwork into tangible, practical breakthroughs.