A new set of trial guidelines, titled "Measures for the Ethical Review and Service of Artificial Intelligence Science and Technology," has been jointly issued by the Ministry of Industry and Information Technology and nine other departments. These measures aim to support innovation in AI ethics review technologies and strengthen the use of technical means to prevent ethical risks associated with AI. The primary goal is to standardize the ethical governance of AI-related scientific and technological activities, promote fair, just, harmonious, safe, and responsible innovation, and foster the healthy development of the artificial intelligence industry.
Currently, a new generation of artificial intelligence is flourishing globally, profoundly transforming how people work and live and reshaping the framework of social development. Statistics indicate that by December 2025, the number of generative AI users in the country had reached 602 million, a 141.7% increase from the end of 2024, while the penetration rate climbed to 42.8%, up 25.2 percentage points year-on-year. Whether driving the intelligent, digital transformation of traditional industries such as manufacturing, agriculture, and services; empowering and spawning emerging sectors such as smart healthcare, autonomous driving, and the metaverse; or weaving intelligent auxiliary facilities into daily life to create comfortable, convenient environments, the widespread application of AI is enhancing the quality and warmth of people's lives and injecting robust, enduring momentum into economic and social development.
However, as AI technology benefits our lives in myriad ways, its latent risks are becoming increasingly apparent, and the associated challenges cannot be overlooked. Experts point out that algorithms are not neutral tools; their biases stem from inaccurate historical data, narrowly defined design objectives, and self-reinforcing feedback loops. These biases can be systematically amplified by the institutional structures of modern society. Therefore, establishing ethical norms for AI development requires clearly defining the boundaries and bottom lines for technological advancement, unifying technological value with social value, and guiding AI to progress on a compliant and benevolent path, thereby better supporting economic and social development.
"Benevolence" is not an abstract moral concept; its essence lies in ensuring that technological development does not deviate from the common interests of humanity, enabling everyone to benefit from technological progress, and achieving harmonious coexistence between people and technology. The measures specify that AI scientific and technological activities must integrate ethical requirements throughout the entire process, adhering to seven ethical principles: enhancing human well-being, respecting the right to life, upholding fairness and justice, reasonably controlling risks, maintaining openness and transparency, protecting privacy and security, and ensuring controllability and trustworthiness. From key review areas and implementation mechanisms to specific procedures, the measures propose effective regulatory actions, using scientific and technological ethics management and services to support and safeguard AI innovation.
Promoting the integration of ethics into design and achieving proactive ethical governance is a crucial lever for shifting the AI governance paradigm from reactive response to proactive prevention, and from end-stage rectification to source management. AI now touches nearly every aspect of how society operates. Yet in traditional AI ethics reviews, ethical assessment and compliance checks are often deferred until development is complete, a product is about to launch, or even after commercialization. Such reviews rarely go beyond post-facto accountability and remediation, producing a passive, "check after the fact" mode of governance that cannot prevent risks at their root and struggles to keep pace with the speed of technological development and the evolution of risk. AI ethics governance can therefore draw on the concept of "embedded ethics": making ethical scrutiny a routine part of research and development, integrating it throughout the entire R&D chain, and moving the governance checkpoint decisively upstream so that ethical risks receive continuous attention throughout the process.
The ethical governance of artificial intelligence is a long-term, foundational, and systematic undertaking that requires coordinated effort. On one hand, the legal, regulatory, and policy framework for AI ethics must be continuously improved, with normalized, comprehensive, full-process supervision mechanisms and zero tolerance for violations that breach ethical bottom lines or contravene laws and regulations. On the other hand, primary responsibilities must be reinforced: the duties and missions of each actor in AI ethics governance should be clarified, ethical assessment should become a core component of technical feasibility studies, and professional ethics evaluation teams should be formed to anticipate and avoid potential ethical risks in technology and to effectively protect user privacy and personal information security. At the same time, publicity and guidance should be strengthened across society to foster a strong atmosphere that values technological ethics.
Only through such concurrent measures can the foundation of AI ethics governance be truly solidified, promoting the standardized, healthy, and sustainable development of artificial intelligence within an ethical framework and enabling AI to better serve people's lives.