The Ministry of Industry and Information Technology, along with nine other government departments, has jointly issued the "Measures for the Review and Service of Science and Technology Ethics in Artificial Intelligence (Trial)" (hereinafter the "Ethical Measures"). This represents a major institutional arrangement in China for balancing development with security, innovation with responsibility, and efficiency with fairness in artificial intelligence, and it provides clear ethical guidelines for the sound development of AI technology and the high-quality growth of the smart industry.
The "Ethical Measures" stipulate that all AI-related scientific and technological activities must integrate ethical requirements throughout the entire process. These activities should adhere to principles such as enhancing human well-being, respecting the right to life, protecting intellectual property, upholding fairness and justice, reasonably controlling risks, maintaining transparency, ensuring controllability and trustworthiness, and strengthening accountability.
AI is advancing rapidly, with new application scenarios emerging continuously, but this progress is accompanied by significant ethical risks that have raised public concern. For instance, algorithmic black boxes and algorithmic discrimination can exacerbate social inequality. Technologies such as deep synthesis and content manipulation can disrupt the public opinion environment. Data misuse and privacy violations directly infringe upon citizens' personal dignity. Highly autonomous decision-making systems blur the boundaries of accountability and challenge human agency. Some human-machine integration technologies, moreover, directly touch upon core boundaries related to life, health, and personal dignity.
These issues often stem from AI research and applications that lack scientific or social value and are driven solely by the pursuit of profit.
In addition to setting out general and simplified procedures for the ethical review of AI technology, the "Ethical Measures" establish an expert reassessment process. Whereas the general and simplified procedures can be conducted by an organization itself or delegated to another entity, expert reassessment applies stricter scrutiny to activities on the list of "AI scientific and technological activities requiring expert ethical reassessment," issued jointly by the Ministry of Industry and Information Technology and the Ministry of Science and Technology in collaboration with relevant departments. Activities on this list clearly carry higher risks and attract greater public attention.
The appendix to the "Ethical Measures" identifies three key areas requiring expert ethical reassessment. These areas are not only focal points for ethical review in AI but are also critical for preventing ethical lapses in AI technology and are of significant public concern.
First, strict reassessment is required for the research and development of human-machine integration systems that significantly affect human subjective behavior, psychological emotions, and life and health. Experts argue that technologies such as brain-computer interfaces, affective computing, neural modulation, and cognitive enhancement directly affect human perception, emotion, cognition, and even vital functions. If developed in a disorderly fashion, they could lead to severe consequences such as personality alienation, interference with autonomous consciousness, and harm to life and health. The fundamental ethical stance of AI development must be to uphold the principle that "humans are ends, not means," ensuring that technology always serves humanity rather than subverting human agency. A simple search for "brain-computer interface" and "emotion control" already yields numerous "application guides." Regardless of the technology's maturity, the consequences of machines controlling human emotions are unpredictable, and even with intentions "for good," negative outcomes cannot be ruled out. This is an aspect that AI ethics must take seriously.
Second, strict reassessment is required for the R&D of algorithmic models, applications, and systems capable of social mobilization and of guiding public consciousness, with the aim of preventing the public risks that arise from "algorithmic power." Deliberately configured algorithms can easily be used to create information cocoons, amplify social divisions, and manipulate public opinion. Generative AI, algorithmic recommendation, deep synthesis, and public opinion guidance systems already possess strong capabilities for information distribution, opinion shaping, and emotional agitation. The frequent mention of "internet water armies" clearly illustrates how serious the dangers of "algorithmic power" can be.
Third, strict reassessment is mandated for the development of highly autonomous automated decision-making systems. In scenarios involving personal safety and major public interests, such as transportation, healthcare, industry, emergency response, and public administration, highly autonomous systems plagued by algorithmic bias, logical flaws, or runaway operation could directly cause casualties, major property losses, and systemic risks. These fields are not "testing grounds" for AI development; real-world deployment must not proceed without extensive validation, because what is at stake is the safety of the life and property of unspecified members of the public, which is a matter of public safety.
In summary, as global competition in AI technology intensifies, the world is also entering a critical period for establishing systems to guard against AI's negative effects. The "Ethical Measures" fill a gap in China's specialized ethical review system for AI and clearly delineate "red lines" in these three key areas. These red lines must not be crossed, violations will face severe penalties, and this will also be a crucial direction for future regulation.