To standardize AI services and applications, promote the healthy and orderly development of the industry, and safeguard the legitimate rights and interests of citizens, the Cyberspace Administration of China has recently issued a notice to deploy a four-month nationwide campaign titled "Clear and Bright: Rectifying Misconduct in AI Applications."
According to a relevant official from the administration, the campaign will be carried out in two phases. The first phase, "Clear and Bright: Special Governance of Typical Non-Compliant Issues in AI Application Services," will focus on rectifying failures to fulfill large model filing obligations, insufficient security review capabilities, safety problems in large model training data, AI data poisoning, and inadequate labeling of AI-generated content, with the aim of strengthening governance at the source of AI technology.
The second phase, "Clear and Bright: Rectifying Chaos in AI-Generated Information Content," will concentrate on addressing problems such as using AI to generate "digital junk," creating and disseminating false information, spreading violent or vulgar content, impersonating others, infringing on minors' rights, and engaging in astroturfing activities. The goal is to resolutely clean up illegal and harmful information and take legal action against non-compliant accounts, MCN agencies, and website platforms.
The first phase will target seven major types of issues:
1. Failure to fulfill large model filing obligations as stipulated by the Interim Measures for the Management of Generative Artificial Intelligence Services.
2. Insufficient security and content filtering capabilities of AI platforms, leading to the generation of illegal or harmful content.
3. Safety issues with large model training data, including the use of unauthorized or non-compliant data sources.
4. AI data poisoning, such as tampering with training data or using malicious marketing techniques like generative search engine optimization.
5. Inadequate labeling of AI-generated content, including failure to add labels or non-standard label placement.
6. Misuse of AI technology for illegal activities, such as cyberattacks or unauthorized impersonation.
7. Inadequate security management of open-source models, including lack of identity verification and risk mitigation mechanisms.
The second phase will address seven key problems:
1. Using AI to distort classical works or generate "digital junk" with low value or harmful tendencies.
2. Creating and spreading false information, including rumors about public policy, social issues, or emergencies.
3. Impersonating public figures or generating defamatory content without authorization.
4. Producing violent, vulgar, or sexually suggestive content.
5. Harming minors by generating inappropriate or misleading content targeting children.
6. Using AI to operate fake accounts or manipulate public opinion through astroturfing.
7. Providing non-compliant AI services, such as applications offering illegal functions or generating harmful content.
The official emphasized that local cyberspace authorities must fully recognize the importance of this campaign in promoting the standardized development of AI applications and protecting netizens' rights. They are urged to strictly implement territorial management responsibilities, guide platforms in self-inspection, and establish long-term governance mechanisms to ensure the campaign achieves tangible results.