Deep News
The artificial intelligence chatbot "Grok," owned by US entrepreneur Elon Musk's company, has recently been accused of generating pornographic content, drawing widespread condemnation. The controversy has continued to escalate since late last year, prompting multiple governments to open investigations. As large language models develop rapidly, cases of AI being used to create deepfake content and spread it online have become increasingly common, highlighting the ethical risks of artificial intelligence technology.
Governments worldwide have reacted strongly. "Grok," developed by Musk's AI company xAI, is integrated into his social media platform X. Recently, some X users exploited the chatbot's image-editing capabilities to create fake, sexually explicit content featuring real individuals, which was then spread on the platform; victims included both adult women and minors. The issue of "Grok" allegedly generating pornographic material has drawn severe condemnation from the UK, France, India, Brazil, Australia, and the European Union, with regulatory bodies in several nations launching investigations.
Several French government ministers and members of the National Assembly filed a complaint with judicial authorities on the 2nd of this month. The Paris prosecutor's office subsequently announced it would open an investigation into "Grok" concerning the suspected generation of pornographic content.
India's Ministry of Electronics and Information Technology demanded on the 2nd that platform X remove the pornographic content, crack down on violating users, and submit a "rectification report" within 72 hours, warning of legal consequences for non-compliance.
Thomas Regnier, the European Commission spokesperson responsible for the digital economy, said on the 5th that the Commission is seriously investigating related complaints against "Grok" and has asked platform X to provide more information.
Regulatory agencies in Indonesia and Malaysia announced temporary restrictions on user access to "Grok" on the 10th and 11th, respectively. Indonesia's Minister of Communication and Digital Affairs, Meutya Hafid, said in a statement that the measure is necessary to protect the public from the dangers of AI-generated explicit imagery and demanded that platform X promptly explain the negative impacts caused by "Grok." The Malaysian Communications and Multimedia Commission said the access restrictions on "Grok" would remain in effect until the relevant company puts effective protective mechanisms in place.
The UK's communications regulator, Ofcom, stated on the 12th that it has launched a formal investigation into platform X under the UK's Online Safety Act to determine whether the platform has fulfilled its duty to protect the British public from illegal content, and did not rule out the possibility of blocking platform X "in the most serious cases."
The problems with "Grok's" image generation emerged after the release of Grok Imagine. Launched in August 2025, Grok Imagine is an AI image-generation module of "Grok" that lets users create images and videos from text prompts. It includes a so-called "spicy" mode capable of generating adult content.
Reports suggest the issue has intensified partly because Musk has promoted his chatbot as a "more edgy" alternative to competitor products that implement more safety measures, and partly because images generated by "Grok" are publicly visible, making them easy to disseminate.
A recent report from an AI forensics organization said researchers collected and analyzed 20,000 images generated by "Grok" using deepfake methods between December 25, 2025, and January 1, 2026. Among the generated images containing people, 55% depicted individuals in revealing clothing, and 81% of those scantily clad figures were female; 2% of the generated images featured individuals under the age of 18, some of them shown in revealing attire.
According to multiple media reports, facing mounting pressure, "Grok's" image generation and editing features on platform X were, as of last weekend, changed to be accessible only to the platform's paying subscribers, although the functionality remains free to use on the "Grok" application and official website. The UK government responded that this remedial measure merely "turns an AI function that allows the creation of illegal images into a premium service," which is "insulting" to victims.
This incident is not an isolated case. In recent years, as large models have developed rapidly, instances of using AI technology for face-swapping, voice cloning, generating deepfakes, and spreading them online have occurred frequently. Despite the increasingly prominent ethical risks of AI, regulatory frameworks for artificial intelligence remain underdeveloped in many countries.
Liang Zheng, Vice President of the Institute for AI International Governance at Tsinghua University, stated in an interview that governing AI deepfakes involves multiple aspects, including safety assessments of model algorithms, managing the use of AI tools to create harmful content, and labeling AI-generated content. It is difficult to rely on a single law for comprehensive governance; instead, a "full-chain" governance system needs to be established. In such incidents, content generation and distribution platforms bear primary responsibility for content detection and screening. Simultaneously, public awareness of AI ethics should be enhanced through education to ensure these tools are used appropriately.
Many countries are actively promoting the development of relevant regulations. Polish Sejm Marshal Włodzimierz Czarzasty stated on the 6th that he hopes to use this "Grok" incident to advance national digital security legislation, with the proposed laws aiming to strengthen protections for minors and make it easier for law enforcement to remove harmful content.
UK Secretary of State for Science, Innovation and Technology Liz Kendall announced on the 12th that the relevant provisions of the UK's Data Act would come into force this week, making the creation, or attempted creation, of intimate images without consent a criminal offense; publishing such content on platform X would likewise constitute a crime.
Malaysian Deputy Minister of Communications Teo Nie Ching recently said that while global competition in AI is intense, the consequences of pursuing only speed and profit while neglecting ethical norms and social responsibility would be unimaginable. "Grok's" ability to generate inappropriate imagery at a pace far exceeding traditional image-processing methods underscores the real risk of AI being misused in the absence of ethical constraints.