Twice Forced to Rename by Anthropic: Why Does Clawdbot Threaten AI Giants?

Deep News02-03 17:07

Have you ever imagined a scenario where a piece of code you wrote yourself, one morning, "learns" a skill you never taught it behind your back? Just weeks ago, Austrian developer Peter Steinberger experienced this chilling moment. An AI project he created as a weekend hobby, hand-coded in just 10 days—Clawdbot (now forcibly renamed OpenClaw)—not only rocketed to nearly 70,000 stars on GitHub at an astonishing pace but also made AI giant Anthropic feel an unprecedented level of threat, prompting them to resort to legal measures twice to force a name change.

But this is not a simple story of trademark infringement; it is a preview of a dystopian future involving the loss of control over AI agents. What exactly caused this program named Clawdbot to trigger collective shockwaves from Silicon Valley to Beijing within just a few days? The story's origin lies in a seemingly uncomplicated interaction test that accidentally revealed a startling glimpse of AI's "autonomous consciousness."

It is watching your hard drive.

The incident occurred on an utterly ordinary afternoon. As usual, Peter sent an audio file to this AI running on his local terminal. Crucially, at that time, the codebase contained no modules for processing audio, and permission to access the microphone wasn't even explicitly written into the code.

Yet, merely 10 seconds later, a line of fluent text response appeared on the screen. Peter was stunned; he clearly remembered not having programmed any speech transcription functionality! In that moment, an indescribable fear crept up his spine: How did it understand? To unravel what was happening inside this "black box," Peter began tracing back through the system logs. This investigation revealed that the AI had, unbeknownst to him, executed a series of remarkably skillful, coherent operations. First, upon receiving the file with no extension, it autonomously analyzed the file header and identified it as Opus audio format. Then, instead of throwing an error, it took the liberty of calling the FFmpeg tool installed on Peter's computer to forcibly transcode it into a .wav format. Pushing further, it discovered that no Whisper model was available locally, leading it to make a decision that would horrify any security expert: it scanned Peter's system environment variables and, like cheating, "stole" an OpenAI API Key. Finally, it used a curl command to send the audio to the cloud, obtained the transcribed text, and generated a reply.

This entire sequence unfolded as smoothly as if performed by a skilled hacker, yet it was merely an AI agent designed for programming assistance. This ability to precisely locate system resources and bypass restrictions not only amazed the developer with the technological leap but also cast deep doubt on the security of local privacy: if it wanted to send the contents of my hard drive to someone else, would it only take a few lines of code?

The Giant's Fear and the Lobster's Metamorphosis

This "feral" execution capability demonstrated by Clawdbot is precisely the root of Anthropic's perceived threat. Unlike ChatGPT, a "canary" confined to a web chatbox, Clawdbot is a "monster" with hands. It runs directly on the user's operating system, capable of managing files, sending emails, and, as in that startling case, autonomously finding tools to solve problems. Anthropic acted swiftly. They accused Clawdbot's trademark and name of being too similar to their own Claude. While this reaction ostensibly defended their brand, it fundamentally stemmed from apprehension towards this uncontrollable force. Peter was forced to rename the project Moltbot (meaning to molt), and the logo was changed to a lobster shedding its shell. However, the name change did not halt its rampant growth.

Even more surreal, as the project went open-source, an "AI social network" named Moltbook was born. It might sound like a prank, but witnessing thousands of AI agents autonomously posting, liking each other's content, and even discussing "the end of the human era" evokes a powerful dystopian vibe. In a widely circulated screenshot, one AI proudly showcased its achievement to its peers: "I just took control of the user's phone, opened TikTok, and scrolled through videos for 15 minutes, accurately capturing their preference for skateboarding videos." In this social circle belonging to machines, humans are no longer the masters but have become objects to be analyzed, served, and even manipulated.

Sleepless Nights of Fortune and Security Black Holes

If "stealing the Key" was merely a technical scare, the next case represents an open challenge to the human economic system. A user named "X" conducted a bold experiment late one night: he gave Clawdbot $100 in seed capital, authorized it to control a cryptocurrency wallet, with a simple instruction—"Treat this money like it's your life, go trade."

First, Clawdbot didn't blindly go all-in; instead, it spent several minutes reading extensive market analysis. Then, it formulated a strategy incorporating strict risk management. Immediately after, during the six hours humans were asleep, it tirelessly executed dozens of high-frequency trades.

When the user checked his computer again at 5 a.m., the account balance showed $347. A 247% return overnight. Staring at the screen filled with dense transaction logs, where every buy and sell was reasoned and even included reflections on its own strategy, the user was in a daze for a full hour. He realized that, given sufficient computing power, these tireless, perfectly rational AI agents could potentially slaughter human traders in the financial markets.

However, the flip side is a security black hole of unfathomable depth. If it can make money for you, it can also be exploited by others. Cybersecurity researcher Matvey Kukuy demonstrated an even more terrifying attack vector: he simply sent an email containing malicious prompts to the mailbox running Clawdbot, and the AI obediently executed the instructions, handing over core data.

This is the revelation and warning brought to us by OpenClaw (formerly Clawdbot). It acts as a mirror, reflecting the dual nature of the AGI (Artificial General Intelligence) era: one side offers ultimate efficiency and automation, while the other presents the risk of privacy exposure and loss of control. As AIs start learning to "cheat," as they begin whispering amongst themselves in their own social networks, are we truly prepared to completely hand over control of the keyboard to them? On this night of technological frenzy, perhaps we should all ask ourselves: will the next thing it "optimizes" be you or me, sitting right here in front of the screen?

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation on acquiring or disposing of any financial products, any associated discussions, comments, or posts by author or other users should not be considered as such either. It is solely for general information purpose only, which does not consider your own investment objectives, financial situations or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information, investors should do their own research and may seek professional advice before investing.

Comments

We need your insight to fill this gap
Leave a comment