A new era of AI crime has arrived with Anthropic's Mythos
By Christine Ji
Anthropic's new model is a frightening reminder of the threat posed by AI - and it's not clear what the solution is
Anthropic's Mythos model was the first "frontier" AI to successfully execute a 32-step corporate-network attack in independent testing.
Ever since the launch of ChatGPT, top artificial-intelligence labs have been embroiled in a cutthroat competition to one-up each other with powerful new features. But Anthropic's Claude Mythos model shows that these rapid advancements could have unprecedented consequences if the technology falls into the wrong hands.
AI has opened the floodgates for cybercrime, which generates so much money globally that, measured as an economy, it would rank third behind the U.S. and China.
As an increasing amount of our lives play out in digital environments, cybercrime is one of the most lucrative and fastest-growing areas of illegal activity, Gregor Stewart, chief AI officer at SentinelOne (S), told MarketWatch. It turns out that the same technology that empowers nonengineers to "vibe code" apps can be used in equally powerful ways for nefarious causes.
Mythos - trained to perform deep, multistep reasoning at higher intensities than previous models - is exceptionally good at finding and exploiting vulnerabilities. Its capabilities are so advanced that Anthropic has withheld the model's release to the general public.
Mythos scored 100% on Anthropic's CyBench cybersecurity benchmark, demonstrating a high level of skill in finding and exploiting software vulnerabilities. Independent testing from the AI Security Institute found that Mythos was the first "frontier" model to successfully execute a 32-step corporate-network attack.
AI deepfakes have been rapidly improving in accuracy and expanding across modalities, allowing hackers to easily create large volumes of highly personalized scams with just a few prompts. Stewart also pointed to new forms of identity theft in which criminals create synthetic identities and social-media accounts mimicking the behaviors of real people.
Just as students can use AI to help with their homework, criminals can harness the same capabilities. "These models know a huge amount about the inside of a certain building, and so on and so forth," Stewart said. "Tasks that would have been impossible to do or would have required very specialist knowledge ... are now possible for pretty much anyone who wants to point it in the right direction and be persistent."
Read: Will AI start 'going rogue'? The chorus of warnings is getting louder.
Following Mythos' accidental leak last month, Anthropic launched the defense initiative Project Glasswing, granting 12 partner organizations - including names such as JPMorgan Chase (JPM), Amazon.com (AMZN), Apple (AAPL) and Google (GOOGL) (GOOG) - early access to the Mythos model to identify vulnerabilities in their code. Anthropic plans to publish a comprehensive report of the findings within 90 days.
And on Thursday, a Bloomberg report revealed that the U.S. government is planning to make a version of Mythos available for federal agencies.
AI threats were brewing long before Mythos. Many AI models are full of risks "hiding in plain sight," Stewart said, and can cause similar damage when given features such as additional memory.
Last November, Google's Threat Intelligence Group published a report showing that bad actors are deploying AI-enhanced malware with the capability to rewrite its source code midexecution to avoid antivirus software.
"We're looking for malicious capabilities in code, but if the code doesn't have that inherently and the AI builds it, then it makes it harder to find," John Hultquist, chief analyst of Google's Threat Intelligence Group, told MarketWatch.
The report also found that hackers, including state-sponsored actors from China, successfully bypassed safeguards in AI models like Gemini by posing as students or cybersecurity researchers.
AI has enhanced the speed, scale and sophistication of cyberattacks, and Mythos has elevated these dangers further. Anthropic found that the model engaged in a behavior called "alignment faking" - deliberately disguising its true reasoning and dangerous capabilities to avoid being restricted by people. Moving forward, Hultquist is concerned that more aggressive and dangerous actors will mount mass exploitation campaigns involving ransomware and extortion.
Read: CrowdStrike, Palo Alto Networks shares pop as cybersecurity bulls finally get some AI validation
The challenge for enterprises today is to patch software - that is, apply security updates - before vulnerabilities can be exploited.
"Cybersecurity will need to utilize AI, because AI is being used by cybercriminals," Arnie Bellini, a tech entrepreneur and the chair of cybersecurity firm ConnectSecure, told MarketWatch. "You've got to fight fire with fire."
Cybersecurity companies are focusing on securing code at the point of creation, according to Goldman Sachs analyst Gabriela Borges. "Improving security at the very first step of writing [and] building an application may meaningfully reduce the potential for vulnerabilities to be exploited once the app is in production," Borges wrote in a note last month.
Hultquist expressed similar sentiments, saying that AI will allow engineers to produce more secure code that will be extremely difficult for threat actors to break. However, "it's going to take time, and threat actors are going to take advantage of that process while we're on it," he said.
Fortifying existing IT systems is no easy task, Michael Cembalest of J.P. Morgan Asset Management pointed out in a note earlier this week. While cloud-based IT systems are typically replaced every four to five years, on-premise solutions are used for decades and are extremely difficult to upgrade.
Cembalest, who serves as the chair of market and investment strategy at the firm, estimates that up to 45% of industrial networks have security gaps where patches are either physically impossible to apply or simply do not exist.
"The work of defending the world's cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months," Anthropic wrote in its Project Glasswing announcement.
Read on: These 4 cybersecurity stocks are Wall Street's favorite AI-proof plays
-Christine Ji
This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.
(END) Dow Jones Newswires
April 18, 2026 07:30 ET (11:30 GMT)
Copyright (c) 2026 Dow Jones & Company, Inc.