Artificial intelligence firm Anthropic PBC is providing technology companies with access to a more powerful, unreleased AI model to help prepare for potential cyberattacks that could arise from the widespread deployment of such advanced systems.
Anthropic announced on Tuesday the launch of a specialized initiative called "Project Glasswing," which includes participation from Amazon, Apple, Microsoft, Cisco, and other organizations. Participants will be able to use Anthropic’s new Mythos model to identify security vulnerabilities in their own products and share findings with industry peers.
The AI startup clarified that it currently has no plans to release Mythos to the public. Instead, it will use feedback from Project Glasswing to establish security guidelines for the technology.
This arrangement reflects growing concern among technology firms that more advanced AI models could be misused by criminals and state-sponsored hackers to uncover source code flaws and bypass cybersecurity defenses. AI tools have already been employed to assist in cyberattacks, including an incident where hackers used AI to infiltrate Mexican government systems.
Anthropic’s competitor OpenAI has also emphasized the improving cybersecurity capabilities of its models and launched pilot programs aimed at giving defenders "priority access" to its tools.
Newton Cheng, Head of Frontier Red Team Cybersecurity at Anthropic, stated, "We believe this is not just an issue for Anthropic but an industry-wide challenge that requires collaboration between private companies and governments. Through Project Glasswing, we aim to help cybersecurity defenders stay ahead of threats."
Anthropic noted that it has discussed Mythos and its security-related capabilities with U.S. officials but declined to specify which agencies were involved. Mr. Cheng mentioned that the company has previously collaborated with the Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST).
Although Mythos is a general-purpose AI model not specifically designed for cybersecurity applications, Mr. Cheng revealed that it has already identified multiple security risks. These include a 27-year-old vulnerability in critical internet software and a 16-year-old flaw in the code of a popular video application—a flaw that automated detection tools had scanned past more than 5 million times without flagging.
Diane Payne, Head of Research Product Management at Anthropic, indicated that the company has implemented safeguards to ensure that Project Glasswing participants' access to the Mythos model is strictly controlled. However, she declined to provide further details for security reasons.