White House Officials Discuss Assessing AI Models That Pose Security Risks -- WSJ

Dow Jones 05:18

By Amrith Ramkumar, Robert McMillan and Katherine Blunt

The White House is weighing a new government review process for artificial-intelligence tools it deems to pose cybersecurity risks, a move that could further expand its oversight of AI in response to Anthropic's powerful Mythos model.

The White House is considering a cybersecurity-focused executive order that could include formalizing a government oversight group to create standards for the most powerful AI models, such as Mythos, people familiar with the discussions said. The goal is to protect consumers and businesses from cyberattacks and other disruptions caused by the premature release of such models, and a range of ideas are being considered, the people said.

The internal conversations show how Mythos has forced the Trump administration to recalibrate aspects of its laissez-faire approach to AI oversight. The administration has unwound Biden administration efforts to implement safety standards and attacked states trying to impose regulations, hoping to ease constraints tech companies face in rolling out new models.

In recent weeks, National Cyber Director Sean Cairncross has convened other administration officials and tech industry leaders as part of the administration's response to Mythos. Vice President JD Vance and Treasury Secretary Scott Bessent have been in meetings on the subject, and Bessent has warned financial-industry executives about the risks such models could pose.

The New York Times first reported on the potential executive order.

Anthropic is working with the White House on the rollout of Mythos, its next-generation large language model, because the model is so good at finding cybersecurity vulnerabilities that it poses national-security risks. The company initially released it to about 50 companies and organizations managing critical infrastructure. White House officials oppose a plan from Anthropic to expand that number to roughly 120, highlighting the government's increasing role in the AI industry, The Wall Street Journal has previously reported.

Anthropic and the Trump administration have been feuding for months, complicating the rollout of Mythos. Other top AI developers including ChatGPT maker OpenAI and Google are putting forward or developing models that could pose cyber risks in the weeks and months ahead. All of the companies are also working with the Pentagon in different capacities, raising more concerns among some AI safety advocates about appropriate guardrails for using the technology safely.

The Biden administration's focus on guardrails and safety prompted some venture capitalists including David Sacks and Marc Andreessen to support President Trump. Sacks is now an AI adviser and often speaks about the dangers of overregulation.

Mythos has challenged some aspects of Trump's approach. Last month, Anthropic said the model was too dangerous to be publicly released, so it limited the rollout to about 50 important entities and discussed its plans with the administration.

The Mythos release set off anxiety among government employees and technology executives who fear it will usher in a new era of AI-fueled bug finding, a Bugmageddon. Corporations and government agencies scrambling to triage, test and install bug fixes would be pitted against hackers increasingly able to find and exploit those vulnerabilities in cyberattacks.

The discussions are also increasing scrutiny of government agencies that could be tasked with setting standards and procedures for the private sector. One such agency is called the Center for AI Standards and Innovation. It sits within the Commerce Department and was briefly led by a former researcher from Anthropic last month before the Trump administration changed course and hired an official from the president's first term instead.

The agency was called the AI Safety Institute when it was created under former President Joe Biden. Its leader from that era now works at Anthropic.

Write to Amrith Ramkumar at amrith.ramkumar@wsj.com, Robert McMillan at robert.mcmillan@wsj.com and Katherine Blunt at katherine.blunt@wsj.com

 

(END) Dow Jones Newswires

May 04, 2026 17:18 ET (21:18 GMT)

Copyright (c) 2026 Dow Jones & Company, Inc.
