Anthropic Abandons Signature Safety Commitment in Major Policy Shift

Deep News · 05:22

Anthropic, which has long positioned itself as a safer alternative to other AI competitors, is now relaxing its commitment to maintaining safety "guardrails." This represents one of the most dramatic policy reversals in the AI industry to date: a startup once dedicated to "benefiting humanity" is shifting its focus toward profitability and commercial success.

In its 2023 Responsible Scaling Policy, Anthropic stated it would delay certain research and development if AI advancements posed potential dangers. However, in a blog post published this Tuesday, the company announced updated rules: it will no longer commit to pausing development, on the grounds that it does not believe it holds a significant technical lead over competitors. Anthropic explained in its statement:

"The policy environment has shifted to prioritize AI competitiveness and economic growth, while safety-focused discussions have yet to gain substantial traction at the federal level."

A spokesperson for Anthropic commented, "From the beginning, we've said that the pace of AI development and the uncertainties in the field would require us to rapidly iterate and improve our policies."

Analysts suggest this move highlights how the lofty ideals that initially guided AI startups are increasingly clashing with profit pressures and competitive demands. Anthropic is competing against a range of strong rivals for dominance in this revolutionary technology, including OpenAI, Google, and Elon Musk's xAI.

Anthropic's CEO, Dario Amodei, previously worked at OpenAI but left in 2020, partly due to concerns that the company was prioritizing commercialization and speed over safety.

OpenAI originally operated as a nonprofit before transitioning to a more traditional for-profit model last year. It also updated its mission statement in 2024, removing the word "safely." Previously, its goal was to "ensure that artificial general intelligence benefits all of humanity safely."

Both OpenAI and Anthropic are now pushing for initial public offerings as early as this year, aiming to capitalize on investor interest in AI. Anthropic was recently valued at $380 billion, while OpenAI is raising funds at a valuation exceeding $850 billion.

Anthropic's policy update coincides with escalating tensions with the U.S. Department of Defense. The dispute centers on Anthropic's insistence on maintaining safety guardrails for its Claude AI tool. On Tuesday, the Pentagon threatened that if Anthropic does not accept government terms by Friday, it will invoke a Cold War-era law compelling the company to allow military use of its technology.

According to reports, during a meeting on Tuesday between Amodei and U.S. Defense Secretary Pete Hegseth, U.S. officials outlined potential consequences, including designating Anthropic as a supply chain risk and invoking the Defense Production Act to mandate the use of its AI software, even without the company's consent.

Earlier this month, senior safety researcher Mrinank Sharma announced his departure from Anthropic. In a letter to colleagues posted on X, he wrote, "I constantly reflect on the situation we are in. The world is in peril—not just from AI or biological weapons, but from a series of interconnected crises unfolding simultaneously."

Anthropic and OpenAI are not the only ones grappling with AI safety measures. Earlier this month, reports indicated that Elon Musk's SpaceX and its subsidiary xAI are participating in a secret new Pentagon bidding project to develop voice-controlled autonomous drone swarm technology. This marks a controversial shift, as Musk has previously opposed creating "new tools for killing." He even sued OpenAI, having expected the organization to remain nonprofit and committed to building safe AI for the public good.

Previous reports also noted that OpenAI supports Applied Intuition's proposal in the drone bidding project, though its involvement would be limited to the "mission control" component—translating battlefield commanders' voice commands and other instructions into digital orders.

Conflicts between Amodei and OpenAI CEO Sam Altman have sometimes surfaced publicly. Last week, at an AI summit in New Delhi, as Indian Prime Minister Narendra Modi and others joined hands on stage, the two CEOs, standing alongside them, declined to hold hands with each other.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, nor should any associated discussions, comments, or posts by the author or other users be considered as such. It is provided for general informational purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may seek professional advice before investing.
