Anthropic eases AI safety rules for competitive edge
- by Business Line
- Feb 25, 2026
Anthropic in 2023 said in its Responsible Scaling Policy that it would delay AI development that might be dangerous. | Photo Credit: Dado Ruvic
Anthropic PBC, known for its commitment to artificial intelligence safeguards, has loosened its central safety policy, saying the move is necessary to keep pace in a rapidly changing field.
The company in 2023 said in its Responsible Scaling Policy that it would delay AI development that might be dangerous. In a Tuesday blog post, Anthropic said it was updating its rules to say it would no longer do so if it believes it lacks a significant lead over a competitor.
“The policy environment has shifted toward prioritising AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level,” Anthropic said in its post.
Recently valued at $380 billion, Anthropic is racing OpenAI, Alphabet Inc’s Google and Elon Musk’s xAI Corp for dominance in what many view as a revolutionary new technology.
“From the beginning, we’ve said the pace of AI and uncertainties in the field would require us to rapidly iterate and improve the policy,” an Anthropic spokeswoman said.
The updated policy, which was earlier reported by Time, coincides with a growing dispute with the US Defense Department over Anthropic’s insistence on guardrails for use of its Claude AI tool. The Pentagon on Tuesday threatened to invoke a Cold War-era law to compel Anthropic to allow the US military to use the startup’s technology.
Anthropic is also making a bigger push into the legal industry, recently announcing partnerships with LegalZoom, Harvey and Intapp that connect those companies' legal resources with Claude.
More stories like this are available on bloomberg.com