Anthropic CEO: Claude is not suitable for military applications; seeks more explicit contract terms

Chain News ABMedia

Anthropic CEO Dario Amodei was interviewed by The Economist editor Zanny Minton Beddoes, explaining why Anthropic opposes certain Pentagon contract clauses. He believes Anthropic’s stance is not only based on democratic values but also concerns about whether existing AI models are reliable enough for military defense and who should ultimately control them. Amodei also expressed a desire to open dialogue with the government to discuss clearer AI regulations.

Amodei: Claude Has Not Yet Achieved Fully Autonomous Military Capabilities

After Anthropic barred the use of its models for mass surveillance and fully autonomous weapons, the Trump administration moved to label the company an AI risk provider in the U.S. Amodei stated that Anthropic’s main model, Claude, is not yet ready for fully autonomous military applications, and that as a private AI provider, explaining proper use to government clients is challenging but necessary for safety.

He clarified that Anthropic has a limited-scope contract with the Pentagon, which has so far been problem-free. However, they are concerned about future use cases. Fully autonomous military systems could enable AI to control millions of drones without adhering to traditional military protocols, and if not properly regulated, such applications could become uncontrollable.

How Does Anthropic Justify Limiting Weapon Use?

In the interview with The Economist, Amodei explained that restricting Claude from commanding autonomous weapons is based on two reasons: insufficient reliability and lack of human oversight frameworks (Video 0:00-0:18, 5:48-5:58).

Insufficient Reliability: Amodei believes Claude is not yet capable of supporting fully autonomous military scenarios, comparing the company’s position to an aircraft manufacturer declaring certain flight maneuvers unsafe (Video 0:05-0:49).

Lack of Human Oversight: He worries that without human oversight, a single person could control millions of weapons, bypassing traditional military accountability (Video 5:48-6:55).

How to Address Disagreements on Military AI?

Amodei expressed hope that Anthropic and the U.S. government can work together more carefully to handle disagreements over AI applications. He emphasized that AI providers and the government should discuss the reliability and governance of existing models to mitigate potential risks. He believes AI companies have a responsibility to clarify whether their models are mature enough for high-risk applications.

He proposed initially signing a limited agreement covering agreed-upon use cases, allowing both sides to establish operational standards before expanding to more controversial tasks. Ongoing discussion is needed to find consensus and balance: preventing private companies from amassing excessive power while ensuring government authority remains open to challenge.

The name “Anthropic” derives from the Greek root anthropos, “human,” reflecting the founders’ original intention: to create a human-centered AI company. This commitment has made Amodei one of the few CEOs willing to openly challenge the U.S. government and directly accuse China of stealing Claude technology. How Anthropic balances its principles with national security concerns will be a critical turning point in AI militarization. The world is watching how this standoff over technological sovereignty and ethical boundaries will unfold.

This article, “Anthropic CEO: Claude Not Ready for Military Use, Seeks Clearer Contract Terms,” first appeared on Chain News ABMedia.
