When "Legal Use" Collides with Security: The Fight for Governance Over Military AI
It is rare to see a tech provider say "no" when there is up to $200 million at stake and the client is the country with the largest defense budget in the world. Yet, Anthropic did.
On February 26, 2026, its CEO, Dario Amodei, announced that the company "cannot, in good conscience," accept the U.S. Department of Defense's demand to remove safeguards from Claude. At the heart of the disagreement are two boundaries: banning the model's use in mass domestic surveillance and in fully autonomous lethal weapons. According to The Guardian, the Pentagon, under Defense Secretary Pete Hegseth, responded with an ultimatum: if Anthropic does not allow the use of Claude for "all lawful purposes" in classified environments, the contract may be canceled at 5:01 PM ET on February 27 and the company could be labeled a "supply chain risk," a designation typically associated with external threats, not a U.S. firm.
At face value, this seems like a clash of values. The actual mechanics are more uncomfortable: the government is attempting to convert the governance of a model into a contractual clause, while the company aims to make its security policies a condition of service. In between are the operators who need capabilities and the taxpayers who need assurances.
The Contract as a Battleground for Control of the Model
What matters is not only that a contract for up to $200 million exists. It’s the nature of the asset being purchased. A foundational model is not traditional software that is delivered and becomes "locked" in a version. It’s a system that updates, refines, integrates with data streams, and acquires new capabilities over time. In practice, buying AI today resembles contracting for critical infrastructure more than licensing a product.
This is why the Pentagon pushes for the formula of "all lawful purposes." From its perspective, any restriction beyond what the law requires amounts to a private provider imposing operational limits on the State. Undersecretary for Research and Engineering Emil Michael, quoted by The Guardian, framed the issue as a defense of civil liberties against decisions made by "Big Tech" and questioned whether a private company should set boundaries beyond democratically approved norms.
On the other hand, Anthropic argues that the proposed language would allow its safeguards to be ignored at will. An anonymous spokesperson said the latest draft received from the DoD showed "virtually no progress" and that its "commitment" was primarily legalistic. The company is not contesting military use in general; Amodei has even stated that he believes in the "existential importance" of using AI to defend the United States. The dispute concerns two applications where, according to its thesis, the model could "undermine democratic values."
This signals a strategic move in the market: the negotiation is no longer about price, performance, or support but about who holds the bypass key. If the client can deactivate controls “when needed,” safeguards become mere decoration. If the provider can block categories of use even in classified settings, the State feels it is outsourcing operational sovereignty.
The threat of labeling Anthropic a "supply chain risk" amplifies the standoff: it punishes not only this specific contract but also the future capability to sell to the government and be accepted by contractors like Boeing or Lockheed Martin, which have reportedly been asked to evaluate their exposure, according to the same coverage. It is a power tool that reshapes incentives throughout the entire sector.
Security Becomes Architecture, Not Just a Principle
From the outside, “safeguards” sounds like a corporate policy. In reality, it is product design and, by extension, power design. When a model prohibits mass surveillance or autonomous weapons, it essentially limits the scalability of certain uses. And that’s where friction emerges: AI lowers marginal costs and makes replicable what was previously expensive.
In surveillance, the change is drastic. The difference between manually analyzing scattered information and automating classification, prioritization, and correlation is the shift from a costly operation to a potentially ubiquitous capability. The same logic applies to lethal systems: automation reduces friction, speeds decision cycles, and blurs responsibility if there is no explicit design for human oversight.
The Pentagon insists it operates under laws like the Fourth Amendment, according to the cited arguments, and that it should not accept additional private limits. That point holds at the institutional level. The issue is that "legal" and "prudent" are not synonyms in systems that amplify capacity. A use may be legal and still create an operational precedent that is difficult to dismantle later.
Here lies the core of the political economy of AI: the discussion shifts from “what the model can do” to “what the surrounding system can achieve.” A model in a classified environment, integrated via platforms like Amazon and Palantir (mentioned in the story), is not a chatbot. It is a component in a decision chain. In that context, safeguards are less a moral stance and more a way to manage systemic risk.
What Anthropic is defending, at least in its public framing, is an idea of AI as augmented intelligence, not as a substitute for judgment. If a system is used for mass surveillance or delegating lethality, the human ceases to be a supervisor and becomes, at best, a late signer. That’s the boundary the company is trying to establish in the contract.
A Precedent that Reshapes the Military AI Supplier Market
The market had already begun consolidating a thesis: the Department of Defense does not want a supplier; it wants a catalog. Last summer, the Chief Digital & AI Office awarded contracts of up to $200 million to Anthropic, Google, xAI, and OpenAI to customize generative AI for military uses, according to The Guardian. The obvious idea is to avoid dependence on a single actor and accelerate capabilities.
But the detail that shifts the negotiation is that Anthropic, according to the same source, is the only model used in classified environments to date, giving it an advantage in experience and, therefore, negotiating power. This power is precisely what the Pentagon seeks to neutralize by standardizing the clause of “all lawful purposes” as a contractual baseline.
If that clause becomes the norm, the message for other providers is clear: anyone looking to enter classified work must accept that their internal policies cannot function as limits. The government does not need to promise to use the model for mass surveillance or autonomous weapons. It just needs to secure the right to do so if its legal interpretation allows it.
In parallel, a competitive avenue opens up. The same coverage mentions that xAI accepted the "all lawful purposes" standard for classified work. This creates an immediate financial risk for Anthropic: losing the contract means not only forgoing potential revenue but also ceding ground in a segment where credibility is built with real deployments, not promises.
There is also a cost for the Pentagon if it carries out the threat. Spokesperson Sean Parnell warned on X that ceasing to allow those uses would jeopardize operations and "warfighters," and that the department will not allow "any company to dictate the terms." This stance hardens negotiations but also raises the reputational cost of forgoing an already integrated supplier. In defense procurement, switching suppliers is rarely a clean transition, even though Amodei has indicated a willingness to support a "smooth transition" to another provider.
The less-discussed angle is industrial governance: if the U.S. labels a local company as a “supply chain risk” in a contractual dispute, the entire sector internalizes that non-compliance is punished as a threat. This accelerates market “discipline” but may disincentivize those who build robust security barriers by design.
The Defense Production Factor and the Expansion of State Power Over Models
The coverage also mentions that the Pentagon considered invoking the Defense Production Act (DPA) to force unrestricted access, despite legal doubts raised by AI policy experts. This possibility matters even if it is never executed. It signals that the State is willing to treat advanced models as strategic resources and push wartime instruments into the digital economy.
From a business perspective, the DPA changes the risk calculation. If the government can declare a system “essential” and force conditions, then the strategy of “selling to the public sector to stabilize income” ceases to be straightforward. The contract becomes more like an agreement with coercion capability. And that forces boards to ask what they are really buying: sales or exposure.
There is a contradiction that Amodei highlighted, and it is operationally relevant: at the same time the threat to declare Anthropic a supply chain risk looms, evaluations are underway to force access because the model would be essential for national security. That tension is not merely rhetorical. It reveals a State that wants the benefits of dependence without admitting it: access and control, without accepting the asymmetry that the provider also holds power.
The probable outcome is not an absolute win for either side. It is a contractual redesign that creates a gray area: safeguards on paper, exceptions under certain processes, internal audits, and language that preserves maneuverability for the DoD. The problem with the gray zone is that, in systems that scale, what is not technically guaranteed erodes over time.
What is clear, however, is the pattern: military AI enters a phase where the critical asset is no longer the model but the governance of its use. Whoever controls that lever controls the value.
The Inevitable Direction is Technical Governance, Not Just Clauses
As a futurist focused on impact, I see this dispute as a symptom of maturity. AI has ceased to be an “innovative” purchase and has become infrastructure. In infrastructure, the real discussion is about who defines limits, who audits exceptions, and who bears the cost when a “legal” use produces operational, political, or social harm.
C-level executives should read this case as a lesson on AI procurement in regulated sectors: security principles not integrated into architecture and processes end up being negotiated as contractual appendices, and appendices break under pressure. If security relies on goodwill, it does not scale.
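What "security integrated into architecture" could look like in practice can be sketched in a few lines. The example below is purely illustrative and hypothetical: the category names, the `PolicyGate` class, and the audit structure are assumptions for the sketch, not a description of Anthropic's or the DoD's actual systems. The point it demonstrates is that a safeguard enforced in code, with an append-only audit trail, cannot be waived by rewording a contract appendix.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class UseCategory(Enum):
    # Hypothetical use categories for illustration only.
    ANALYSIS = "analysis"
    MASS_SURVEILLANCE = "mass_surveillance"
    AUTONOMOUS_LETHALITY = "autonomous_lethality"


# Categories the provider refuses regardless of contract language.
PROHIBITED = {UseCategory.MASS_SURVEILLANCE, UseCategory.AUTONOMOUS_LETHALITY}


@dataclass
class AuditRecord:
    timestamp: str
    category: UseCategory
    decision: str


@dataclass
class PolicyGate:
    """Enforces use-category safeguards in code, not in a contractual appendix."""
    audit_log: list = field(default_factory=list)

    def authorize(self, category: UseCategory) -> bool:
        # Deny prohibited categories and record every decision for audit.
        decision = "denied" if category in PROHIBITED else "allowed"
        self.audit_log.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            category=category,
            decision=decision,
        ))
        return decision == "allowed"


gate = PolicyGate()
print(gate.authorize(UseCategory.ANALYSIS))           # True
print(gate.authorize(UseCategory.MASS_SURVEILLANCE))  # False
print(len(gate.audit_log))                            # 2: both decisions logged
```

The design choice worth noting is that the denial and the audit entry happen in the same code path: an exception "under certain processes" would have to be implemented, versioned, and itself logged, which is precisely what makes the safeguard auditable rather than a matter of goodwill.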
It is also a reminder of how digital convergence erodes decision monopolies. The government cannot assume total control over models built in the private sector, nor can companies pretend that internal policies replace democratic frameworks. The only stable exit is to design systems where augmented intelligence preserves verifiable human oversight and traceability.
This market is transitioning from digitalization to the disruption of governance: value will shift to those who convert safeguards into auditable engineering and reduce the cost of compliance without degrading capabilities. Technology must empower human judgment and democratize guarantees, not just accelerate operations.