Daily headlines offer new variations on the same theme: artificial intelligence is too powerful to be left unregulated. Lawmakers, guardedly seconded by big tech CEOs, warn of catastrophe. Agencies draft frameworks. Editorial boards demand ethical guardrails. We are told that only government can ensure that AI is “safe,” “aligned,” and deployed responsibly. Yet, in the same week, the Pentagon reportedly moved to blacklist one of the country’s most prominent AI firms for refusing to remove certain ethical constraints from its flagship system. The contradiction is not subtle. It is fundamental.
Anthropic, the San Francisco–based AI company founded by Dario and Daniela Amodei, has built its reputation on what it calls “AI safety” and “constitutional AI.” Its Claude line of large language models competes at the frontier of capability with systems from OpenAI and Google DeepMind. But Anthropic has distinguished itself by emphasizing that its models are not merely powerful; their architecture incorporates embedded guardrails. The company has publicly stated that it has declined to grant the Department of Defense unrestricted use of its models for certain applications, specifically mass domestic surveillance and fully autonomous weapons. In response, Secretary of Defense Pete Hegseth declared the company a “supply chain risk” and signaled that it could be excluded from defense procurement ecosystems unless it complied with the government’s terms.
The Pentagon’s position, as reported, is that it must have access to AI tools for “any lawful use.” From the government’s perspective, the stakes are obvious. AI is now integral to logistics, intelligence analysis, battlefield planning, and, potentially, autonomous systems. The United States faces strategic competition with China and other adversaries investing heavily in military AI. Should a private vendor’s internal ethical policies veto what defense officials consider lawful and necessary uses of a technology purchased with public funds?
Legal scholars in the field, such as Tess Bridgeman, immediately pointed out the contradictions, dubious legal assumptions, and practical barriers in the position Hegseth seemed to take, and even questioned whether he intended to carry out his threat.
Alongside domestic surveillance, major powers are also racing to integrate AI into military systems, including autonomous weapons and other autonomous operational capabilities. China, for example, has reportedly developed AI-enhanced unmanned vehicles and AI-powered “swarm” technologies capable of cooperative action among large numbers of drones or robotic units without continuous human control.⁷ The trend reflects a broader global pattern in which artificial intelligence is moving out of analytics and into direct force application, raising deep questions about how ethical limits are set and enforced, precisely the questions at stake in the dispute over who controls AI ethics in the first place.
The government’s argument has force. The Constitution charges the federal government with providing for the common defense. National defense is not a hobby; it is one of the state’s core responsibilities. In wartime and even in tense peacetime, the government must be able to procure the tools it deems essential. Historically, America’s economic strength has been decisive in war precisely because private industry could be mobilized to produce ships, planes, steel, and munitions at scale. The phrase “arsenal of democracy” was not rhetorical flourish. It described a system in which private enterprise — operating for profit — supplied the matériel that preserved political freedom.
But what is happening in this dispute is not merely procurement. It is not a disagreement over price, performance, or delivery schedules. It is a dispute over who controls the ethical architecture of a technology. Anthropic is not refusing to sell computers. It is refusing to strip out guardrails that it believes are integral to the responsible design of its product. The Pentagon is not demanding a faster processor. It is demanding that the company relinquish the authority to restrict how its system is used.
The contradiction becomes acute. For the past several years, government officials have insisted that AI companies must internalize ethical responsibility. They must prevent misuse. They must anticipate harms. They must build systems that refuse dangerous or unlawful requests. Regulators argue that without such constraints, AI systems could be used for surveillance abuses, disinformation campaigns, or autonomous violence. Ethics, we are told, cannot be left to the market alone.
Across the globe, governments are already pushing the boundaries of what AI can do in practice. Most starkly, China’s government has built what may be the largest AI-supported surveillance apparatus in human history, deploying hundreds of millions of public-facing cameras linked to facial-recognition and data-fusion systems that can identify and track individuals in real time. In recent years, analysts have estimated that China operates well over half of the world’s surveillance cameras — many of them capable of identifying people and tracing movements across cities, social venues, and even everyday public spaces — creating an infrastructure that can monitor citizens at an unprecedented scale.⁶
Yet when a company builds those very ethical constraints into its design and applies them even to its most powerful potential customer, it is threatened with economic exclusion. The message is unmistakable: ethics are mandatory — except when the sovereign decides otherwise.
The mechanism of pressure is also revealing. By designating Anthropic a “supply chain risk,” the Defense Department reportedly signaled not only that it would decline to contract with the company but that other firms in the defense industrial base might be discouraged — or effectively barred — from doing business with it. In modern administrative practice, such a designation can function as a form of industrial excommunication, much as the Amsterdam Jewish community’s herem isolated the heretic Baruch Spinoza. The company is not nationalized; it is isolated. The pressure is not a knock at the factory gate; it is a warning to partners and customers that continued association carries risk.
There are historical precedents for government commandeering or directing private industry in times of emergency. The Defense Production Act of 1950 grants broad authority to prioritize contracts and allocate materials deemed necessary for national defense. Presidents of both parties have invoked it in wartime and in domestic emergencies. Yet even at the height of the Korean War, the Supreme Court in Youngstown Sheet & Tube Co. v. Sawyer rejected President Truman’s attempt to seize steel mills absent explicit congressional authorization. The Court’s decision stands as a reminder that “necessity” does not erase constitutional structure. Emergency powers have limits.
What is novel here is that the object of contention is not physical production but moral design. AI systems like Claude are not ordinary tools; they are decision-support systems that can be configured to refuse certain tasks. Anthropic has said that it drew two bright lines: no mass domestic surveillance of Americans and no fully autonomous weapons. Whether one agrees with those lines is not the immediate question. The immediate question is whether a private company in a constitutional republic can adopt such lines and adhere to them even when the government disapproves.
As I argue in my forthcoming book, A Serious Chat with Artificial Intelligence, the defining feature of contemporary AI is not merely its intelligence but its embedded moral design, the guardrails that translate power into socially tolerable use. What is at stake in the present controversy is not a contract term but control of that moral design.
Critics of capitalism often describe corporations as purely profit-driven entities, indifferent to moral considerations so long as consumers are served and shareholder returns are maximized. Some defenders of capitalism have encouraged that caricature, arguing that business should concern itself solely with serving consumers within the bounds of law. But the real world is more complicated. Companies are run by individuals — owners, directors, executives — who have moral convictions, reputational concerns, religious beliefs, and political commitments. These shape corporate policies in ways that go beyond immediate profit calculation.
In a free society, the liberty and diversity of individual judgment are foundational, even metaphysical. If there is an implicit “social contract,” a rational incentive for instituting government, it is to gain the benefits of its defense of our rights without relinquishing our fundamental means of survival: the freedom to act on our judgment. Firms choose to avoid certain markets, to refuse certain clients, or to embed certain principles in their products. A newspaper may decline to publish particular advertisements. A technology company may refuse to build backdoors into its encryption. A pharmaceutical company may set conditions on distribution. The principle underlying these choices is freedom of conscience exercised through private property and voluntary exchange.
The government’s legitimate role is to protect the rights of individuals against force and fraud, including aggression by foreign powers. That role necessarily includes maintaining an adequate defense. It may purchase weapons, hire troops, build infrastructure, and contract with private suppliers. But the claim implicit in the Pentagon’s reported demand is more expansive: that when national security is invoked, the state’s judgment supersedes the moral constraints of the supplier. The company may sell — but only on terms that dissolve its own ethical boundaries.
Is that claim unique to this administration? Hardly. Governments of all stripes tend to assert broad discretion in matters of defense and intelligence. The difference here is that the technology in question is widely used in civilian contexts and subject to intense debate about ethical design. If AI companies are to be treated as quasi-public utilities whose internal policies must conform to federal guidance, then the principle should be stated plainly. If, instead, they are private actors responsible for their own moral choices within the law, then those choices cannot be overridden by administrative pressure alone.
There is also the matter of competition. Reporting indicates that Claude has been used, via partnerships with firms such as Palantir, in significant U.S. operations abroad. Whether one applauds or criticizes those operations, they demonstrate how quickly frontier AI has become operationally consequential. If Anthropic steps back from certain uses, other firms — OpenAI, Google DeepMind, or new entrants — may be willing to step forward. The economic incentives are enormous. Defense contracts are lucrative. Refusal by one vendor creates opportunity for another.
That dynamic helps explain the relative silence of other major AI firms. A united industry front insisting on the legitimacy of private ethical limits would force the government into negotiation. A fragmented industry competing for favor shifts leverage to the state. In the absence of solidarity, “AI ethics” risks becoming whatever the most powerful customer demands.
None of this is to deny the seriousness of national security. If adversaries develop and deploy autonomous weapons or pervasive AI-driven surveillance, American officials cannot simply abstain on moral grounds. The world is not a seminar room. But the constitutional design of the United States presumes that power is divided and limited precisely because the concentration of unchecked authority is dangerous — even when exercised with good intentions.
The deeper issue raised by the Anthropic–Pentagon dispute is not whether a particular application of Claude should be permitted. It is whether the state may simultaneously demand that private innovators internalize ethical responsibility and then exempt itself from those same constraints. If ethics are indispensable to safe AI, they are most indispensable where power is greatest and secrecy deepest. If, on the other hand, ethical guardrails must yield whenever national security is invoked, then regulators should be candid that “ethical AI” is conditional, not foundational.
