Trump Directs Federal Agencies to Immediately Cease Using Anthropic AI Technology
In a significant escalation of tensions between the government and the artificial intelligence sector, President Donald Trump has ordered all federal agencies to begin phasing out the use of Anthropic technology. This directive follows an unusually public dispute between the AI company and the Pentagon over the safety and ethical deployment of artificial intelligence in military contexts.
Pentagon Ultimatum Precipitates Presidential Action
The presidential order came just over an hour before the Pentagon's deadline for Anthropic to permit unrestricted military use of its AI technology or face severe consequences. This development occurred nearly 24 hours after Anthropic CEO Dario Amodei publicly declared that his company "cannot in good conscience accede" to the Defense Department's demands.
At the core of this conflict lies a fundamental disagreement about AI's role in national security and profound concerns about how increasingly capable machines might be deployed in high-stakes scenarios involving lethal force, sensitive information, or government surveillance.
Anthropic's Stance and Industry Reactions
Anthropic, creator of the Claude chatbot, had sought specific assurances from the Pentagon that its technology would not be used for mass surveillance of American citizens or in fully autonomous weapons systems. After months of private negotiations erupted into public view, the company issued a statement asserting that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."
The dispute has polarized the technology industry, with Anthropic's main rivals, OpenAI and Google, voicing support for Amodei's position in an open letter released late Thursday. This support came despite the fact that both companies, along with Elon Musk's xAI, also hold contracts to supply AI models to the military.
Military Warnings and Potential Consequences
Military officials had warned that if Amodei did not compromise, they would not only cancel Anthropic's contract but also "deem them a supply chain risk"—a designation typically applied to foreign adversaries that could severely jeopardize the company's critical business partnerships.
Defense Secretary Pete Hegseth met with Amodei on Tuesday; at that meeting, military officials outlined potential actions, including invoking the Cold War-era Defense Production Act to grant the military sweeping authority to use Anthropic's products without the company's approval.
Industry Leaders Take Sides in Growing Controversy
The debate has drawn prominent technology leaders into opposing camps. Elon Musk sided with the Trump administration, stating on his social media platform X that "Anthropic hates Western Civilization." This comment followed Defense Undersecretary Emil Michael's criticism of Amodei, alleging he "has a God-complex" and "wants nothing more than to try to personally control the US Military."
In a surprising development, OpenAI CEO Sam Altman—a former colleague of Amodei's—publicly supported Anthropic's position during a CNBC interview, questioning the Pentagon's "threatening" approach and suggesting that much of the AI sector shares similar ethical boundaries.
Political and Military Perspectives
Concerns about the Pentagon's strategy have been raised by both Republican and Democratic lawmakers, as well as former military leaders. Retired Air Force General Jack Shanahan, who previously led the Defense Department's Project Maven, expressed sympathy for Anthropic's position in a social media post.
"Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," Shanahan wrote, noting that Claude is already widely used across government agencies, including in classified settings, and that Anthropic's ethical boundaries appear "reasonable."
Broader Implications for AI Governance
This confrontation represents a critical moment in the evolving relationship between government institutions and private technology companies developing advanced artificial intelligence. The outcome may establish important precedents for how AI safety concerns are balanced against national security requirements.
As the situation continues to develop, the technology industry watches closely to see whether other AI companies will face similar pressures from government agencies seeking to deploy their technologies for military and security applications.
