Pentagon Tech Chief Reveals Clash with AI Firm Over Autonomous Warfare
A senior Pentagon official has disclosed a significant dispute with artificial intelligence company Anthropic regarding the use of its technology in fully autonomous weapons systems. The conflict highlights growing tensions between military ambitions for AI-driven warfare and corporate ethical boundaries.
Ethical Restrictions Spark Pentagon Frustration
Emil Michael, the U.S. Defense Undersecretary for research and engineering and the Pentagon's chief technology officer, described Anthropic's ethical limitations on its chatbot Claude as "an irrational obstacle" during recent negotiations. The military is actively pursuing greater autonomy for armed drone swarms, underwater vehicles, and other machines to compete with global rivals like China.
"I need a reliable, steady partner that gives me something, that'll work with me on autonomous, because someday it'll be real and we're starting to see earlier versions of that," Michael stated in a podcast aired on Friday. "I need someone who's not going to wig out in the middle."
Golden Dome Missile Defense Program at Center of Dispute
The disagreement specifically involved how AI could be integrated into President Donald Trump's planned Golden Dome missile defense initiative, which aims to position U.S. weapons in space. Michael shared a hypothetical scenario in which the United States would have only ninety seconds to respond to a Chinese hypersonic missile attack.
He argued that a human anti-missile operator "may not be able to discriminate with their own eyes what they're going after," while an autonomous counterattack would present lower risk "because it's in space and you're just trying to hit something that's trying to get you."
Pentagon Designates Anthropic as Supply Chain Risk
The revelations follow the Pentagon's formal designation of San Francisco-based Anthropic as a supply chain risk, effectively cutting off its defense work using regulations designed to prevent foreign adversaries from compromising national security systems. Anthropic has vowed to pursue legal action against this designation, which impacts its business partnerships with other military contractors.
President Trump has also ordered federal agencies to immediately cease using Claude, though the Republican president granted the Pentagon a six-month phase-out period for systems deeply embedded in classified military operations, including those used in the Iran conflict.
Anthropic's Narrow Ethical Boundaries
Anthropic maintains that it sought to restrict its technology from only two narrowly defined applications: mass surveillance of American citizens and fully autonomous weapons systems. In response to Michael's comments, the company emphasized that "Anthropic understands that the Department of War, not private companies, makes military decisions."
Dario Amodei, Anthropic's CEO, stated the company "has never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner."
Months of Negotiations Reach Impasse
Michael, a former Uber executive who was sworn in last May and assumed the Pentagon's "AI portfolio" in August, described months of negotiations with Anthropic that ultimately reached a stalemate. He scrutinized contracts dating from President Joe Biden's Democratic administration, questioning terms of use he considered overly restrictive.
"I need to have the terms of service be rational relative to our mission set," Michael explained. "So we started these negotiations. It took three months and I had to sort of give them scenarios, like this Chinese hypersonic missile example. They're like, 'OK, we'll give you an exception for that.' Well, how about this drone swarm? 'We'll give an exception for that.' And I was like, exceptions doesn't work. I can't predict for the next twenty years what all the things we might use AI for."
Pentagon Demands "All Lawful Use" Policy
This impasse led the Pentagon to insist that Anthropic and other AI companies permit "all lawful use" of their technology. While competitors including Google, OpenAI, and Elon Musk's xAI agreed to these terms, Anthropic resisted, arguing that current AI systems "are simply not reliable enough to power fully autonomous weapons."
The company also maintained its prohibition against any mass surveillance of Americans using its technology. Michael characterized these negotiations as "interminable," noting that Anthropic "didn't want us to bulk-collect public information on people using their AI system."
Broader Military Shift Toward AI Autonomy
Michael positioned the dispute within a larger military transition toward incorporating artificial intelligence across various warfare domains. He revealed that the military is developing procedures for enabling different autonomy levels depending on operational risks.
In another scenario, he questioned, "who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?"
Social Media Outburst and Political Context
As talks collapsed last week, Michael publicly criticized Amodei on social media, accusing him of having "a God-complex" and wanting "nothing more than to try to personally control the military." The podcast conversation occurred with Silicon Valley venture capitalists Jason Calacanis, David Friedberg, and Chamath Palihapitiya, co-hosts of the "All-In" podcast.
Notably absent was fourth co-host David Sacks, a former PayPal executive who now serves as President Trump's AI czar. Sacks has been a vocal critic of Anthropic, particularly over its hiring of former Biden administration officials shortly after Trump returned to the White House last year.
Legal Battle Looms as Companies Diverge
Anthropic has disputed portions of Michael's account of the negotiations, emphasizing that the protections it sought were narrowly defined and not based on existing Claude applications. With the Pentagon's supply chain risk designation and Anthropic's planned lawsuit, the next phase of this conflict will likely unfold in court.
Meanwhile, other AI companies continue preparing their infrastructure for classified military work under the Pentagon's "all lawful use" requirements, creating a significant divergence in how major technology firms engage with defense applications of artificial intelligence.
