Hegseth and Anthropic CEO to Meet Amid Military AI Ethics Debate

Defense Secretary Pete Hegseth is scheduled to meet with Anthropic CEO Dario Amodei on Tuesday, as discussions escalate regarding the ethical implications of artificial intelligence in military applications. This meeting comes at a critical juncture, with Anthropic standing as the sole major AI firm not currently supplying its technology to a new U.S. military internal network, known as GenAI.mil.

Ethical Concerns and Military Integration

Anthropic, the creator of the Claude chatbot, has declined to comment on the specifics of the meeting. However, CEO Dario Amodei has publicly expressed significant ethical reservations about unregulated government deployment of AI. His concerns include the potential dangers of fully autonomous armed drones and AI-assisted mass surveillance systems that could monitor and suppress dissent.

A defense official, speaking on condition of anonymity because they were not authorised to discuss the meeting publicly, confirmed that Hegseth and Amodei would meet. The engagement underscores the broader debate over AI's role in national security, particularly in high-stakes scenarios involving lethal force, sensitive information, or government surveillance.


Pentagon Contracts and AI Approval

Last summer, the Pentagon announced defense contracts worth up to $200 million each with four AI companies: Anthropic, Google, OpenAI, and Elon Musk's xAI. Anthropic was the first to gain approval for operation on classified military networks, collaborating with partners such as Palantir. In contrast, the other three companies are currently limited to unclassified environments.

By early this year, Hegseth had begun highlighting only two of these firms: xAI and Google. In a January speech at SpaceX in South Texas, Hegseth emphasised his commitment to AI models that support military operations without ideological constraints. He stated, "I am shrugging off any AI models that won't allow you to fight wars," adding that the Pentagon's AI systems would operate "without ideological constraints that limit lawful military applications" and would not be "woke."

Recent Developments and Safety Focus

In January, Hegseth announced that Musk's AI chatbot Grok would join the Pentagon's GenAI.mil network, shortly after Grok faced global scrutiny for generating non-consensual deepfake images. Following this, OpenAI revealed in early February that it would also participate in the military's secure AI platform, offering a custom version of ChatGPT for unclassified tasks.

Anthropic has consistently positioned itself as the most safety-conscious of the leading AI companies. Founded in 2021 by former OpenAI employees, the firm has advocated for stringent safeguards. Owen Daniels, associate director at Georgetown University's Center for Security and Emerging Technology, noted that Anthropic's peers, including Meta, Google, and xAI, have complied with Pentagon policies on lawful AI use, which could weaken Anthropic's bargaining power and diminish its influence over how the military adopts AI.

Political Tensions and Advocacy

The meeting also reflects ongoing political friction. Anthropic's advocacy for stricter AI regulations has previously clashed with the Trump administration. The company publicly criticised proposals to loosen export controls on AI chips to China and has been involved in lobbying efforts for state-level AI regulation. Trump's top AI adviser, David Sacks, accused Anthropic of "running a sophisticated regulatory capture strategy based on fear-mongering."

In response, Anthropic has sought a bipartisan approach, hiring former Biden officials and adding Chris Liddell, a former White House official from Trump's first term, to its board. This strategy aims to balance technological optimism with pragmatic risk management, as Amodei argued in a recent essay, warning that "we are considerably closer to real danger in 2026 than we were in 2023" but advocating for realistic mitigation.


Historical Context and Future Implications

The current debate echoes past controversies, such as the Project Maven drone surveillance program, which prompted tech worker protests and Google's withdrawal. Despite such opposition, military reliance on advanced technologies like drone surveillance has only increased. Owen Daniels emphasised that "the use of AI in military contexts is already a reality and it is not going away," noting that while some applications are low-stakes, battlefield deployments carry higher risks, such as lethal force or nuclear arms, and that efforts to mitigate those risks have been under way for nearly a decade.

As Hegseth and Amodei prepare to discuss these critical issues, the outcome could significantly influence the future integration of AI in defense strategies, balancing innovation with ethical safeguards in an increasingly complex security landscape.