Trump's AI Warfare: A Dangerous New Era of Military Strategy

An explosion in Tehran, Iran, on 1 March 2026 (photograph: Abedin Taherkenareh/EPA) captures a stark new reality in global conflict. Donald Trump is reportedly leveraging artificial intelligence to fight his wars, a dangerous turning point in military history. The technology many people use merely as a chatty tool for daily tasks is now aiding US military aggression, and there is alarmingly little that can be done to curb the trend.

The Militarisation of AI: From Chatbots to Combat

Artificial intelligence boasts a wide array of capabilities, from organising shopping lists and crafting bedtime stories for children to enhancing workplace efficiency and improving governmental operations. But the risks of militarising AI remain underreported and demand far more attention. In the past three months, Trump's White House has allegedly used AI on two occasions to effect regime change, or to come perilously close to it: in the recent case of Iran, the final push was reportedly left to ordinary Iranians.

First, Anthropic's Claude AI model, commonly used as a more discerning alternative to ChatGPT, was purportedly utilised both to plan and execute the abduction of Nicolás Maduro from his compound in Venezuela, though the specifics of its application remain unclear. Then, this weekend, reports emerged that the same AI tool was deployed again to analyse intelligence, aiding in the devastating missile barrage on Iran by identifying targets and running simulations.


Significance and Unease in Global Conflict

The significance of these events cannot be overstated. AI has been integrated into the planning and execution of military operations, causing an unknown number of casualties and destabilising the Middle East. The development has sparked widespread unease, including from Dario Amodei, CEO of Anthropic, who is now in a public dispute with the US president after refusing to relax two "red lines" for Claude: no use for mass domestic surveillance, and no fully autonomous weapons that select targets without human oversight. OpenAI, meanwhile, has entered an agreement with the Pentagon, claiming protections stronger than those Anthropic sought.

Regardless of contractual details, it is crucial to reiterate that a tool initially designed as a conversational interface for tasks like email summarisation and cover letter writing is now part of the chain that converts information into violence. Questions about who should control AI and its military applications, once abstract debates among academics, have become urgent realities following the events in Venezuela and Iran.

Shifting Principles and Historical Parallels

Traditional principles of armed conflict emphasised deterrence: possessing formidable weapons without using them, as in the theory of mutually assured destruction with nuclear arms. Yet early evidence from war games suggests that AI decision-makers are alarmingly quick to reach for nuclear weapons, undermining that deterrent. With AI proving effective in military planning, more countries are likely to adopt it, despite the moral quandaries of delegating military decisions to machines. Historians may come to view this period as they do the nuclear bombings of Japan: a clear before, and an uncertain after, in warfare.

Limited Options and International Imperatives

So, what can be done? Very little, as opportunities for a blanket ban on military AI have been missed. Over a decade ago, Demis Hassabis took a principled stand by selling DeepMind to Google only under the condition that the technology not be used militarily, but last year, Alphabet quietly abandoned this promise. Trump's actions have further eroded such ethical boundaries.


Now, the international community must work diligently to pull Trump back from the brink. Allies should pressure his administration not only to use AI responsibly in military contexts but also to accept binding constraints. This includes fostering international commitments, transparent procurement standards, and meaningful oversight, with other nations joining rather than treating ethics as a hindrance. If the world's most powerful military normalises consumer-grade AI models in regime-change operations, we will enter a whole new, more dangerous world.

Chris Stokel-Walker is the author of TikTok Boom: The Inside Story of the World’s Favourite App.