AI Warfare Escalates in Iran Conflict as Pentagon Clashes with Tech Firms

The Dawn of AI-Driven Warfare: A Paradigm Shift in Modern Conflict

The intensifying conflict involving Iran has starkly illuminated a profound and rapid transformation in military strategy, driven by the escalating deployment of artificial intelligence. This technological revolution is collapsing the distinction between theoretical debate and battlefield reality, with AI systems now actively involved in identifying and prioritising targets, recommending specific weaponry, and even evaluating the legal grounds for launching a strike. This shift represents an era of warfare that operates, as experts describe, "quicker than the speed of thought."

Corporate Safeguards and Pentagon Pushback

A significant political row has erupted in the United States over the control of these powerful AI capabilities. The AI company Anthropic publicly insisted it could not remove its built-in safeguards, which are designed to prevent the Department of Defense from utilising its technology for purposes like domestic mass surveillance or the development of fully autonomous lethal weapons. In response, the Pentagon stated it had no interest in such applications but argued vehemently that such critical decisions should not be made unilaterally by private corporations.

The administration's reaction was severe; not only was Anthropic dismissed from its role, but it was also blacklisted as a potential supply-chain risk. OpenAI subsequently stepped into the breach, while asserting it had maintained the same ethical red lines previously declared by Anthropic. However, OpenAI's CEO, Sam Altman, acknowledged internally that the company does not ultimately control how the Pentagon uses its products and conceded that the handling of the deal made the organisation appear "opportunistic and sloppy" to its own employees and users.


Human Control Becoming a Mere Formality

Campaigners such as Nicole van Rooijen, executive director of Stop Killer Robots, warn that the core issue extends beyond whether autonomous weapons will be deployed. The concern is how precursor AI systems are already fundamentally transforming the conduct of war, with meaningful human control at risk of devolving into an afterthought or a bureaucratic formality. This paradigm shift is not a future possibility but a present reality.

Despite the public controversy, reports indicate that Anthropic's Claude AI system has facilitated a massive and intensifying offensive in Iran, an operation already estimated to have killed over a thousand civilians. AI is not a prerequisite for military errors or unaccountability; human decisions at the highest levels remain culpable, as seen when Pentagon officials dodged questions about a strike on an Iranian school that killed 165 schoolgirls. However, AI dramatically amplifies the scale and speed of lethal operations.

The Dehumanising Scale of Automated Targeting

The impacts are plain to military users. One Israeli intelligence source, reflecting on AI's use in the war on Gaza, observed ominously that "the targets never end. You have another 36,000 waiting." Another official revealed he spent a mere twenty seconds assessing each AI-proposed target, stating he had "zero added-value as a human, apart from being a stamp of approval." This process eases mass killing in every sense, introducing further moral and emotional distancing while drastically reducing operational accountability.

The Urgent Need for Democratic Oversight

In the face of this accelerating technological arms race, there is a pressing and essential need for robust democratic oversight and binding multilateral constraints, rather than leaving fateful decisions solely in the hands of tech entrepreneurs and defence departments. As bombs continued to fall on Iran, states convened in Geneva to address the issue of lethal autonomous weapons systems. The draft text under consideration provides a strong foundation for an international treaty that is desperately needed.


While most governments seek clear, enforceable guidance on the military application of AI, the largest global players resist firm regulation, though their participation in the discussions is at least a modest positive. The blistering pace of AI-driven warfare creates a perilous perception that caution equates to ceding strategic advantage to adversaries. Yet, as growing numbers of technology workers and even military officials are coming to realise, the profound dangers of uncontrolled technological expansion in warfare far outweigh any perceived tactical delay. The future of conflict governance must be decided now, before the paradigm shift becomes irreversible.