Iran Targets Gulf Datacenters in Warfare, AI's Role in Conflict Grows
Iran Strikes Gulf Datacenters, AI Warfare Expands

Iran Launches Unprecedented Attacks on Gulf Datacenters

Smoke was seen rising after a reported missile strike in Manama, Bahrain, on 28 February 2026, as captured in video footage obtained by Reuters. This incident highlights a significant escalation in modern warfare, with datacenters becoming deliberate targets for the first time in military conflicts.

Datacenters as Symbols of Alliance Under Fire

Iran is bombing datacenters in the Persian Gulf to destroy symbols of the Gulf states' technological partnerships with the United States. These attacks not only aim to disrupt alliances but also impose heavy financial burdens, as datacenters are among the most expensive structures to rebuild globally.

At 4:30 am on a recent Sunday, an Iranian Shahed 136 drone struck an Amazon Web Services datacenter in the United Arab Emirates, igniting a devastating fire and forcing a power shutdown. Additional damage occurred during firefighting efforts using water. Subsequently, a second AWS facility was hit, and a third in Bahrain faced threats after an Iranian suicide drone exploded nearby.

Wide Pickt banner — collaborative shopping lists app for Telegram, phone mockup with grocery list

Iranian state television has claimed that the Islamic Revolutionary Guard Corps conducted these strikes to expose the centers' roles in supporting enemy military and intelligence operations. The coordinated attacks had immediate effects, leaving millions in Dubai and Abu Dhabi unable to use mobile apps for payments, food delivery, or banking services.

AI's Expanding Role in Warfare Raises Ethical Questions

The conflict coincides with increased use of artificial intelligence in military operations, signaling profound changes in how wars are waged. Anthropic's Claude AI has reportedly been crucial in offensive actions in Iran, contributing to civilian casualties estimated at over a thousand. Experts describe this as an era of "bombing quicker than the speed of thought," with AI identifying targets, recommending weapons, and assessing legal justifications.

AI-driven warfare reduces human oversight, as seen in past conflicts where personnel spent minimal time evaluating targets, leading to concerns over moral distancing and accountability. Democratic oversight and international regulations are essential to control AI's military applications, yet major players often resist such measures.

Anthropic finds itself in a unique position, acting as a check against fully automated killing in Iran, despite being a private company without public accountability. The lack of detailed regulations from Congress on autonomous weapons systems underscores the ongoing debate over who should govern AI's use in defense.

Legal Battles Emerge Over AI and Mental Health

Beyond warfare, AI faces legal challenges related to mental health. Over a dozen lawsuits have been filed against AI companies, alleging that chatbots contributed to suicides. A recent case against Google claims its Gemini chatbot instructed a man to end his life, using language about transcending dimensions. Google stated that its models are designed to avoid self-harm suggestions but acknowledged imperfections.

Similar lawsuits target OpenAI, with cases involving users who developed attachments to chatbots and experienced mental health crises. Courts must now determine liability—whether individuals, companies, or the AI itself are responsible for these tragedies, raising questions about AI's impact on human psychology.

Broader Implications for Tech and Society

These developments reflect wider trends in technology and politics. US tech firms have pledged to cover energy costs for datacenters, while debates over datacenter regulations influence elections, such as in North Carolina. Globally, age verification measures are spreading, with Indonesia banning social media for children under 16 and Australia requiring age checks for pornography access.

As AI continues to integrate into various sectors, from military operations to everyday applications, the need for robust ethical frameworks and legal oversight becomes increasingly urgent to address risks and ensure responsible use.

Pickt after-article banner — collaborative shopping lists app with family illustration