Iran's AI Lego Propaganda Targets Trump and Netanyahu in Slopaganda War

In a bizarre twist in modern information warfare, Iranian propaganda efforts have deployed AI-generated videos featuring Lego figurines of Donald Trump, Benjamin Netanyahu, and Satan. This surreal content represents the latest escalation in what experts term "slopaganda": AI-generated material designed for political manipulation.

The Rise of Slopaganda in Global Conflicts

The concept of slopaganda, coined by researchers Mark Alfano and Michał Klincewicz in a recent academic paper, describes propaganda that uses artificial intelligence to create attention-grabbing, emotionally charged content. This phenomenon has accelerated dramatically since late 2025, with both state and non-state actors exploiting generative AI tools.

Following US-Israeli strikes on Iran in June 2025, the White House released a video blending real military footage with clips from movies, television series, and video games. Iran and its sympathizers responded by flooding social media platforms with outdated war footage alongside entirely fabricated AI-generated content showing attacks on Tel Aviv and on American bases in the Persian Gulf.

Most recently, viral videos reportedly created by an Iranian team depict Donald Trump, Jeffrey Epstein, Satan, Benjamin Netanyahu, Pete Hegseth, Ayatollah Khamenei, and other figures as Lego figurines engaged in various scenarios. This represents just one manifestation of slopaganda, which can also include images, text, or any other content AI systems can generate.

How Slopaganda Manipulates Public Perception

Slopaganda operates through several psychological mechanisms that bypass traditional critical-thinking defenses. First, through repeated exposure across both traditional and social media, this content can penetrate mental barriers: it captures attention through emotional arousal, typically negative emotions, and reaches distracted audiences scrolling through feeds or switching between browser tabs.

Second, slopaganda effectively dilutes what philosophers call the "epistemic environment" with falsehoods and half-truths. Generative AI tools like ChatGPT can function as what scholars describe as "bullshit machines," producing content that is indifferent to truth. Slopaganda is a specialized form of this AI-generated bullshit, though it takes on a distinct character when deployed in campaigns like the Iranian Lego videos.

Rather than aiming for factual accuracy, this slopaganda serves expressive and emblematic purposes, creating emotional associations between concepts and figures. The intended associations, such as Satan with Trump or the United States with evil, bypass rational evaluation through symbolic manipulation.

The Threat to Shared Truth and Public Trust

A third concern involves genuinely misleading slopaganda, whether by design or through "context collapse," in which jokes or trolling escape their intended context and are mistaken for serious information. During conflicts, crises, and emergencies, when authoritative sources are scarce but demand for information is high, misleading slopaganda, including deepfakes, can spread rapidly with significant consequences.

Once misleading information or particular associations enter public consciousness, they prove difficult to dislodge. Even small misleading effects across large populations can influence group beliefs, election outcomes, protest movements, or public sentiment about unpopular military engagements.

Fourth, the prevalence of slopaganda may have a corrosive effect on trust itself. As people become better at identifying AI-generated content, they may also become more likely to misidentify authentic material as slop. This could lead to a general erosion of trust in genuinely trustworthy individuals and institutions, fostering what researchers describe as "nihilistic doubt" about the possibility of knowing anything with certainty.

When identifying trustworthy sources becomes difficult or impossible, people may default to believing whatever they find comforting, invigorating, or infuriating. In increasingly polarized societies facing interlocking economic, political, military, and environmental crises, the breakdown of shared sources of truth threatens to exacerbate existing divisions.

Three Strategies to Counter the Slopaganda Threat

Researchers propose interventions at three levels to address what they term the "slopaganda shitstorm." First, individuals can enhance their digital literacy by learning to identify telltale signs of AI generation in text, images, and video. They should practice checking sources thoroughly rather than merely glancing at headlines, and consider blocking sources that routinely spread slopaganda rather than evaluating each piece of content in isolation.

Second, industry and regulators can implement technological solutions such as watermarking AI-generated content. Some content may need to be removed outright from the platforms where people access news and other important information, to prevent it from contaminating the information ecosystem.

Third, large technology companies including OpenAI, Google, and X can be held accountable for their role in creating the tools that enable slopaganda. This could involve taxation and other interventions to fund both regulatory efforts and digital literacy education programs.

While slopaganda appears likely to remain a feature of the information landscape, researchers argue that with sufficient foresight and courage, societies may still adapt to this challenge and potentially even control its most damaging effects. The battle for truth in the digital age has entered a new phase where plastic figurines and artificial intelligence combine to shape perceptions in ways both absurd and profoundly concerning.