X Cracks Down on Unlabeled AI War Videos with 90-Day Monetization Ban

X Implements Strict Penalties for Unlabeled AI-Generated War Content

Elon Musk's social media platform X has taken decisive action to combat the proliferation of artificial intelligence-generated videos depicting the ongoing conflict in the Middle East. The company has announced that users who post AI-made war videos without clearly labeling them will face suspension from X's monetization program for an initial period of 90 days. Any subsequent violations will result in permanent removal from the program, according to a statement from the company's head of product, Nikita Bier, issued on Tuesday.

Combating Misinformation During Times of Conflict

Bier emphasized the critical importance of authentic information during wartime, stating that "with today's AI technologies, it is trivial to create content that can mislead people." He further elaborated that "during times of war, it is critical that people have access to authentic information on the ground." This new policy comes in direct response to the recent military strikes between the US, Israel, and Iran, which have plunged the region into heightened conflict and triggered a significant wave of misleading AI-generated posts across social media platforms.

Examples of Deceptive AI-Generated Content

The platform has been flooded with fabricated videos designed to appear as genuine war footage. One particularly egregious example showed purported Israeli soldiers weeping in fear in reaction to an Iranian strike; the clip amassed more than 1.4 million views. Another widely circulated fabrication, viewed by over 2.1 million people, depicted Dubai's iconic Burj Khalifa skyscraper completely engulfed in flames after a supposed attack by Iran.

A separate video posted on X claimed to show "Iranian missiles hit[ting] central Israel," with footage appearing to depict a massive explosion on a building. In reality, this clip was entirely AI-generated, a fact that was later identified and marked by vigilant users on the platform. The company clarified on Tuesday that AI-made content would be identified either through crowdsourced notes from users or by analyzing metadata and other digital signals that indicate the use of generative AI tools.

Further Instances of Fabricated War Footage

Additional misleading content included a video falsely claiming that Iranian ballistic missiles had obliterated "everything in their path" in Tel Aviv. The AI-generated footage showed what appeared to be a barrage of rockets raining down on the Mediterranean city, with explosions and clouds of smoke visible in the distance as the person filming seemingly zoomed in. Another post claimed to show an attack on an unnamed Israeli airport. However, these seemingly terrifying scenes were entirely fabricated by artificial intelligence.

How to Identify AI-Generated Videos

Experts have outlined several key indicators to help users spot AI-generated video content. According to the BBC, common signs include low picture quality and very short video durations. Some AI bots may also incorporate out-of-date information, leading to inaccurate depictions of locations. The Better Business Bureau notes that strange textures or an almost airbrushed look can be strong indicators. Other telltale signs include physical inconsistencies within the scene, unnatural shadows, and irregular lighting. Interestingly, the presence of typos can sometimes be an encouraging sign of human creation, as humans are more likely to make such errors than machines.

Musk's Vision and Platform Policy Alignment

Elon Musk has previously predicted that AI-made video represents the future of digital content, even as his own platform now seeks to combat the misinformation propagated by this very technology. "Most of what people consume in five or six years - maybe sooner than that - will be just AI-generated content," Musk stated in October. Under the newly implemented guidelines, X users are required to add a "Made with AI" label by opening the menu on their post and selecting the "Add Content Disclosures" option. Those who fail to comply face the penalties outlined above: a 90-day suspension from the monetization program, with permanent removal for repeat offenses.

External Praise and Internal Adjustments

The policy shift has received praise from external observers. Sarah Rogers, the under secretary of state for public diplomacy during the Trump administration, commended the move, stating, "This is a great complement to X's community notes system, which results in less 'reach' (thus monetization) for content annotated as inaccurate." Rogers added, "You don't need a Ministry of Truth to incentivize truth online." This development occurs as X continues to tighten its guardrails concerning AI. Last month, the company announced it would make adjustments to its AI tool, Grok, to prevent the generation of overly sexualized images. Grok had previously faced criticism for engaging with antisemitic tropes and claims of white genocide.