Millions Create Deepfake Nudes on Telegram as AI Tools Fuel Global Digital Abuse Wave

An investigation by the Guardian has found that millions of people worldwide are using artificial intelligence tools to create and share deepfake nudes on the messaging platform Telegram. The practice is industrialising the online abuse of women, with at least 150 active channels identified across countries including the UK, Brazil, China, Nigeria, Russia, and India.

Global Channels Facilitate Non-Consensual Content Creation

These Telegram channels, which function as large group chats, offer "nudified" photos or videos for a fee. Users upload an image of a woman, and AI tools generate explicit content from it, including videos of her appearing to perform sexual acts. Many channels run feeds of celebrities, social media influencers, and ordinary women, all altered by AI to appear nude or engaged in sexual activity. Followers also exchange tips on available deepfake tools, sustaining this abusive ecosystem.

While Telegram has long hosted channels dedicated to non-consensual nude images, the spread of accessible AI tools now means virtually anyone can be depicted in graphic sexual content viewable by millions. A Russian-language channel, for instance, advertised a deepfake bot with the slogan, "a neural network that doesn't know the word 'no'." In a Chinese-language channel with nearly 25,000 subscribers, men shared AI-generated videos of their "first loves" or a "girlfriend's best friend". In Nigeria, a network of channels circulates deepfakes alongside stolen intimate images.

Platform Policies and Enforcement Challenges

Telegram's terms of service explicitly prohibit "illegal pornographic content" on publicly viewable channels and bots, as well as activities deemed illegal in most countries. The platform reported removing over 952,000 pieces of offending material in 2025 and employs moderators with custom AI tools to monitor public areas. However, the investigation found cases where one channel was shut down only for another with a near-identical name to remain active, underscoring persistent enforcement gaps.

The issue has gained prominence after recent controversies, including the use of Elon Musk's Grok AI chatbot on X to generate non-consensual images, which prompted Ofcom, the UK's media regulator, to open an investigation. Despite such actions, a network of forums, websites, and apps, Telegram among them, continues to offer easy access to graphic, non-consensual content. A report by the Tech Transparency Project found dozens of nudification apps on major app stores, downloaded 705 million times in total.

Broader Implications and Legal Deficiencies

Anne Craanen, a researcher at the Institute for Strategic Dialogue, says the Telegram channels are integral to an internet ecosystem devoted to creating and sharing non-consensual intimate images, allowing users to evade the controls of larger platforms such as Google and to bypass AI safeguards. The way the material is shared and celebrated, she emphasises, reflects a misogynistic drive to punish or silence women.

The real-life consequences are devastating, leading to mental health issues, isolation, and job loss. Mercy Mutemi, a lawyer in Kenya, represents victims who have faced job denials and school disciplinary hearings due to deepfake images. Ugochi Ihe of TechHer in Nigeria highlights cases where women are ostracised by families after threats involving images from Telegram channels, noting that reputational damage is often irrecoverable.

Globally, legal protections remain inadequate. According to 2024 World Bank data, fewer than 40% of countries have laws against cyber-harassment or cyberstalking, and the UN estimates that 1.8 billion women and girls lack legal safeguards against online abuse. Campaigners say poor digital literacy and poverty, particularly in low-income countries, exacerbate these vulnerabilities.

In response, Apple and Google have removed or suspended many nudification apps, and Meta has acted against groups sharing intimate images. Yet challenges persist: Indicator has reported thousands of nudifier ads on Meta's platforms since late last year. As AI tools intensify online violence against women, robust regulation and enforcement are increasingly urgent to counter this growing crisis of digital abuse.