Tech Giants Face Fines for Non-Consensual Intimate Images Under New UK Law
The UK government has tabled a significant amendment to the Crime and Policing Bill, imposing stringent new regulations on technology firms. This legislative move mandates the removal of non-consensual intimate images from online platforms within 48 hours of being reported. Failure to comply could result in substantial financial penalties or services being blocked entirely within the United Kingdom.
Prime Minister's Stance on Online Violence
Prime Minister Sir Keir Starmer has declared this initiative as the latest step in the "21st century battle against violence against women and girls" in digital spaces. He has vowed to put technology companies "on notice," emphasising a zero-tolerance approach to online abuse.
Sir Keir stated: "As director of public prosecutions, I witnessed first-hand the unimaginable, often lifelong pain and trauma that violence against women and girls causes. As Prime Minister, I will leave no stone unturned in the fight to protect women from violence and abuse. The online world is the front line of this 21st century battle. That’s why my government is taking urgent action against chatbots and 'nudification' tools. Today we are going further, putting companies on notice so that any non-consensual image is taken down in under 48 hours. Violence against women and girls has no place in our society, and I will not rest until it is rooted out."
Government and Regulatory Measures
The Department for Science, Innovation and Technology (DSIT) has outlined that the government aims to ensure victims only need to report an image once for it to be removed across multiple platforms. Additionally, reported images would be automatically deleted if there is any attempt to re-upload them, streamlining the protection process for affected individuals.
Technology Secretary Liz Kendall asserted that the days of tech firms "having a free pass are over." She explained: "No woman should have to chase platform after platform, waiting days for an image to come down. Under this government, you report once and you’re protected everywhere. The internet must be a space where women and girls feel safe, respected, and able to thrive."
Alex Davies-Jones, Minister for Violence Against Women and Girls, added that the legal change means "tech platforms can no longer drag their feet" in addressing such online abuse and harmful content, signalling a shift towards more proactive corporate responsibility.
Broader Regulatory Context and Future Plans
The government has indicated that communications regulator Ofcom is considering plans to classify intimate images shared without consent in a similar way to child sexual abuse and terrorism content. This classification would involve digitally marking such material, enabling automatic removal if someone attempts to repost it, thereby strengthening preventative measures.
DSIT also announced it will publish guidance for internet service providers on blocking access to sites hosting non-consensual intimate images, specifically targeting "rogue websites" that may fall outside the scope of the Online Safety Act. This move aims to close existing loopholes and strengthen enforcement capabilities.
Recent Developments and Industry Impact
This regulatory push follows recent government vows to close legal loopholes that have allowed chatbots to create deepfake nude images, with further curbs on social media platforms planned. The initiative comes in the wake of a public dispute between ministers and Elon Musk earlier this year, after his Grok AI chatbot—integrated into the social media platform X—was widely used to generate fake nude images of women, highlighting the urgent need for such legislative action.
The combined efforts of legislation, regulatory oversight, and corporate accountability underscore a comprehensive approach to combating online violence and protecting individuals in the digital age.