The proposals are being made through an amendment to the Crime and Policing Bill, which is making its way through the House of Lords.
Under the plans, victims would only have to flag an image once, rather than contact different platforms separately.
Tech companies would have to block the images from being re-uploaded once they have been taken down.
The proposal would also provide guidance for internet service providers on blocking access to sites hosting illegal content, with the aim of targeting rogue websites that currently fall outside the reach of the Online Safety Act.
Women, girls and LGBT people are disproportionately affected by intimate image abuse (IIA).
A government report in July 2025 found young men and boys were largely targeted for financial sexual extortion – sometimes referred to as “sextortion” – where a victim is asked to pay money to keep intimate images from being shared online.
A parliamentary report published in May 2025 highlighted a 20.9% increase in reports of intimate image abuse in 2024.
“I saw firsthand the unimaginable, often lifelong pain and trauma violence against women and girls causes,” Prime Minister Sir Keir Starmer said, referencing his time as director of public prosecutions.
“My government is taking urgent action against chatbots and ‘nudification’ tools,” he added.
Technology Secretary Liz Kendall said: “The days of tech firms having a free pass are over… no woman should have to chase platform after platform, waiting days for an image to come down.”
The announcement comes after the government’s standoff with X in January, when the AI tool Grok was used to generate images of real women wearing very little clothing.
The function was eventually removed for users.
Legislation brought in earlier in February made non-consensual deepfake images illegal in the UK.
