61% of women in the UK are concerned about becoming a victim of deepfake pornography, a new study reveals.
New research released by cybersecurity provider ESET today found that 61% of women in the UK were concerned about becoming a victim of deepfake pornography.
These concerns aren’t unfounded either, with the study also finding that 9% of women surveyed have either been a victim themselves or know someone who has.
Unfortunately, deepfake porn isn’t just a challenge in the UK but across the world. In fact, according to Home Security Heroes, 95,820 deepfake videos were available online, and 98% of them were categorized as deepfake porn.
What’s notable about ESET’s research, in particular, is that it comes just months after the passage of the Online Safety Act, which made it a criminal offense to share deepfake pornography without consent.
Key Takeaways
- 61% of women in the UK are worried about being victims of deepfake pornography, reflecting a significant societal concern.
- 98% of deepfake videos found online are categorized as pornographic.
- Deepfake pornography can have severe consequences, including mental health challenges and reputational damage, and victims often lack the resources to combat its spread effectively.
- While some countries, like the UK, have criminalized the sharing of deepfake pornography, regulatory responses vary around the world.
- Individuals are encouraged to advocate for stricter regulations and to report deepfake pornography to both social media platforms and law enforcement.
The Real Impact of Deepfake Pornography
ESET’s research found that two in five (39%) individuals believe that deepfake pornography is a significant risk when sending intimate content, yet a third (34%) of adults have sent such content anyway.
The survey collected data from over 2,000 British citizens and found that 57% of under-18s are also concerned about becoming a victim of deepfake pornography.
While the UK has criminalized the creation of deepfake porn images, this legislation is not a silver bullet, although it is a step in the right direction.
Jake Moore, global cybersecurity advisor at ESET, told Techopedia via email:
“Although criminalizing the creation of deepfake pornography may not solve the problem, it helps send a clear message to people (in particular young people) that it is extremely damaging to all those involved and could significantly impact the creator as well.”
The precise damage caused by deepfake porn is difficult to quantify, as many victims may not come forward. Nevertheless, according to My Image My Choice, an organization seeking to address intimate image abuse, deepfake porn can be “life-shattering” and contribute to mental health challenges.
While high-profile public figures like Taylor Swift have fallen victim to NSFW deepfaked images, which were shared across X and viewed 47 million times, most victims are everyday young women who don’t have a large platform or the resources to fight for their removal.
At the same time, the accessibility of AI-driven image and video creation tools has made it easier than ever to create such images. All a bad actor needs to do is feed a photo of the victim into an AI image or video generator to start producing synthetic content.
Moore added:
“Large companies offering deepfake technology with off-the-shelf products are well aware of the issue and have put measures in place to reduce and stop the creation of pornographic material on their platforms.
However, AI software can be generated by anyone with enough time, money, and training data — plus there are underground alternatives without the same morals, so the problem becomes more difficult to police.”
A Look at Current Deepfake Regulations and Protections
As it stands, the regulatory landscape for addressing non-consensual deepfaked images is fragmented. While the UK has criminalized the sharing of such images, the U.S. has no federal law against it, though individual states are moving toward adding protections for victims.
For instance, the Governor of Illinois signed legislation enabling victims of deepfaked NSFW images to sue for damages.
Other countries, like India, lack specific laws against these types of images but could offer some protection under existing regulations, such as the Information Technology Act 2000, which allows for up to three years’ imprisonment or a fine for capturing and sharing an individual’s image in mass media.
In terms of distribution channels, while mainstream porn and social media sites have protections against deepfaked images, these have proven ineffective.
Despite banning this type of content, Pornhub has been found to feature deepfaked videos of celebrities in the past. Likewise, Facebook and Instagram have come under fire for allowing deepfaked content of actress Jenna Ortega.
Given the current legal landscape and the content moderation practices of porn and social media sites, these types of images will likely remain in circulation for the foreseeable future.
What Can Be Done About Non-Consensual AI Content?
At the moment, besides advocating for regulators and social media platforms to prevent the spread of non-consensual AI-generated images and videos, there is little individuals can do to protect themselves from this type of content.
On paper, users could be cautious about what images they share online — but given the importance of maintaining an online presence both professionally and socially, this doesn’t appear to be a viable option for most people (it also falls under the category of victim blaming).
Our guide on ways to protect yourself against deepfakes may also be helpful.
As an imperfect solution, ESET recommends setting social media accounts to private to limit who has access to your images and reporting deepfake pornography whenever you encounter it, both to social media providers and law enforcement.
That being said, it is important to highlight that being a victim of deepfake porn isn’t due to the victim posting the wrong photos or having the wrong privacy settings — it’s a consequence of bad actors choosing to create and share synthetic NSFW images without consent.
The Bottom Line
Deepfakes are here to stay. If you want to help prevent the spread of this type of content, the best way to do so is to use your voice to put pressure on regulators and popular websites to restrict its distribution.
The less legal ambiguity there is, the less opportunity malicious individuals have to harass victims with impunity.