Last Updated on January 31, 2025 by Steven W. Giovinco
The recent controversy surrounding the UK’s Channel 4 documentary Vicky Pattison: My Deepfake Sex Tape highlights a troubling intersection between AI-generated content, consent, and online reputation damage.
By including deepfaked imagery of Scarlett Johansson in lingerie, created without her consent, Channel 4 may have not only crossed ethical boundaries but also violated UK law. Legal experts argue that even in a documentary meant to raise awareness, broadcasting nonconsensual AI-generated imagery risks amplifying the very harm it sets out to expose.
AI, Consent, and Ethical Boundaries in Media
The growing accessibility and variety of AI tools have made it easier than ever to create hyper-realistic false images, videos, and audio. Exposing this in a documentary certainly raises awareness, yet using nonconsensual deepfakes to do so carries its own ethical implications. Where is the line between responsible journalism and unintentionally perpetuating the harm caused by deepfake abuse?
Scarlett Johansson’s Long Battle Against Deepfake Exploitation
Scarlett Johansson has been outspoken against deepfakes since 2018, calling them “demeaning” and warning of the lack of control over one’s own image. Despite this, her likeness was again used without consent, exacerbating the very problem she has fought against. Johansson was among the first celebrities to experience deepfake sexual abuse, and this repeated use underscores how little progress has been made in protecting people from AI-generated harm.
The Reputation Crisis: How AI-Generated Content Harms Individuals
From an online reputation management standpoint, this points to a growing crisis. Individuals, celebrities and everyday people alike, are increasingly vulnerable to AI-generated images, videos, and other reproductions. Once shared, these manipulated media can cause lasting online damage that is difficult to repair and may be impossible to remove entirely. Deepfake technology is now being weaponized against private individuals as well, leading to reputational harm, harassment, and emotional distress.
The Need for Stronger Protections and Accountability in AI Development
As these tools become more sophisticated, the need for stronger protections against AI-generated abuse grows. Raising awareness is certainly laudable, but doing so by broadcasting nonconsensual imagery can normalize or even amplify the damage caused by deepfakes and related reproductions. Instead, perhaps the focus should be on accountability: pressing platforms and AI developers to take responsibility, and lawmakers to create stricter regulations preventing the unauthorized use of someone’s likeness.
Beyond PR: Safeguarding Identity in the Age of AI Fabrication
This issue serves as a reminder that managing and protecting online reputations goes beyond traditional PR; it’s about safeguarding identity, dignity, and personal security in an era when AI can fabricate reality. As legal discussions evolve, it’s clear that media outlets, content creators, and technology companies must bear at least partial responsibility for preventing the spread of AI-generated exploitation. Awareness alone is not enough; real action is needed to prevent further violations of digital identity and personal autonomy.