
Miss Americana and the Deepfake Images: Taylor Swift’s Potential Courses of Legal Action

The United States government has been slow to take the dangers of deepfake technology seriously, especially its use for nonconsensual pornography. When perpetrators recently targeted their latest victim, Miss Americana popstar Taylor Swift, they should have known better than to go after this beloved singer’s big Reputation.

The explicit deepfake images first emerged at the end of January of this year.[1] The images depict lewd and sexual acts of Swift wearing NFL Kansas City Chiefs apparel in the Kansas City stadium, purportedly in response to the recent influx of media coverage of her support for her boyfriend, Chiefs tight end Travis Kelce.[2] The original images were traced back to a community on 4chan, a message board notorious for offensive content such as hate speech, racism, and conspiracy theories, where users employ AI tools to generate such material.[3] The group of people who created the images were found to be playing “a game” to see whether they could use AI software to create lewd and violent images of famous female figures.[4] Although AI providers such as OpenAI and Microsoft employ various measures to protect the privacy of celebrities and prevent the use of their likenesses for inappropriate content, the point of these “games” is to “[challenge] circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat.’”[5]

Within a matter of weeks, the images made their way onto mainstream social media platforms, such as X.[6] In the 17 hours it took for X to suspend the user who shared the images on the platform, the sexually explicit AI images of Swift “attracted more than 45 million views, 24,000 reports, and hundreds of thousands of likes and bookmarks.”[7] Eventually, X and Swift’s army of angry Swifties (her fan base) were able to report and take down all of the offensive content, but the damage to Swift’s privacy had already been done.[8]

What are Deepfakes? 

A deepfake is the product of “deep learning” technology, such as AI, used to create “fake” copies of a person’s voice or likeness, the result of which is so seamless that it can often be difficult to distinguish from genuine footage.[9] While deepfake technology has many beneficial uses in industries such as education, film, and even retail, the wide availability of the technology enables members of the public to manipulate imagery with the intent to harm and discriminate.[10] Unfortunately, explicit AI-generated material such as the images made of Swift, and the lax enforcement of social media platforms’ protections against it, disproportionately harms women and girls.[11] In fact, 90–95% of deepfake videos are nonconsensual pornographic videos, 90% of which target women.[12] Despite the often irreparable damage deepfake images can cause to victims professionally, socially, and emotionally, those victims are frequently left without any sort of legal remedy.[13]

Possible Remedies

Neither Swift nor her team has announced whether she intends to take legal action against the creators of the deepfakes. Despite the serious repercussions deepfake technology has on its victims, few states have legislation in place that permits victims to take legal action, and federally, there is not yet a legal remedy.[14] Nevertheless, there are a few potential legal avenues Swift and other victims might have available to get the justice they deserve.

Copyright Claim

One such possibility is a copyright claim. Under the presumption that Swift owned the copyright in the images of herself that were uploaded into the AI systems, she may have a claim over the work produced. If the AI training data included copyrighted images, “this would be an improper use of copyrighted material, and perhaps, using copyrighted material for commercial purposes.”[15] It is, however, unlikely that Swift owns the copyright in these photos, so she would not be able to pursue such an action unless she collaborated with the copyright owners. Still, the threat of infringement claims may incentivize platforms to remove deepfakes in order to mitigate any damages they could be held responsible for under the Digital Millennium Copyright Act.[16]

There is also the chance that these newly created images would be permitted under the fair use doctrine of copyright law.[17] So long as the new image is transformative in nature, serves a different purpose and character of use than the original photograph, and is unlikely to interfere with the potential market for the original, the deepfakes would be protected.[18] Moreover, most of these images are likely being spread without a commercial purpose and would thus have little impact on the market.[19] It would therefore most likely be very difficult to recover under a copyright claim.

Privacy-Related Claims

Swift might have better luck under a privacy or publicity action. While the specific protections vary based on jurisdiction, these rights are typically associated with protecting a personality’s name, image, voice, and overall likeness.[20] The rights to privacy and publicity are left to the states to regulate, many of which do not have specific laws addressing deepfakes, or have statutes so narrow in scope that they may not cover the wide range of illicit content deepfakes have been known to portray. These rights, in the vein of other intellectual property rights, also typically require a commercial component for a successful claim, which remains a hindrance to deepfake legal action. [See Frazier, supra note 7; see also Roesler & Hutchinson, supra note 20.]

Many privacy related claims turn on the truth, or at the very least, the believability, of the image because that is the source of the harm.[21] However, “courts are increasingly skeptical that online readers/viewers actually believe what they see online, and that has made it harder for plaintiffs to win defamation cases.”[22]

Even if Swift were able to bring a successful claim, platforms may not be required to remove the content.[23] Social media platforms are protected by Section 230 of the Communications Decency Act.[24] To encourage platforms to let users freely post information, this section shields platform operators from liability by declining to treat them as publishers of user-posted content.[25] The Act does make an exception for intellectual property infringement posted on the platforms, but jurisdictions are in contention over whether they even recognize a right of publicity.[26] Balancing the First Amendment speech and expression rights of the users publishing the content against the privacy rights of individuals, especially public figures such as Swift, makes seeking a remedy for damages caused by deepfakes that much more difficult.[27]

Legislative Action 

Perhaps the best course of action for Swift and other victims of deepfake images would be to support new legislation targeting the issue directly. The viral nature of the Swift images caught the attention of many influential voices in Washington, including White House press secretary Karine Jean-Pierre, who stated, “[w]hile social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people.”[28]

Swift might have just been the perfect victim to finally get Congress rolling on legislation to protect future victims of deepfake images. Ben Decker, CEO of digital investigation agency Memetica, points out: “When you have figures like Taylor Swift who are this big [targeted], maybe this is what prompts action from legislators and tech companies because they can’t afford to have America’s sweetheart be on a public campaign against them. I would argue they need to make her feel better because she does carry probably more clout than almost anyone else on the internet.”[29] In fact, merely days after the images of Swift went viral, a bipartisan group of senators introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (“DEFIANCE”) Act.[30] The Act would provide victims with a civil right of action against “those who knowingly produced or possessed the image with an intent to spread it” or “‘recklessly disregarded’ that the victims did not consent to its making.”[31] The bill is backed by prominent figures in Congress, including Representative Alexandria Ocasio-Cortez.[32] Americans overwhelmingly support legislation outlawing nonconsensual deepfake pornography, with polls showing at least 84% of people in favor.[33] This is unfortunately not the first bill introduced to criminalize deepfakes,[34] but with backing from the self-proclaimed Miss Americana, it may be the last.

No person, regardless of their fame, deserves to have their name and likeness manipulated in such an offensive manner. Should Swift choose to seek legal action or use her platform to encourage others to draw awareness and support the incoming legislation, she could be capable of bringing victims something a whole lot “Better Than Revenge”: “Peace.”

Samantha Fazio

Samantha Fazio is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. She holds a B.A. in English from Siena College.