Taylor Swift Deepfakes

Sexually explicit AI-generated photos of singer-songwriter Taylor Swift went viral on social media in late January, prompting possible legal action and calls for legislative reform. X, formerly known as Twitter, was the main platform where the images spread.

The viral images depicted a hypersexualized Swift at a football game and were viewed tens of millions of times before X took corrective measures, removing the photos and suspending the accounts sharing them. Other AI-generated images of the singer depicted her as a Nazi, as overweight, endorsing specific politicians, and more.

X temporarily made Swift’s name unsearchable on the platform in an attempt to purge the photos and prevent further dissemination, though variations and alternate phrasings of her name remained searchable, so the photos could still be found. Swift is reportedly considering legal action against the deepfake pornography sites sharing the images.

“Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed. We're committed to maintaining a safe and respectful environment for all users,” X’s @Safety account said.

These AI-generated photos are known as “deepfakes.” A deepfake is a form of synthetic media: an image, video, or audio recording that has been altered or generated to portray something that never happened or someone doing something they did not do. The technology manipulates a person’s likeness and physical image. The term was first coined on Reddit in 2017.

“It’s horrible because so many people don’t question what they see and they believe too many things. That’s our biggest problem,” NHS Mathematics teacher Lisa Carpenter said. 

In response to the photos of Swift, a bill titled the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, or the “Defiance Act,” was introduced on January 30th by Senators Dick Durbin, Lindsey Graham, Amy Klobuchar, and Josh Hawley.

“Sexually-explicit deepfake content is often used to exploit and harass women—particularly public figures, politicians, and celebrities. For example, in January 2024, fake, sexually-explicit images of Taylor Swift that were generated by artificial intelligence swept across social media platforms. Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real. Victims have lost their jobs, and may suffer ongoing depression or anxiety. The laws have not kept up with the spread of this abusive content,” Durbin said.

This recent exploitation calls for a re-examination of the dangers of artificial intelligence and its easy accessibility. Swift was not the first victim of AI-generated pornography, and the incident raises the question: how can society safeguard individuals from AI’s misuse, and what measures should be taken to prevent incidents like this from happening again?

Swift becoming the latest target of deepfake pornography highlights the severity of the issue and the path technology is currently on, but also the possibility of change and reform.
