Last month, viral AI-generated pornographic pictures of Taylor Swift circulated on X (formerly Twitter), with one post remaining for 17 hours and receiving more than 45 million views, 24,000 reposts, and hundreds of thousands of likes before the verified account was suspended for violating platform policy. The images, allegedly created using a company’s text-to-image tool Designer, originated from a challenge on 4chan. The posts spurred an explosion of comments and “Protect Taylor Swift” hashtags on X by the army of “Swifties” (the name used by Taylor Swift supporters) seeking to bury the pornographic content. Ultimately, the controversy sparked the attention of Members of Congress.

Regrettably, Taylor Swift is not the only victim of deepfake porn. Malicious actors on the internet have been targeting teen girls and creating AI-generated deepfake images at unprecedented rates. This year, for example, New Jersey high schooler Francesca Mani spoke at a news conference alongside Congressman Joe Morelle, and she explained that nonconsensual AI-generated intimate images of her, along with the images of 30 other girls at her school, were shared on the internet.

While men are victims as well, women are disproportionately impacted by the spread of AI-generated and altered intimate images. An MIT Technology Review report revealed that the vast majority of deepfakes target women, and a report from Sensity AI found that between 90% and 95% of these videos are nonconsensual porn involving women. Unfortunately, the current legal and regulatory framework in the U.S. offers victims of such abuse little recourse.

How does deepfake technology work?

Sophisticated deepfake technology trains programs to generate realistic impersonations of another person’s likeness, producing convincing fake images or videos. Advancements have made it possible for a user to simply input a written prompt and, in turn, create a convincing falsified image or video. As we reported in a blog post last year, when it comes to deepfakes, it is becoming increasingly difficult to distinguish between what is real and what is fake. Startups are working on developing detection technology, and the company with the text-to-image tool claims to have closed the loophole that allowed for the Taylor Swift deepfakes. But the question remains: will that stop bad actors from finding a workaround or a new tool to exploit? Probably not.

Federal Deepfake Legislation

Although pornographic deepfakes first emerged in 2017 on Reddit, there is still no federal legislation that specifically regulates deepfakes. What’s more, there is no federal law regulating revenge porn more broadly (though there are state-level revenge porn laws, as discussed below). The Swifties are not the only ones who think it is high time for greater protection. On February 21, 2024, a coalition of hundreds of researchers and labor activists – including Andrew Yang, Steven Pinker, and researchers at Google, DeepMind, and OpenAI – published an “open letter” calling on lawmakers to criminalize AI-generated deepfakes, warning that the images are a threat to society. The letter urges legislators to (1) fully criminalize deepfake child pornography, even when only fictional children are depicted; (2) establish criminal penalties for anyone who knowingly creates or knowingly facilitates the spread of harmful deepfakes; and (3) require software developers and distributors to prevent their audio and visual products from creating harmful deepfakes, and to be held liable if their preventive measures are too easily circumvented.

There have been several attempts at passing federal legislation to protect against AI-abuse. Here are some examples:

  • In 2017, Representative Jackie Speier drafted the Ending Nonconsensual Online User Graphic Harassment Act (“ENOUGH Act”), which would have made revenge porn a federal crime, but it died in committee. Since then, there have been a number of attempts to implement more regulation.
  • Senator Ben Sasse introduced the Malicious Deep Fake Prohibition Act of 2018, which would have made it a federal crime to create or distribute a deepfake with actual knowledge that the video is a deepfake and with intent to facilitate unlawful activity. However, this Act, too, never left committee.
  • Next, Representatives Joe Morelle and Tom Kean proposed the Preventing Deepfakes of Intimate Images Act on December 20, 2022, which seeks to criminalize the nonconsensual disclosure of AI-generated “intimate” images; this bill has not made significant progress. The Act would make it illegal to share deepfake pornography without consent. It would amend the Violence Against Women Act Reauthorization Act of 2022, adding a provision that creates a civil right of action allowing victims to sue when someone discloses or threatens to disclose a digitally created or altered intimate image or video without obtaining consent.
  • Representative Yvette Clarke’s proposed legislation, the DEEP FAKES Accountability Act, has a similar intent – to provide a pathway for victims of AI abuse to seek justice. To that end, the bill would impose digital watermark and disclosure requirements. This too has not yet passed.
  • Another proposed bill would regulate deepfakes more broadly. The Nurture Originals, Foster Art, and Keep Entertainment Safe Act (“NO FAKES Act”), announced on October 12, 2023, would create a new digital replication right and provide legal recourse against the unauthorized use of another person’s likeness to create and distribute deepfakes.

Most recently, a new bill, the Disrupt Explicit Forged Images and Non-Consensual Edits Act (“DEFIANCE Act”), was announced in the wake of the Taylor Swift posts shared on X. If passed, this proposed law would allow victims to sue the creators of deepfakes if those creators knew, or “recklessly disregarded,” that the victim did not consent to the deepfake’s making. The bill’s announcement cites a 2019 study, which found that 96% of deepfake videos were nonconsensual pornography. Moreover, the proposed act creates a federal civil remedy for victims who are identifiable in a “digital forgery,” defined as “a visual depiction created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means to falsely appear to be authentic.”

Sadly, as the above demonstrates, no comprehensive federal law addressing deepfake porn or revenge porn has made significant progress toward becoming law. Of course, one should never doubt the power of Swifties to enact change, so there is hope.

State Law and the Regulation of Nonconsensual Intimate Images

Individual states have been more successful in passing legislation that directly addresses harms caused by fake AI-generated nonconsensual pornographic images online. California, Florida, Georgia, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas, Virginia, Indiana, and Washington have each passed legislation targeting malicious actors who create and share deepfakes without consent. The state laws generally include various disclosure requirements and some include bans, which are often subject to exceptions. Some state laws require an “illicit motive,” which is a high burden for victims to prove.

For example, California passed a law in 2020 that allows deepfake pornography victims to sue those who create and distribute sexually explicit deepfake material without the victim’s consent; victims can recover up to $150,000 if the deepfake was “committed with malice.” Two years later, Florida passed a law prohibiting the dissemination of sexually explicit deepfake images without the victim’s consent; a violation is a third-degree felony carrying a maximum sentence of five years in prison, a $5,000 fine, and probation. In October 2023, New York signed S1042A into law, making it illegal to disseminate AI-generated explicit images or “deepfakes” of a person without their consent; violators face up to a year in jail and a $1,000 fine. Just this month, on March 12, 2024, Indiana Gov. Eric Holcomb signed a law criminalizing the sharing of an AI-generated nonconsensual intimate image or video. And most recently, Washington Gov. Jay Inslee signed a law that expands criminal penalties under the state’s existing child pornography laws to cover deepfake porn created by AI.

Why State Revenge Porn Laws are not a Reliable Way to Protect Victims

The problem with relying on state revenge porn laws to regulate the creation and distribution of deepfake pornography is twofold. First, most of these laws do not include language addressing computer-generated images; in fact, only about a dozen explicitly address them. Some laws, for example, may imply that the images must be of the victim’s own private body parts, which would exclude AI-generated imagery.

Second, the laws greatly vary from one state to another. Forty-eight states have banned the spread of revenge porn, but some laws focus on intent, while others focus on consent. Some laws treat violations as misdemeanors, while others treat violations as more serious crimes. Some state laws provide a private right of action, allowing victims to sue creators of fake porn, but many do not. Other states have criminalized deepfake pornography and enacted statutes that require the defendant to disseminate the material with the “intent to coerce, harass, or intimidate.” The intent element can be hard to prove.

What are the problems with the legal remedies that victims currently have?

Evidently, current federal and state laws regulating deepfakes offer victims only spotty protection that many view as insufficient. As noted above, there are serious issues with relying on state revenge porn laws. Thus, not surprisingly, revenge porn victims have turned to more traditional tort claims or, where available, brought actions under state revenge porn laws seeking civil recovery. These efforts have, to date, had little success.

Indeed, while pornographic deepfakes arrived on the scene quickly, the law in this area is progressing slowly. Even the FBI has expressed concerns about the ease with which malicious actors can find and target victims using shared media on social media, dating apps, and other online platforms. For example, an FBI report from June 2023 announced that it had received reports from victims whose images or videos were altered into explicit content. In addition to the 600+ signatories to the open letter discussed earlier, other lawmakers and law professors – including Rebecca A. Delfino – have emphasized that the law needs to catch up to the technology. In her law review article titled “Pornographic Deepfakes: The Case for Federal Criminalization of Revenge Porn’s Next Tragic Act,” Professor Delfino proposed a solution that would criminalize deepfake pornography through federal law rather than state law – a law that thus far has proven elusive in today’s gridlocked Congress.

Our Conclusions

There is a clear need to regulate deepfake pornography (and deepfakes more generally) and prevent harm. Civil penalties may be part of a solution, but some states have found that placing the burden of redress and protection on the victim is unfair. Hiring an attorney is expensive, and filing a lawsuit runs the “Streisand Effect” risk of drawing more attention to the very material the victim hopes to suppress. If the conduct is criminalized, by contrast, the government rather than the victim is named in court filings, and the victim can remain anonymous. Importantly, the government can use fact-finding methods not available to private parties, and criminal prosecution has a punitive function that should serve as a powerful deterrent, especially against judgment-proof defendants.

A review of the states’ slow, patchwork response to the proliferation of revenge porn suggests that a federal ban, whether civil, criminal, or both, would be the most effective way of addressing the harm. The call for such legislation has grown louder thanks to the actions of the not-to-be-underestimated Swifties in response to the performer’s deepfake pictures appearing on X. And state legislatures appear to appreciate that most Americans do not have a legion of devoted fans to orchestrate an extralegal response when nonconsensual lewd content of them appears.

The influential role the porn industry has played in the development of the Internet and its technologies is the topic of extensive scholarship. But, the cost of innovation should not be the safety of people online. As the country (and much of the world) grapples with how to address the risks that accompany AI permeating so much of our public and private lives, the danger generated by deepfake porn cannot be lost in the conversation. 

The post Why the Taylor Swift AI Scandal is Pushing Lawmakers to Address Pornographic Deepfakes appeared first on Global IP & Technology Law Blog.