Deepfakes, sophisticated AI-generated videos capable of seamlessly grafting a person's face and voice onto another body, are stark examples of the challenges posed by AI technologies. Deepfakes are hyper-realistic fabrications that blur the line between truth and fiction, presenting a formidable threat in the realm of misinformation. They also carry political implications: painting political leaders in a false light, spreading false information, and stirring up opposition. The ability to create deepfakes has become remarkably accessible, and their emergence threatens anyone with an image of themselves on the internet.
But how do we respond? This blog post provides an overview of some of the legal avenues for meeting deepfakes head-on, along with some of the state-level responses so far.
Navigating the Legal Maze of Deepfakes
Deepfakes are far more than a technological novelty; they enter complex legal territory when individual rights are trampled. Though no federal law currently addresses the topic, various state laws and legal theories may help to rein in deepfakes:
- Right of Publicity: In numerous jurisdictions, individuals hold a right of publicity, affording them control over their image and likeness. Deepfakes that portray individuals in compromising or false scenarios may infringe upon this right and cause reputational harm. Bringing a legal action for violating publicity rights is one avenue to stop these unauthorized portrayals.
- Defamation: Deepfakes can serve as conduits for spreading malicious falsehoods, tarnishing reputations, and inflicting emotional distress. Victims of deepfake-enabled defamation may bring a defamation claim seeking compensation for the damage caused.
- Privacy Violations: Deepfakes are also intrusive, capable of breaching the sanctity of individuals’ private lives or exploiting their identities for nefarious purposes. State-based privacy protections provide another avenue for curtailing deepfakes.
- Copyright Infringement: Deepfake creation commonly incorporates copyrighted material without consent, which could constitute copyright infringement. Copyright owners whose creative works appear in deepfakes could bring a claim for infringement, and a takedown notice under the Digital Millennium Copyright Act is one way to have such content removed quickly from social media.
- Personality Rights and Trademark Infringement: Beyond the realm of publicity rights, certain jurisdictions recognize broader personality rights that encompass an individual's unique traits and characteristics, and deepfakes that appropriate those traits can infringe them. Deepfakes that mimic brand imagery or spokespersons can also encroach upon trademark rights by creating consumer confusion.
- Harassment: Deepfakes could also constitute cyberstalking or harassment, conduct that many states prohibit.
Navigating the Legal Response
The legal landscape surrounding deepfakes is evolving, with states picking up the slack left by the absence of federal action. California and Texas, for example, have enacted stringent measures to combat deepfakes that could influence elections or that constitute nonconsensual pornography. Similarly, New York passed a law creating a legal cause of action against the unlawful publication of deepfakes. Without comprehensive federal legislation, however, this varied legal patchwork is likely to continue, with different treatment from state to state.
Complex Terrain
With the ability to create convincing media that spreads false information, presents offensive portrayals of unsuspecting individuals, and stirs false political sentiments during election years, AI-driven deepfake technology has the potential to inflict severe damage on political and social realities.
Without federal legislation, responding to deepfakes requires applying a multitude of state laws. Although remedies do exist, locating the creators of deepfakes can be incredibly difficult, and in the interim social media companies are best positioned to locate and remove such content for violating their terms and conditions. Even when such content is removed, however, the damage may already be done, as videos can go viral before they are taken down. What responsibility these companies have to police their own content, and what obligation, if any, they have to respond to deepfakes, will continue to play out.