Generative artificial intelligence has opened the door to creative possibilities like never before.

The ability to produce visual or audio compositions simply by typing a text prompt into an AI tool allows users of all skill levels to create new content, including images or recordings that impersonate others by mimicking their voice or visual likeness.

In the U.S., while no federal right of publicity exists to protect an individual's name, likeness or other recognizable aspects of one's persona from being used by another for commercial purposes such as advertising, a majority of states recognize such a right.

State right of publicity laws, however, typically contain a carveout to permit use of an individual's name or likeness in expressive speech, such as television programs, news articles and films.

As discussed below, a recent lawsuit involving an AI-generated impersonation of the late stand-up comedian George Carlin — which comes amid rising calls for the creation of a federal right of publicity and expanded state law protections — may illustrate how courts will approach right of publicity cases in the time of AI.

Main Sequence Ltd. v. Dudesy LLC

The estate of George Carlin — composed of Main Sequence Ltd. and the estate's executor Jerold Hamza — filed suit in January in the U.S. District Court for the Central District of California against Dudesy LLC, a company operating a website and podcast, over its creation and publication of an AI-generated podcast allegedly impersonating Carlin's voice and comedic style.

The podcast at the heart of this case, "George Carlin: I'm Glad I'm Dead (2024)," is alleged to have been entirely written, created and controlled by an AI program called Dudesy AI.

In addition to Dudesy, the complaint names two performers on the Dudesy podcast, as well as several John Doe defendants alleged to have created or contributed to the creation, production and sponsorship of the comedy special at issue.

According to the complaint, the defendants created the Dudesy podcast through their unauthorized use of Carlin's copyrighted works.

Specifically, the plaintiffs allege that the defendants ingested five decades of Carlin's original stand-up comedy routines — to which the plaintiffs claim ownership — into the training database of Dudesy AI, thereby allegedly making unauthorized copies of the copyrighted works, upon which Dudesy AI then created the podcast.

The defendants then allegedly "created a script for a fake George Carlin comedy special and generated a sound-alike of George Carlin to 'perform' the generated script."

The plaintiffs assert that such activities constitute copyright infringement, as the defendants used their "copyrighted works for building and training a dataset for purposes of generating an output intended to mimic the plaintiffs' copyrighted work (i.e., Carlin's stand-up comedy)."

In addition, the plaintiffs assert that the AI-generated podcast misappropriates George Carlin's name, image and likeness in violation of Carlin's right of publicity under California's statutory and common law.

Generally speaking, the state of California protects against unauthorized uses of a person's name or likeness for commercial and certain other exploitative purposes, both via statute and common law.

Under Section 3344 of the California Civil Code, any person who knowingly uses another's name, voice, signature, photograph, or likeness in connection with products or merchandise, or for advertising purposes, without consent is liable for damages.

Separately, California has a statute — Section 3344.1 of the California Civil Code — protecting posthumous rights of publicity, which is asserted by the plaintiffs in Main Sequence.

The posthumous right lasts for 70 years after death, and is considered a freely transferable, licensable and descendible property right. This statute includes an exemption, however, for any uses in a "play, book, magazine, newspaper, musical composition, audiovisual work, radio or television program, single and original work of art, work of political or newsworthy value," or an advertisement for any of these works.

According to the complaint, before the AI-created podcast was made available in January to the public on the Dudesy podcast's YouTube channel, the defendants promoted the podcast by releasing AI-generated images on social media designed to look like Carlin.

The plaintiffs argue that the Dudesy special was used to generate advertising revenue for the defendants, sell merchandise, boost subscriptions to a paid service called Dudesy+, and publish further videos to profit from the media attention garnered by the special.

The plaintiffs seek damages as well as preliminary and permanent injunctive relief. As of this writing, the defendants have not filed an answer.

Legislative Efforts to Regulate AI-Generated "Digital Replicas"

The Main Sequence case comes at a time when Congress and states are determining whether changes to existing right of publicity laws are necessary in light of AI advancements.

At the federal level, a discussion draft of the Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act, which at the time of this writing has not been formally introduced in the U.S. Senate, has garnered significant attention for its attempt to protect the voice and visual likeness of individuals.

The NO FAKES Act would give individuals "the right to authorize the use of the[ir] image, voice, or visual likeness" in a "digital replica," defined as a computer-generated representation of their image, voice, or visual likeness in a sound recording or audiovisual work in which that individual did not actually perform or appear.

Similarly, in the U.S. House of Representatives, the No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act, which has been formally introduced, would establish a federal framework to prohibit the unauthorized public distribution of a "digital voice replica or digital depiction" of an individual's voice or likeness.

While such legislation remains pending before Congress, the state of Tennessee recently enacted legislation to protect against the unauthorized use of an individual's voice in light of the ability to create AI-generated digital replicas.

The Ensuring Likeness Voice and Image Security, or ELVIS, Act updates existing Tennessee law to, among other things, protect an individual's "voice," and create liability for any person publishing, performing, or publicly distributing an individual's voice or likeness without authorization.

Though the ELVIS Act was first announced only in January, Tennessee Gov. Bill Lee signed the bill into law less than four months later.

As more states and Congress deliberate whether to enact or expand right of publicity laws to address AI and the creation of digital replicas, the existence of cases like Main Sequence may further encourage such legislation.

At the same time, rising calls for the creation of a federal right of publicity and expanded state law protections may influence whether courts approach right of publicity cases differently in the time of AI.

The Main Sequence case may illustrate just how those interests are balanced in a court of law.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.