In April, State Farm debuted a commercial that appeared to feature footage from 1998 of ESPN's Kenny Mayne making startlingly, even eerily, accurate predictions. These "predictions" included using the phrase "it's lit" and referencing the now-infamous 2012 "butt fumble" by the New York Jets. Fans were generally delighted by the ad, finding State Farm's use of technology both entertaining and clever.

The ad was generated using "deepfake" video technology. Deepfake videos are created using neural networks trained on real footage of people, usually celebrities, which can then be used to generate new videos with the celebrities as the stars. Although these super-realistic deepfakes allow for exciting creative possibilities, the blurring of reality and fiction created by these videos also poses a number of dangers if used inappropriately.

The State Farm ad, with its obvious anachronisms, was easily identifiable as a manipulated video. This is not always the case. Creating a deepfake video does not require expensive technology or advanced skills; the average person can produce one in a matter of hours using just a computer and some relatively basic software. With the ability to create videos of any person doing or saying anything comes a high potential for abuse. The term "fake news" takes on a whole new meaning, for example, when a video can be made to show a world leader declaring war or announcing a new policy.

Nor is the danger presented by deepfakes limited to public figures. In fact, the vast majority of deepfake videos available online today are pornographic. Many are created using images of celebrities, while some are created using personal contacts as a form of "revenge porn." For the individuals affected by these videos, legal recourse may be difficult as the law is just beginning to catch up with the technology.

To combat the risk posed by deepfakes, California enacted two new laws, which went into effect at the beginning of 2020.

California Elections Code Section 20010 (AB 730)

California Elections Code Section 20010(a) prohibits a person, committee, or other entity, within 60 days of an election, from producing, distributing, publishing, or broadcasting with "actual malice" campaign material that is "materially deceptive audio or visual media of the candidate with the intent to injure the candidate's reputation or to deceive a voter into voting for or against the candidate," unless the media includes a disclosure stating that it has been manipulated. Materially deceptive audio or visual media includes "an image or audio or video recording of a candidate . . . that has been intentionally manipulated in a manner such that the image or audio or video recording would falsely appear to a reasonable person to be authentic and would cause a reasonable person to have a fundamentally different understanding or impression of the expressive content of the image or audio or video recording."

A candidate who is the subject of a deepfake in violation of the law may seek injunctive or other equitable relief as well as general or special damages and attorneys' fees. In addition, any registered voter may seek a temporary restraining order and an injunction. The law includes a sunset provision and is set to expire in January 2023 unless the legislature takes further action.

California Civil Code Section 1708.86 (AB 602)

California Civil Code Section 1708.86 creates a private cause of action for individuals depicted in sexually explicit material against any person who (1) creates and intentionally discloses the material and knew or reasonably should have known that the depicted individual did not consent to its creation, or (2) intentionally discloses material the person did not create, knowing that the depicted individual did not consent to its creation. A depicted individual "means an individual who appears, as a result of digitization, to be giving a performance they did not actually perform or to be performing in an altered depiction." In addition, a depicted individual may rescind consent via written notice unless the individual was given at least 72 hours to review the terms of the agreement before signing it or the agreement was signed by an authorized representative.

A prevailing plaintiff may recover disgorgement of profits, economic and noneconomic damages, or statutory damages of up to $150,000 for acts committed with malice.

Implications of California's Deepfake Laws

The practical implications of these two new laws are not yet known. Both will have to withstand First Amendment challenges. There is some concern that the strong protections afforded to political speech will make it difficult to enforce Cal. Elec. Code Section 20010, and that the law may have the effect of chilling free speech.

At the same time, both measures may do little to address the damage caused by deepfakes. A deepfake video of a candidate or individual may be widely circulated before either is able to take any kind of action. Although websites such as Facebook and Twitter have systems in place for monitoring and removing certain types of deepfakes, ever-evolving deepfake technology can make it difficult to quickly identify these materials. Moreover, a victim of a sexually explicit deepfake video may find little solace in an injunction or monetary damages after the video is public and widely circulated.

As more deepfakes are certain to emerge, further laws will likely be needed to keep pace with this rapidly developing technology. For example, we may soon see deepfakes generated by artificial intelligence in real time, making them even harder to identify. Thus, at a minimum, the law could and should require that deepfakes be identified as such, with serious ramifications for creators who fail to do so.

Originally published by Los Angeles Daily Journal

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.