Recent Generative AI Developments to Consider this Fall

Overview

  • This summer's flurry of activity in the generative AI space, including lawsuits, regulatory challenges, legislative efforts and evolving technological capabilities, has impacted how AI tools are being used in the marketing and communications industry.
  • Key principles of responsible and ethical AI emerged, including goals to ensure that data collection is done with authorization, that training data is accurate and sufficiently diverse, that AI use is adequately disclosed and that AI tools do not replace human labor.
  • These principles of responsibility, transparency and ethics are likely to drive new AI development into the near future.

While summertime is frequently idealized as a time of relaxation, this past summer provided no respite in the dynamic realm of generative artificial intelligence (AI). Instead, it brought an explosion of new AI developments. From grappling with mounting legal and regulatory challenges to responding to proposed legislation and voluntary guidelines, both AI companies and those affected by their technology found themselves navigating a shifting landscape, especially in the marketing and communications industry.

AI Litigation

Following class action lawsuits filed against Stability AI, Midjourney and DeviantArt at the beginning of the year, the summer brought a fresh wave of legal actions targeting prominent players in the generative AI landscape, including OpenAI, Meta and Google. OpenAI, the maker of the immensely popular ChatGPT and DALL-E 2 platforms, found itself in the crosshairs of multiple class actions and regulatory investigations:

  • In June and July, OpenAI was named as a defendant in class action lawsuits filed by various groups of authors, including comedian Sarah Silverman, alleging that the company misused and infringed upon copyrighted literary works to train its large language model (LLM). Silverman's group of plaintiffs also filed a similar complaint against Meta in connection with its own generative AI chatbot. Other prominent authors such as George R.R. Martin and Jodi Picoult filed additional class action lawsuits against OpenAI and other AI companies in September, alleging similar claims.
  • OpenAI was also targeted in another set of class action lawsuits alleging that the company violated the privacy and data protection rights of millions of consumers. The plaintiffs alleged that, by engaging in large-scale scraping of online sources to train its model, including social media and blog pages, OpenAI obtained vast quantities of individuals' personal data, which it used without consent. A similar class action was also filed against Google, alleging that the company misused the enormous volumes of personal data and copyrighted material it had accessed to train its own LLM.

Meanwhile, an important shift occurred in one of the earliest copyright infringement cases, brought by a class of artists against Stability AI, Midjourney and DeviantArt. During a hearing in July, a federal district judge indicated that he was inclined to grant the defendants' motion to dismiss, while allowing the plaintiffs to refile the complaint in a narrower and more specific manner. The judge noted that, under the U.S. Copyright Act, the artists would need to point to specific works of art that were previously registered for copyright protection and were infringed by the defendants' generative AI tools. This development may narrow the scope of the litigation and will undoubtedly influence the more recently filed copyright cases.

A likely result is that plaintiffs will need to assert specific, narrowly focused allegations of copyright infringement by generative AI platforms of registered works, rather than broad, general claims of infringement of all their works. Meanwhile, AI companies may be emboldened by this development and less deterred by new class action litigation as these existing cases progress.

Global Regulatory and Legislative AI Updates

United States

Over the summer, the Federal Trade Commission (FTC) opened an investigation into OpenAI concerning its alleged use of individuals' personal data that it scraped from the internet without permission to train its LLM.

The FTC sent OpenAI a 20-page letter indicating it is investigating whether the company "engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers." The FTC and its chair, Lina Khan, are focused on holding AI platforms accountable for the information their technologies are trained on, stating during a House Judiciary Committee hearing in July that "there are no checks on what type of data is being inserted into these companies."

OpenAI has historically been secretive about the data collection practices behind its generative AI models, but the FTC's investigation could shine a light on these practices and the training methods that OpenAI and similar AI companies use. This investigation is the first major regulatory hurdle OpenAI has faced in the United States since introducing its generative AI platforms to the public last year, although the company, and generative AI technology overall, has faced significant scrutiny in other parts of the world.

While generative AI platforms continue to grapple with challenges related to data collection and training, there is a growing consensus within the industry regarding their training methodologies. In July, seven prominent AI companies publicly committed to a set of voluntary guidelines promulgated by the White House under the Biden administration, focused on encouraging the responsible development and deployment of advanced AI technologies. These commitments are intended to provide industry-wide guidance until legislative regulations addressing similar concerns are enacted. They prioritize safety, transparency and societal responsibility during the AI training process.

Despite their laudable goals, these White House guidelines currently lack an enforcement mechanism and apply exclusively to next-generation generative AI models surpassing the capabilities of current industry models. However, the guidelines signify that the executive branch and AI leaders are taking an important step toward acknowledging the necessity and feasibility of governmental regulation and self-regulation. These guidelines will likely continue to serve as a standard to aspire to while legislators in Congress draft and debate various proposals to legislate AI's complex issues.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.