Below is this week's tracker of the latest legal and regulatory developments in the United States and in the EU. Sign up here to ensure you do not miss an update.

AI Intellectual Property Update:

  • Anthropic released a paper on its updated Claude 3 family of generative AI models. On training data, the paper states:
    • "Claude 3 models are trained on a proprietary mix of publicly available information on the Internet as of August 2023, as well as non-public data from third parties, data provided by data labeling services and paid contractors, and data we generate internally. We employ several data cleaning and filtering methods, including deduplication and classification. The Claude 3 suite of models have not been trained on any user prompt or output data submitted to us by users or customers, including free users, Claude Pro users, and API customers."
    • "When Anthropic obtains data by crawling public web pages, we follow industry practices with respect to robots.txt instructions and other signals that website operators use to indicate whether they permit crawling of the content on their sites. In accordance with our policies, Anthropic's crawler does not access password- protected or sign-in pages or bypass CAPTCHA controls, and we conduct diligence on the data that we use. Anthropic operates its crawling system transparently, which means website operators can easily identify Anthropic visits and signal their preferences to Anthropic."
  • Last year, Stack Overflow became one of the first websites to announce it would charge AI giants for access to the content used to train chatbots. Now the popular Q&A service for coders has signed up its first customer—Google—in what CEO Prashanth Chandrasekar says is the start of a "meaningful" new stream of revenue. The deal is significant because it remains unclear how widely Google and other AI developers will pay for the content their AI projects require.
  • Per Bloomberg: "Reddit's blockbuster deal with Google to train artificial intelligence products on the platform's data is just the beginning of an anticipated licensing bonanza."
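  For readers unfamiliar with the robots.txt convention referenced in the Anthropic paper above: it is a plain-text file that website operators publish to signal which pages crawlers may fetch. The sketch below is purely illustrative (it is not Anthropic's implementation; the user agent string and URLs are hypothetical) and shows how a crawler can honor those signals using Python's standard urllib.robotparser:

      # Illustrative sketch only; not Anthropic's implementation.
      # Checks a site's robots.txt before fetching a page, using Python's
      # standard library. The user agent and URLs here are hypothetical.
      from urllib import robotparser

      rp = robotparser.RobotFileParser()
      rp.set_url("https://www.example.com/robots.txt")
      rp.read()  # download and parse the site's robots.txt rules

      # Fetch the page only if robots.txt permits it for this user agent.
      if rp.can_fetch("ExampleBot", "https://www.example.com/article.html"):
          print("robots.txt permits crawling this URL")
      else:
          print("robots.txt disallows crawling this URL; skipping")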

AI Policy Update—U.S.:

  • FTC Chair Lina Khan said sensitive personal data should not be used to train AI. Khan said data related to health, location, or web browsing history should be "off limits" for training AI models, adding that the FTC is working to create "bright lines on the rules of development, use, and management of AI inputs."
  • House Oversight Chair James Comer (R-KY) and Ranking Member Jamie Raskin (D-MD) introduced a bill that would codify federal governance of agency AI systems, establish new mechanisms for transparency and accountability, and consolidate and streamline other AI laws.
  • A government-led effort to craft AI rules would receive up to $10 million under a new bipartisan spending deal. A six-bill, $436 billion spending package released Sunday would provide NIST with up to $10 million to establish the U.S. AI Safety Institute, which will help implement the White House's AI executive order.
  • INCOMPAS is kicking off a new effort to quell industry in-fighting over possible licensing rules and other proposals designed to boost AI safety. The new INCOMPAS effort on AI rules will be led by Mignon Clyburn, a former Democratic commissioner on the Federal Communications Commission; Colin Crowell, managing director at the Blue Owl Group and a former longtime Twitter lobbyist; Milo Medin, who previously served as vice president of access and wireless services at Google; Robert Robbins, the president of the University of Arizona; and Robert Hale, president of communications company Granite Telecommunications.

AI Policy Update—European Union:

  • The European Parliament has scheduled its plenary vote on the proposed European Union Artificial Intelligence Act (Proposed EU AI Act) for March 13, 2024. After the plenary vote, the next step will be final endorsement of the Proposed EU AI Act by the Council of the EU (date still to be confirmed).
  • The Swedish Data Protection Authority (DPA) published a technical description of AI, machine learning, and deep learning, and provided its insights on how the three concepts relate to each other. Among other things, the Swedish DPA said that "AI is not a specific technology, but is rather defined based on distinctive features of the capacity or function."

AI Policy Update—International:

  • The Guangzhou Internet Court found that an AI company committed copyright infringement in providing AI-generated text-to-image services. The first-of-its-kind ruling places clear responsibility on the AI company, which the plaintiff argued had reproduced copyrighted images unlawfully and without permission.
  • The Organization for Economic Cooperation and Development published an explanatory memorandum on its updated definition of the concept of "AI system." The updated definition reads as follows: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.