On February 14, 2024, Deputy Attorney General Lisa Monaco announced that prosecutors would seek stricter sentences for crimes perpetrated using artificial intelligence (AI). Monaco also announced a new initiative – Justice AI – to study the effective use of AI in the justice system.

Monaco's comments are the latest confirmation that government regulators are laser-focused on both the promise and the risks associated with AI, and that innovators and companies utilizing AI should be diligent in establishing their own effective guardrails for this technology.

DOJ's double-edged sword: AI as a source of peril and promise

AI remains a top enforcement priority for the Department of Justice (DOJ), and Monaco's remarks confirmed that government regulators continue to view AI as a double-edged sword – with the ability to fuel criminal activity on the one hand and strengthen the government's enforcement efforts on the other. With respect to AI's peril, given the potential for AI to "enhance the danger of a crime," Monaco warned that prosecutors would seek enhanced sentences for crimes aggravated by the misuse of AI.

On the flip side of the AI coin, Monaco recognized AI's "promise" in supporting and strengthening DOJ's work. For example, Monaco noted that DOJ has already implemented AI to help trace the source of drugs, triage the more than one million public tips submitted to the FBI and synthesize large volumes of evidence in high-impact cases.

At the same time, Monaco emphasized that DOJ is committed to ensuring that it "applies effective guardrails for AI uses that impact rights and safety." To that end, Monaco announced the creation of Justice AI, an initiative that will convene individuals from civil society, academia, science and industry to "understand and prepare for how AI will affect [DOJ's] mission and how to ensure we accelerate AI's potential for good while guarding against its risks." Justice AI will eventually inform a report to President Joe Biden on the effective use of AI in the criminal justice system. Additionally, a new Emerging Technology Board, headed by Jonathan Mayer, DOJ's first chief AI officer, will advise DOJ on the "responsible and ethical" use of AI to investigate and prosecute crime. These initiatives follow Biden's October 2023 executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which – among other provisions – charged DOJ to "anticipate the impact of AI on our criminal justice system."

Current enforcement trends involving AI

Monaco's remarks are far from theoretical – DOJ has already been zeroing in on the use and misuse of AI. For example, in January 2024, DOJ subpoenaed multiple pharmaceutical and digital health companies to inquire about the algorithms they use to synthesize patient data and develop treatment recommendations, in order to "learn more about generative technology's role in facilitating anti-kickback and false claims violations." Also in January 2024, the US Attorney's Office for the Southern District of New York charged two men in an ongoing conspiracy to hack the accounts of a fantasy sports and betting website and resell the users' credit card and other sensitive information on the dark web. The defendants allegedly used AI-generated images to advertise their services.

State attorneys general offices are likewise focused on the misuse of AI. In February 2024, the New Hampshire Attorney General's Office issued a cease and desist order against a company that deployed AI-generated robocalls impersonating Biden's voice to discourage recipients from voting in the January New Hampshire presidential primary election. In a statement about the case, New Hampshire Attorney General John Formella stated that "AI-generated recordings used to deceive voters have the potential to have devastating effects on the democratic election process. ... The [cross-agency] partnership and fast action in this matter sends a clear message that law enforcement, regulatory agencies, and industry are staying vigilant and are working closely together to monitor and investigate any signs of AI being used maliciously to threaten our democratic process." The New Hampshire Attorney General's Office also worked closely with the Federal Communications Commission's Enforcement Bureau, which itself issued a cease and desist letter to a separate entity that allegedly originated the robocall traffic.

AI enforcement trends across federal agencies: SEC and FTC

DOJ is not the only government agency that has recently signaled a growing focus on AI. For instance, on February 13, 2024, Securities and Exchange Commission (SEC) Chair Gary Gensler delivered remarks at Yale Law School and similarly recognized that AI "opens up tremendous opportunities for humanity" but also presents challenges. Gensler explained that the SEC's role entails "both allowing for issuers and investors to benefit from the great potential of AI while also ensuring that we guard against the inherent risks."

Gensler observed that "fraud is fraud, and bad actors have a new tool, AI, to exploit the public." He went on to describe two types of harm the SEC considers when AI is used to perpetrate fraud. The first is programmable harm, which turns on whether an algorithm was optimized to manipulate or defraud the public. The second is predictable harm, which turns on whether an actor showed reckless or knowing disregard for the foreseeable risks of deploying a particular AI model. Gensler stressed that "[i]nvestor protection requires the humans who deploy a model to put in place appropriate guardrails."

Additionally, Gensler cautioned SEC registrants to carefully consider their claims and disclosures about AI. Specifically, Gensler warned against "AI washing," noting that companies and financial intermediaries "should not mislead the public by saying they are using an AI model when they are not, nor say they are using an AI model in a particular way but not do so. Such AI washing ... may violate the securities laws."

In the competition space, the Federal Trade Commission (FTC) recently launched an inquiry into generative AI investments and partnerships by tech companies, which FTC Chair Lina Khan noted "will shed light on whether investments and partnerships [involving generative AI] pursued by dominant companies risk distorting innovation and undermining fair competition."

Takeaways

In light of the increased enforcement focus on AI, companies and innovators must wield this double-edged technological sword carefully. As DOJ continues to adapt its enforcement policy to respond to the risks associated with AI, companies should be diligent in establishing their own effective guardrails. Such safety measures may include strengthening internal protocols to detect and manage AI risks, training employees on the responsible use of AI applications, and frequently testing AI systems to ensure they are fair, accurate and safe.
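Purely by way of illustration, and not drawn from any agency guidance cited above, the sketch below (in Python) shows one simple form such periodic testing could take: an automated check that a model's accuracy and a basic fairness metric (here, the gap in positive-prediction rates between two groups) stay within pre-set thresholds. All function names, thresholds and sample data are hypothetical.

    # Purely illustrative sketch: a simple automated "guardrail" test
    # checking a model's accuracy and a basic fairness metric against
    # pre-set thresholds. All names, thresholds and data are hypothetical.

    def demographic_parity_gap(preds, groups):
        """Absolute gap in positive-prediction rates across two groups."""
        rates = []
        for g in sorted(set(groups)):
            members = [p for p, grp in zip(preds, groups) if grp == g]
            rates.append(sum(members) / len(members))
        return abs(rates[0] - rates[1])

    def evaluate_guardrails(preds, labels, groups,
                            min_accuracy=0.90, max_parity_gap=0.10):
        """Return metrics and a pass/fail flag for the hypothetical thresholds."""
        accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        gap = demographic_parity_gap(preds, groups)
        return {"accuracy": accuracy,
                "parity_gap": gap,
                "pass": accuracy >= min_accuracy and gap <= max_parity_gap}

    # Hypothetical evaluation data: binary predictions, true labels, and
    # the demographic group of each record.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    labels = [1, 0, 1, 1, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(evaluate_guardrails(preds, labels, groups))
    # {'accuracy': 0.875, 'parity_gap': 0.5, 'pass': False}

A failing check of this kind would not itself establish or avoid liability; it simply illustrates how "frequent testing" can be made concrete and documented as part of a compliance program.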

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.