In a recent alarming development reported by CNN,1 a multinational firm fell victim to an advanced deepfake scam. The scammers staged a video conference call in which they impersonated the company's CFO and other staff members, using the ruse to deceive a Hong Kong-based finance worker into executing an unauthorized $25 million fund transfer. The incident underscores not only the immediate financial ramifications of deepfakes, but also the broader operational, reputational and legal challenges the technology poses to businesses.

While many people are aware of deepfakes and similar scams, this incident highlights how evolving business communications, such as the routine use of video calls with colleagues around the world, have made even sophisticated businesses more susceptible to deepfake scams. The implications of deepfakes are profound and multifaceted:

Financial Damage: Unauthorized transfers can lead to immediate and considerable financial losses.

Operational Disruption: Responding to a deepfake incident may interrupt business activities and generate additional expenses, compounding the financial losses.

Reputational Harm: Impersonation of key company figures could damage a company's brand in both the near and long term.

Litigation Risks: Post-incident, companies might face legal actions from stakeholders affected by deepfake-related exposure.

State legislators across the U.S. are actively working on laws to curb AI-driven content manipulation.2 Many states, including California, Texas, Florida and New York, have enacted such statutes. For now, however, these laws largely target deepfake pornography and other specific AI abuses. They reflect a bipartisan acknowledgment that action is needed, but the onus remains on companies to proactively guard their own operations and reputations. Current legal frameworks, while evolving, do not fully protect against the diverse and sophisticated ways deepfakes can affect businesses.

Given these risks, it is imperative for companies not only to develop a keen eye for potential deepfake scams, but also to understand the technological safeguards, verification protocols and employee training that may be needed. Moreover, the legal ramifications of a deepfake scam are far easier to work through ahead of time, including by reviewing the insurance coverage already in place. Here are six considerations to help determine whether you are looking at a deepfake:

  1. Assess Quality: Be wary of low-quality videos, often used to hide the imperfections of deepfake technology.
  2. Length and Detail: Short clips might be employed to avoid scrutiny; longer videos allow more time for analysis.
  3. Background Activity: Distracting backgrounds are a common tactic to divert attention from the deepfake itself.
  4. Timing of Uploads: Content uploaded outside of regular business hours may indicate fraudulent activity that exploits the company's reduced capacity to respond immediately.
  5. Communication Patterns: Be cautious if the individual in the video does not typically use the medium for such communications.
  6. Verify Source Authenticity: If possible, ensure the legitimacy of the communication's origin. Those publishing deepfakes often use anonymous accounts or accounts impersonating credible sources to lend authenticity to the deepfake.

The complexities of this digital age require companies to be vigilant and to safeguard against the cunning nature of deepfakes, which have already affected other areas of society, including elections (a deepfake of President Biden's voice was used in robocalls to voters ahead of New Hampshire's recent primary)3 and the music industry (Taylor Swift was depicted in sexually explicit deepfake images).4

In the event of a deepfake scam, it will be critical to make quick, informed decisions, and having privilege protections for the decision-making process may be important in a number of contexts. At Brown Rudnick, we have effectively navigated the complex legal, regulatory, insurance and reputational implications of such incidents.

Footnotes

1. https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html.

2. https://news.bloomberglaw.com/artificial-intelligence/state-lawmakers-target-ai-deepfakes-in-taylor-swift-aftermath?source=newsletter&item=body-link&region=text-section.

3. https://time.com/6565446/biden-deepfake-audio/.

4. https://apnews.com/article/taylor-swift-deepfake-images-x-protecttaylorswift-6e5f9d086d1923a1cf5f5cde39fc890a.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.