Marketing using influencers (including celebrities and social media personalities) is nothing new, but it has grown rapidly in recent years, commanding more attention from consumers and a larger share of marketing budgets. With influencer marketing, brands can target their products at specific audience segments in a way that is engaging, authentic and relatable.

New technology has the potential to disrupt the way in which brands use influencers and celebrities in advertising campaigns. With artificial intelligence (AI), brands can digitally manipulate videos, images and audio clips to create a convincing but fake piece of media, ie, a “deepfake”. Deepfakes are quickly becoming more sophisticated, opening up innovative and creative possibilities for brands to engage their target audiences. This article explores some of the opportunities and challenges of using deepfakes in advertising and sets out our thoughts on the key legal considerations for brands.

What is a deepfake?

Deepfakes are pieces of media that have been digitally manipulated to replace one person's likeness convincingly with that of another. They are created using a form of AI known as “deep learning”. By way of example, the deepfake TikTok account @deeptomcruise shows someone who appears to be actor Tom Cruise dancing, playing golf and doing a magic trick. The likeness to the actor is uncanny, but the person on camera is in fact actor and impersonator Miles Fisher, whose image has been digitally manipulated using deepfake technology.
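For readers curious about the underlying mechanics, the original face-swap deepfakes were built on a simple deep-learning design: one shared encoder and a separate decoder per identity. The sketch below is purely illustrative rather than production deepfake code (it uses PyTorch, toy layer sizes and random tensors standing in for aligned face crops), but it shows the core idea: encode a frame of person B, then decode it with person A's decoder to render A's likeness with B's pose and expression.

```python
# Purely illustrative sketch of the "shared encoder, per-identity decoder"
# autoencoder design behind early face-swap deepfakes. Layer sizes are toy
# values and random tensors stand in for aligned face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Compress a 3x64x64 face crop into a shared latent representation
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Reconstruct a face from the shared latent; one decoder per identity
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

# At inference time, a frame of person B pushed through person A's decoder
# yields person A's likeness with person B's pose and expression.
frame_of_b = torch.rand(1, 3, 64, 64)  # stand-in for an aligned face crop
swapped = decoder_a(encoder(frame_of_b))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```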

Much of the attention on deepfakes has focused on their potential for nefarious use, particularly as the first known deepfake videos posted online in 2017 featured celebrities' faces swapped onto the bodies of porn performers, and deepfakes have more recently been used to spread fake news. As the technology becomes more convincing, and the public becomes more aware of it, legitimate use cases are being explored.

Use of deepfake technology by brands

Deepfakes can present a number of opportunities for brands wishing to extend their reach in innovative and exciting ways and cut marketing costs:

  • The fashion brand Zalando's #whereveryouare campaign featured deepfakes of Cara Delevingne. Using deepfake technology and footage of Delevingne, the brand created 290,000 localised ads for towns and villages across Europe – with no need for the model to sit for hours in one location, learn several languages, practise pronunciation or run through multiple takes. Although influencers are expected to charge a premium for use of their image in this way, deepfake technology can reduce the time, money and effort it takes to produce an ad campaign. The fashion and beauty industries have also pointed to other potential use cases, such as using deepfake technology to show products on models with different skin tones, heights and weights when showcasing new collections to consumers.
  • Mondelez recently won the Grand Prix in Creative Effectiveness at the Cannes Lions Festival of Creativity for its Shah Rukh Khan-My-Ad campaign for the Cadbury brand, which used a deepfake of the Bollywood star, created with advertising agency Ogilvy Mumbai, to help small businesses. The campaign allowed local shop owners to create a free personalised ad in which the Khan deepfake talked about their stores, based on information those owners provided.
  • Russian telecoms company MegaFon used a deepfake of Bruce Willis starring alongside Russian actor Azamat Musagaliev, without Willis ever appearing on set. The ad sparked a flurry of media attention claiming that Willis had sold his image rights to Deepcake, a company specialising in deepfakes – a claim denied by Willis.

Legal considerations

With these new and exciting use cases, brands should consider the following when looking to use deepfakes in advertising:

  1. Advertising rules: Brands should be careful not to breach relevant advertising rules through their use of deepfakes. While there are no specific rules on deepfakes under the UK non-broadcast advertising code (CAP Code) or the UK broadcast advertising code (BCAP Code), the UK Advertising Standards Authority (ASA) has reminded brands that ads using deepfakes will need to comply with existing advertising rules, such as the rules on misleading advertising and, in particular, the rules concerning testimonials and endorsements.
    1. Misleading and exaggerated ads: Ads must not materially mislead or be likely to do so (Rule 3.1 CAP Code; Rule 3.1 BCAP Code), including by exaggerating the capability or performance of a product (Rule 3.11 CAP Code; Rule 3.12 BCAP Code). In the beauty and fashion industries, the ASA's guidance on the use of pre- and post-production techniques in ads for cosmetics offers a helpful analogy for the use of deepfakes. That guidance stresses that these technologies and techniques should not be used to materially exaggerate the effect a product can achieve (eg, in an ad for mascara, lash inserts should not be used to create a lengthening or volumising effect beyond what the mascara itself can achieve). The ASA's guidance on the use of filters on social media is also relevant. In it, the ASA points to two rulings against ads from Skinny Tan Ltd and We Are Luxe Ltd, both of which featured Instagram stories by influencers promoting tanning products. In both cases, the influencers had applied filters which altered their skin tone and complexion, making their skin appear darker than it would have been without the filters. The ASA found that, because the filters were directly relevant to the performance of the products being advertised, they were likely to have exaggerated the efficacy of the products and materially misled consumers. As with these techniques and technologies, brands using deepfakes should take care not to exaggerate or misrepresent, even unintentionally, the effect of a particular product.
    2. Testimonials and endorsements: Testimonials or endorsements used in ads must be genuine, and ads must not falsely claim or imply that the marketer is acting as a consumer (Rules 2.3 and 3.45 CAP Code; Rule 3.45 BCAP Code). Brands using deepfakes should be careful not to claim or imply that the influencer who is the subject of the deepfake has used a product or service when this is not the case, and should ensure that any testimonials and endorsements they use are real and accurately reflect what the person actually said about the product or service being advertised.
  2. Intellectual property: English copyright law – which has traditionally existed to protect creative works created by humans – is still playing catch-up with new AI technologies. Separately, unlike in the US, image rights are not formally recognised in the UK. English case law does, however, offer protection where an individual's image is commercially misappropriated. For example, in Fenty v Arcadia Group, UK fashion brand Topshop featured the singer Rihanna on one of its t-shirts without first obtaining her consent. The UK High Court held that a substantial number of consumers would be confused or deceived into believing that the t-shirt had been endorsed by Rihanna and would have bought it for that reason, and that this would damage her goodwill. With this case in mind, we recommend that any brand using a deepfake of an influencer puts a contract in place with that influencer. The contract should set out clearly how the deepfake will be used for the brand campaign, which party will own the intellectual property rights in the deepfake, and how (if at all) the deepfake may be used in the future.
  3. Defamation: If the contract with the influencer doesn't set out in sufficient detail how the deepfake will be used, or the brand uses the deepfake in a way the contract does not contemplate, the influencer may not be best pleased and the relationship between the influencer and the brand can quickly turn sour. If the deepfake depicts the influencer saying or doing something that causes serious harm to that person's reputation, the brand is at risk of a claim for defamation.
  4. Data protection law: As a deepfake is created using an influencer's personal data, including their image and voice, the brand creating the deepfake must comply with data protection law and should ensure that its contract with the influencer contains appropriate data protection provisions.
  5. Upcoming AI transparency requirements: After nearly three years of discussions and negotiation, political agreement on the EU's AI Act – the first major comprehensive regulation of artificial intelligence – was finally reached in December 2023. Under the latest version of the EU AI Act, creators of deepfakes are required to “disclose that the content has been artificially generated or manipulated… Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, [these transparency obligations] are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work”. The AI Office is expected to put together codes of practice providing further guidance on the labelling of artificially generated or manipulated content. The transparency obligations aim to give consumers knowledge about the content they are encountering and make them less susceptible to manipulation. The AI Act will enter into force shortly after the legal text has been finalised and published in the EU's Official Journal, which may not be for several months. Its provisions will apply in a phased fashion, depending on the risk associated with the AI system in question, with the most stringent requirements, concerning prohibited AI practices, applying six months after the Act enters into force. It is worth noting that other jurisdictions have already introduced legislation targeting deepfakes. For example, in 2019 the Chinese government introduced regulations prohibiting the distribution of deepfakes without a clear disclaimer that the content had been artificially generated. For now, regulations in other jurisdictions, including the UK and the US, have primarily targeted the use of deepfakes with malicious intent, eg, deepfake pornography and deepfakes of political candidates. It remains to be seen whether these jurisdictions will follow the EU's lead in due course; we expect the UK's Department for Science, Innovation & Technology to launch a call for evidence on AI-related risks to trust in information, including deepfakes, later this year.

All of these legal risks should be addressed in the production and publication of the ad copy and the contract with the influencer. If you would like to know more about how you can deal with the risks of using deepfakes in your advertising, or contracting with influencers and celebrities more generally, please do reach out.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.