Whether or not you are a TikTok user, you have likely heard about how the platform allows creators to combine music clips with original content to create viral videos. The ability to leverage an extensive music database has, in large part, allowed TikTok to become one of the most used social media platforms in the world, and its influence on the global music business has been monumental. Less noticed, however, is how TikTok has increasingly blurred the lines between authentic (i.e., human) content creation and AI-generated content. By all outward appearances, TikTok seems to be turning a blind eye to AI-generated content proliferating across the platform over the past year and, in some cases, has even provided tools that encourage the creation of more AI images and videos, such as the Bold Glamour filter and the TikTok Creative Center. To TikTok's credit, it updated its Community Guidelines to require creators to label their "[s]ynthetic and manipulated media that shows realistic scenes" so users are not confused or misled.1 These Guidelines may have only a limited effect, however, because the policy relies on creators to decide whether to apply the labels, and TikTok may give creators more latitude when the "expressive value" of the content outweighs the individual harm, especially in cases involving public figures.2 Yet when it comes to music, TikTok has historically relied on licensing deals with music groups, individual artists, and studios to integrate music into the platform. AI, however, is changing that too.

Recently, one of the largest music groups, Universal Music Group ("UMG"), publicly revealed an ongoing contract dispute with TikTok. The parties failed to negotiate a renewal before their existing agreement expired on January 31, 2024, resulting in a full termination of the contract. In an open letter, UMG shared the details of the dispute, which center in large part on UMG's interest in TikTok strengthening its guardrails around the use of AI-generated music. Specifically, UMG argues that AI-generated music "massively dilutes the royalty pool" for human artists and that TikTok has been dismissive of adopting alternative royalty models, including the "artist-centric" model that UMG CEO Lucian Grainge proposed in 2023. TikTok, for its part, has responded that UMG is putting "greed" above the interests of artists and songwriters3 and promptly removed UMG's music library on February 1, 2024. What happened next for content creators was striking: many of their past clips went silent, and their music options for new clips were drastically reduced. One popular TikToker, Natalie Rose, described feeling "freaked out" and "panicked" that her sounds were no longer available, but said she is working to make the audio available again.4 And while UMG and other music groups are validly concerned that AI tools can easily replicate their artists' songs5 or, as in Drake's case, falsely attribute music to them,6 removing these sounds altogether is a 'nuclear option' that limits the ability of TikTok and its creators to generate more views and revenue.7

The UMG-TikTok dispute highlights four key considerations for businesses heading into 2024 regarding AI use in the workplace:

  1. Vendor Management. AI itself is not a novel technology (consider IBM Watson, first commercialized in 2013), but the proliferation of vendors marketing AI tools is. For example, marketing firms may perceive a competitive advantage in advertising their AI tools as "IP liability-free,"8 which sounds enticing, but such offers should be thoroughly vetted. Companies should continue to employ strong vendor management checklists to assess AI use before venturing deeper into the relationship, such as by understanding the vendor's data storage and destruction practices or executing a separate security agreement applicable to all future engagements.9 Companies can also negotiate contractual remedies with vendors in case of breaches, such as refunds, audits, or termination rights. Companies may also account for these risks downstream when negotiating terms with their own clients or customers by limiting their liability or drafting clear, concise data management and processing policies and agreements.
  2. Integration. Some commentators worry about the possibility of AI supplanting human jobs. Less discussed, however, is the possibility that AI becomes so integral to how jobs are performed that employees are unable or unwilling to work without it, such as when AI automates vital but repetitive, time-consuming tasks.10 In that scenario, just as artists are now putting pressure on TikTok to resolve the UMG contract dispute, employees will demand that management preserve and expand access to enterprise AI tools. This has a wide range of implications for vendor management. One possibility is that AI vendors may be eager to offer low-cost or no-cost trials in 2024 in an effort to showcase their platforms and integrate them closely into company workstreams. Once AI tools become integral to job functions, employers may lose bargaining power with vendors and suffer a commensurate loss of contract negotiation leverage (i.e., employers face the "we have to have this tool" dilemma). Employers should implement AI trials and beta tests with caution and ensure they preserve negotiation flexibility if the relationship matures into an enterprise deployment.
  3. Use Cases. Development of internal and operational AI guardrails will continue to be a key issue in 2024. How companies use, deploy, manage, and extract value from AI tools should be appropriately documented in company policies and procedures to ensure consistent and unbiased use. AI guardrails will also help ensure employees do not inadvertently exceed the licensed scope of use or cause operational harm through inaccurate results. We have previously warned about the threat of overreliance facing an organization that has deployed AI tools across various verticals; TikTok's lack of consistent AI management on its platform illustrates the perils a company may face by failing to take a rigorous approach to AI deployment. Relatedly, companies should have internal backup plans in case AI tools become unavailable, to mitigate the risks of additional expenses or business interruptions.
  4. Input and Output. Companies will increasingly need to be aware of copyright and licensing issues as they engage marketing firms, independent copywriters, musicians, and other content creators to promote their products and services. Companies face substantial risk in using work developed by creators who relied on (or derived it from) AI outputs containing unlicensed content, whether that content is embedded in the results or was used to algorithmically train the AI tool. Companies need to be clear about who owns the content that is distributed and should establish a clear chain of title through contractual and verification measures. This can be accomplished, in part, by ensuring vendor contracts contain appropriate representations, warranties, and indemnities related to intellectual property ownership (both for the AI-generated content and for the AI platform that developed it). Similarly, companies need to draw contractual boundaries with vendors on how the AI platform may (or may not) use inputs or outputs for training purposes or for the benefit of other customers. Likewise, expect self-regulatory services to arise in 2024 that offer companies badges or certifications for using only "artist-approved" or "IP conflict-free" AI tools. It's a true wild west.

In the case of TikTok, it is easy to understand why the platform uses AI and encourages users to incorporate it into their content: it boosts user engagement and adds features (and it is just as easy to see why UMG is pushing back). Companies in other verticals will certainly have similar interests in gaining efficiency and engagement. As we discussed in our "AI for GCs: What You Need to Know for 2024" article, one of the best and simplest ways to objectively determine which AI tools a company should (or should not) adopt is to use a practical framework. The considerations outlined above help drive the choices of decision-makers, but the analysis does not end there. On the contrary, decision-makers must weigh the benefits and risks of adopting AI while juggling internal and external pressures and obligations.

Footnotes

1. Community Guidelines – Integrity and Authenticity, TikTok (Mar. 2023), https://www.tiktok.com/community-guidelines/en/integrity-authenticity/#3. TikTok also announced that it is testing ways to label this content automatically. Id.

2. Id.

3. TikTok Statement in Response to Universal Music Group, TikTok: Newsroom (Jan. 30, 2024), https://newsroom.tiktok.com/en-us/tiktok-statement-in-response-to-universal-music-group.

4. Natalie Rose (@nnapples), TikTok (Feb. 2, 2024), https://www.tiktok.com/@nnapples/video/7330747311800323371?_r=1&_t=8jY9BtWGPn9&social_sharing=1.

5. Elias Leight & Kristin Robinson, 5 Ways AI Has Already Changed the Music Industry, Billboard: Tech (Aug. 4, 2023), https://www.billboard.com/lists/ways-ai-has-changed-music-industry-artificial-intelligence/getting-stems.

6. Bill Donahue, Fake Drake & The Weeknd Song — Made With AI — Pulled From Streaming After Going Viral, Billboard: R&B/Hip-Hop (Apr. 17, 2023), https://www.billboard.com/pro/fake-ai-drake-the-weeknd-song-pulled-streaming.

7. See Christopher Kuo, Vocal TikTok Users Navigate a Quieter App, N.Y. Times (Feb. 3, 2024), https://www.nytimes.com/2024/02/03/arts/music/tiktok-umg-songs-music.html.

8. According to the release, C3 will be able to provide its large language model to businesses without concerns over data privacy breaches, hallucinations, and IP liability exposure. Mike Sak, C3.AI Releases Enterprise Generative AI Suite – This Week in AI, MLQ.ai (Sept. 8, 2023), https://www.mlq.ai/c3-ai-releases-generative-ai-entreprise-twai.

9. Kathryn Allen & Kelsey Brandes, Mitigating Your Greater Data Privacy Risk: How to Establish an Effective Vendor Management Process, 9 Pratt's Priv. & Cybersecurity L. Rep. 186, 186–87 (2023).

10. Kai-Fu Lee, AI's Real Impact? Freeing Us From the Tyranny of Repetitive Tasks, Wired (Dec. 12, 2019), https://www.wired.co.uk/article/artificial-intelligence-repetitive-tasks.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.