The rapid development of technologies in the entertainment industry has made the editing of photos, video frames and images widespread. It is often impossible to tell whether what is published online is real or the result of processing by artificial intelligence.

In December 2022, the Cyberspace Administration of China (CAC), together with the Ministry of Industry and Information Technology (MIIT) and the Ministry of Public Security (MPS), published the "Management Provision on Internet Information Services Deep Synthesis" (the "Provision"). The Provision has drawn considerable public interest, as it directly strengthens the regulation of internet deepfakes with the aim of creating a safer internet environment.

Whether you admit it or not, deepfakes have changed the experience of internet users, and without proper restriction they can become harmful. All entities involved in the process, including technical supporters, service providers and users, should pay attention to this Provision to help keep the internet environment safe.

1. What are deepfakes?

It may be an overlooked phenomenon, but the internet is filled with deepfakes.

The Provision classifies deepfakes into five types, namely text, images, videos, audio and virtual reality (VR), each of which can take various forms.

Deep synthesis of text takes the form of chatbots like ChatGPT, which can edit and produce text much as a person would. Image and video deepfakes are by far the most common, using apps or simple technology to create or alter human faces at will.

When it comes to audio, voice imitators and generators are the most common tools for deep synthesis. Virtual reality, including the metaverse, is also considered a deep synthesis technology.

Using tools such as those mentioned above, one can fake virtually anything, which is why the products generated by such technology are referred to as "deepfakes".

2. How can deepfakes affect our legal interests?

The infringements caused by deepfakes can go beyond our imagination.

While initially created for entertainment, deepfakes are being used more and more to deceive and scam people. Audio deepfakes have been used to mimic voices in order to scam companies out of thousands of dollars by falsely authorizing payments.

Identity fraud is also becoming an increasingly popular use of deepfake technology; however, by far the most common use is to take images of women and insert them into explicit content. Because of the increasing use and quality of deepfake technology, it is important to remain aware of deepfakes while browsing.

3. Who is responsible for compliance?

Most people know that service providers are responsible for compliance, but the Provision also imposes responsibilities on technical supporters. While they are not bound by the same management responsibilities as service providers, technical supporters bear other fundamental compliance obligations such as data compliance and safety evaluations.

4. Deep Synthesis and PIPL

Certain deep synthesis technologies need to use people's facial and vocal features in order to fake their faces or voices, and such features fall within the scope of sensitive personal information. The Provision therefore requires the relevant platforms to obtain users' "separate consent" before collecting their facial and voice data, in accordance with the Personal Information Protection Law of the People's Republic of China (PIPL).

5. What to do for compliance?

Conduct safety evaluation

Before making deep synthesis technology available online, technical supporters, service providers and app stores are required to conduct a safety evaluation of the technology. Unlike the safety evaluation stipulated in the PIPL, this evaluation focuses mainly on the safety and compliance of the algorithmic mechanism, especially for technologies involving biometric information, national security or the public interest.

Update service terms

When deep synthesis platforms provide services to the public, the Provision requires them to provide users with comprehensive management rules, platform conventions and service terms in order to keep the environment healthy. For example, the service terms should clearly notify users that they also bear information security obligations.

Mark deep synthesis content

Identifying deep synthesis content is a prerequisite for users to protect their rights. To help users better identify such content, the Provision requires service providers to mark the most realistic deepfakes with prominent labels; for less realistic content, service providers may also prompt users to voluntarily mark such content when they encounter it.

Looking to the near future after the Provision comes into effect, we can expect that marked deepfakes will be difficult to hide among the enormous volume of internet content, that individual and public interests will be better protected, and that ultimately deep synthesis technology may be prevented from causing harm.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.