AI-Deep Synthesis, also known as deepfake, is a technology that can generate or manipulate image, audio, or video content that appears realistic but is not authentic. It can be used for various purposes, such as entertainment, education, art, journalism, or political satire. However, it can also pose serious risks to individuals and society, such as identity theft, fraud, defamation, misinformation, or cyberbullying.

In this article, we will discuss recent face swap fraud cases in China that have raised public awareness and concern about the potential harms of deepfake technology. We will also examine China's current regulations on AI and deep synthesis and the legal challenges they face in addressing this emerging issue. Finally, we will suggest possible solutions for platforms and apps that enable deepfake functions to prevent or mitigate the negative impacts of this technology.

Recent Case

In April 2023, Chinese influencer "CaroLailai" discovered that her face had been swapped into a pornographic video that went viral. She reported the illegal industry chain behind the deepfake technology to the police. The case reminded many people of ZAO, a deepfake app that sparked major privacy concerns in China; the Ministry of Industry and Information Technology (MIIT) ordered its removal from app stores in 2019.

Similar cases continue to occur, some of which involve criminals using deepfake technology to scam victims into transferring money or disclosing personal information.

In May 2023, Mr. Guo, a legal representative of a technology company in Fuzhou, was fooled into transferring 4.3 million yuan (about $612,000) after having a video chat with someone pretending to be his friend through AI-powered face-swapping technology.

China's Regulations

China is one of the first countries to adopt regulations specifically on AI and deep synthesis technology. Below are recently issued regulations relating to AI and deep synthesis technology. Please refer to Part 3 for more details on these regulations:

  • On December 31, 2021, the Cyberspace Administration of China (CAC), the MIIT, and the Ministry of Public Security (MPS) jointly issued the Administrative Provisions on Algorithm Recommendation for Internet Information Services, which came into effect on March 1, 2022.
  • On November 25, 2022, the CAC, the MIIT, and the MPS jointly issued the Administrative Provisions on Deep Synthesis of Internet Information Services, which came into effect on January 10, 2023.
  • On July 10, 2023, the CAC, the National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the MIIT, the MPS and the State Administration of Radio and Television jointly issued the Interim Administrative Measures for Generative Artificial Intelligence Services, which will come into effect on August 15, 2023.

In addition to the specific regulations on AI and deep synthesis, various other regulations address concerns relating to cybersecurity, data security, personal information protection, intellectual property, and other aspects. These include the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, the Science and Technology Progress Law, the Copyright Law, the Civil Code, and other relevant laws.

Specifically, the following requirements under China's Criminal Law and Civil Code are considered relevant to the recent face swap cases:

  • The Criminal Law stipulates that individuals who fabricate false information and disseminate it through information networks or other media platforms with the intention of disturbing public order or damaging others' reputations can face penalties of up to three years' imprisonment or fines.
  • The Civil Code protects personal rights and interests, including privacy, portrait rights, and reputation. Individuals who infringe these rights and interests by using deepfake technology can be held liable for civil damages.

Platforms and Apps Responsibilities

China's regulations on AI and deep synthesis technology are relatively new and still face challenges in implementation and enforcement.

One of the main challenges is how to balance the innovation and development of AI technology against the protection of individual and social interests. Regulators and authorities worldwide are grappling with the same issues and with striking the right balance in forthcoming regulation. On the one hand, the current regulations on AI and deep synthesis impose extensive obligations, primarily on platforms and apps, including but not limited to the following:

  • The Administrative Provisions on Algorithm Recommendation for Internet Information Services require algorithm recommendation service providers that use "algorithmic technology of generation and synthesis" to provide information to inform users of the algorithm recommendation services they provide and to publicize the basic principles, purposes, and main operating mechanisms of the algorithm recommendation.
  • The Administrative Provisions on Deep Synthesis of Internet Information Services require deep synthesis service providers to take responsibility for information security and to establish rules and systems covering user registration, review of algorithm mechanisms, review of scientific and technological ethics, review of information release, data security, personal information protection, anti-telecommunications fraud, and emergency response. Providers must also add marks (labels) to deep synthesis content in a manner that does not affect users' use and keep the relevant records in accordance with applicable regulations. Deep synthesis service providers developing or launching new products, applications, or functions with public opinion attributes or social mobilization capabilities must also undergo security assessments in accordance with applicable regulations. On June 20, 2023, the CAC released a list of domestic deep synthesis service providers with public opinion attributes or social mobilization capabilities that have completed the security assessment, including Meituan's online intelligent customer service algorithm, Kuaishou's short video generation and synthesis algorithm, and Baidu's PLATO large model algorithm, among others.
  • The Interim Administrative Measures for Generative Artificial Intelligence Services, effective from August 15, 2023, aim to balance the development and use of AI with regulation and user protection. The measures apply only to generative AI services available to the public, excluding internal research and use by specific organizations. Key obligations for service providers include signing service agreements with users, conducting security assessments for AI services with public opinion attributes or social mobilization capabilities, and protecting personal information. Service providers are also responsible for monitoring and taking down unlawful content generated by their services. The measures may have cross-border effect, allowing Chinese regulators to impose technical measures or sanctions on offshore providers that violate them. Foreign investment in generative AI services must comply with applicable laws and regulations, with further guidance on foreign investment in the AI sector expected.

On the other hand, the extent of platform liability in civil cases remains uncertain, posing a challenge in ensuring the compliance and accountability of platforms and apps that enable deepfake functions. The civil case mentioned above highlights that apps offering direct face swapping capabilities, including template creation for users, can be held responsible for the improper use of deepfake technology. However, it remains unclear whether platforms and apps can be jointly liable with users for the misuse of these technologies, even if they have taken all necessary legal measures and promptly removed infringing content upon receiving infringement notifications from rights holders.

Discussion on Potential Solutions

To address these challenges and ensure the responsible and ethical use of deepfake technology, platforms and apps that enable such functions should be aware of the potential liabilities, take additional measures, and complete the relevant assessments and/or filings, if applicable, under the relevant regulations. Some possible solutions are:

  • Adopting clear and transparent policies and terms of service that inform users of the risks and consequences of using deepfake functions and obtain their explicit and informed consent before processing their personal data or content.
  • Implementing technical standards and methods to mark or label the deepfake content as such and to trace its origin and authenticity.
  • Establishing effective mechanisms for users to report, flag, or delete the deepfake content that violates their rights or interests or that is illegal or harmful.
  • Cooperating with the authorities and other stakeholders to prevent, detect, and combat deepfake-related crimes and to provide evidence or assistance when needed.
  • Educating and raising awareness among users and the public about the nature and impact of deepfake technology and how to identify and verify deepfake content.
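The marking and traceability measures above could, in a minimal form, take the shape of a tamper-evident provenance record attached to each piece of generated content. The sketch below is a simplified illustration only, not an implementation of any regulatory standard; the field names and the demo key are hypothetical. It records that content is AI-generated, hashes the file so it can be matched later, and signs the record with an HMAC so the platform can detect forged or altered labels.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; a real deployment would use a managed secret.
PLATFORM_KEY = b"demo-secret-key"


def make_provenance_label(content: bytes, generator: str) -> dict:
    """Build a tamper-evident label for a piece of synthetic content.

    The record flags the content as AI-generated, names the service that
    produced it, and stores a SHA-256 digest for later matching. An HMAC
    over the serialized record makes forgery or alteration detectable.
    """
    record = {
        "aigc": True,                 # explicit "synthetic content" flag
        "generator": generator,       # service / algorithm that produced it
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hmac"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_label(content: bytes, label: dict) -> bool:
    """Check both the HMAC of the label and the content hash it claims."""
    claimed = dict(label)
    mac = claimed.pop("hmac", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(mac, expected)
            and claimed.get("sha256") == hashlib.sha256(content).hexdigest())
```

A label produced this way verifies against the original bytes but fails against modified content, giving a platform a simple way to detect stripped or altered marks. Industry efforts such as the C2PA content provenance standard pursue the same goal with far richer, interoperable metadata.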

Take Douyin as an example: on May 9, 2023, it announced 11 rules regarding AI-generated content (AIGC). These rules state that publishers must clearly label AIGC to help users distinguish between virtual and real content, especially in confusing scenarios. Publishers are also held responsible for the consequences of AIGC, regardless of how it was generated. The rules strictly prohibit the use of generative artificial intelligence technology to create and publish infringing content, including content that violates portrait rights and intellectual property rights. Additionally, virtual humans must be registered on the platform, and users of virtual human technology must undergo real-name authentication. Douyin also published the Watermark and Metadata Specification for AIGC on the same date, which includes a sample of labeled AIGC.

Conclusion

Deepfake technology is a double-edged sword that can bring both benefits and harms to individuals and society. In particular, regulations on AI and deep synthesis technology are still new, and uncertainty remains over whether platforms and apps can shield themselves from liability by taking all necessary legal precautions and promptly removing infringing content upon receiving infringement notifications from rights holders. Given this, platforms and apps that enable deepfake functions should keep an eye on the relevant regulations, which have evolved quickly in China, and are advised to adopt a proactive and responsible approach to managing this technology with professional advice.

To the extent that deepfake functionality is intended to have international application and be accessible in, target, or have an impact in other jurisdictions, such as the EU, platform and app providers should also consider whether existing regulatory requirements in those jurisdictions (for example, the EU GDPR) and any evolving AI-specific legislative frameworks (for example, the proposed EU AI Act) apply, to the extent these have extra-territorial effect.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.