Introduction

Zomato recently launched a hyper-localised advertisement showing actor Hrithik Roshan craving dishes from popular restaurants in different cities. The advertisement uses AI-generated deepfakes to customise what the actor says based on the user's GPS location.1 Deepfakes have been a feature of the advertising and entertainment industries for years. However, the line separating genuine AI-generated content from harmful, manipulative deepfake imagery has blurred.

A 2021 Cadbury advertisement starring Shah Rukh Khan was aimed at helping local stores across India during the pandemic. It allowed anyone to create an advertisement for their local store, using the actor's face and voice to promote their brand at no cost.2

Webinar on Deepfakes – Associated Risks, Legal Implications and Policy Response organized by S.S. Rana & Co.

Given the alarming concerns surrounding the widespread menace of deepfakes, S.S. Rana & Co. organized a webinar bringing together a diverse panel to analyse and demystify the phenomenon of deepfakes and to discuss legal and social mitigation measures.

The speakers addressed the social, economic, ethical and psychological issues and concerns stemming from deepfakes. Prof. Mini Srivastava deliberated on the existing and potential risks that deepfakes pose to individuals, society and the nation, along with the ethical concerns that have emerged over time. Prof. Triveni Singh, an IPS Officer, Cyber Crime, Uttar Pradesh Police, elucidated the transformation of criminal activities with the advancement of technology, drawing on real-life instances. Mr. Nitin Wali spoke about the need to develop guidelines and a framework on Artificial Intelligence as a whole to combat its ill effects, emphasizing community engagement and the assignment of specific content-moderation responsibilities to various stakeholders. Mr. Jayant Sundaresan delved into the psychological impact of deepfakes on victims and society, offering victims a word of hope: 'You are not alone'. Mr. Vikrant Rana spoke about legal measures for the detection, prevention and forensic analysis of deepfakes. Mr. Rana further pointed out the rise in patent filings, across the globe, for inventions that both create and tackle the menace of deepfakes.

[To know more about the Webinar on Deepfakes, click on the link: https://www.linkedin.com/posts/s-s-rana-%26-co-deepfakes-ssrana-webinar-activity-7138752629748252673-kI6Z ]

Consent and Ethics – A line visibly becoming invisible!

"Believe nothing you hear, and only one half that you see." – Edgar Allan Poe

The saying completely befits the ongoing situation amidst the controversial spread of deepfakes. Ever since their inception, deepfakes have been associated with practices that are manipulative and deceptive at their very core. The more advanced the technology gets, the simpler it becomes to create deepfake images and videos.

The issue of deepfakes, and of generative artificial intelligence more broadly, needs to be examined through the lens of ethics, a less explored dimension of technology.

The menace of deepfakes is now reaching the general population. In a recent news report, a 22-year-old woman employed at a BPO firm found about 13,000 nude photographs of various women, including herself and other colleagues, in the phone gallery of a colleague with whom she had been in a relationship.3

[To read more on this, please refer to our article: https://ssrana.in/articles/nobody-is-safe-deepfake/ ]

Voice cloning

An audio deepfake, or simply a voice clone, is synthetic audio created using generative AI models trained on sample audio of a person. These tools can reportedly mimic an individual's real voice with up to 95% accuracy across 29 languages and more than 50 accents. Very recently, the former Prime Minister of Pakistan, Mr. Imran Khan, who is presently in jail, addressed a virtual rally on the night of December 17, 2023 through a voice clone. In this first-of-its-kind use of an AI voice clone, his party used the voice cloning platform ElevenLabs, feeding the AI model audio of his previous speeches.4

Weaponisation of Deepfakes

Deepfakes go beyond creating false narratives: they infringe individuals' right to privacy for sadistic pleasure, for politically motivated ends and for revenge. Extended exposure to deepfakes can result in the suppression of information and a general breakdown of confidence and trust in public authorities.

In May 2018, Belgium's Socialistische Partij Anders became the first political party to use deepfake technology to influence public debate. The party posted a video to Facebook purportedly showing US President Trump encouraging Belgium to withdraw from the Paris Agreement on climate change. Although the video included a disclaimer stating that it was fake, it still had to be debunked by online communities and news sites.5

The abuse of synthetic media also raises serious national security concerns. This aspect is addressed less frequently, yet the harm it entails is beyond comprehension. The US National Security Agency and other US federal agencies have issued an advisory on the synthetic media threat known as deepfakes.6

Political propaganda

In Telangana, during the elections, hundreds of thousands of voters received a forwarded video of a minister appealing to them to vote against the incumbent state government. In another instance, videos emerged from the popular show Kaun Banega Crorepati in which the host, Mr. Amitabh Bachchan, was seen asking questions about Madhya Pradesh politics, whipping up anti-incumbency sentiment among viewers.7 Both videos were the result of deepfake technology, depicting events that never took place. This unethical use of deepfakes is a serious threat to democracy, with major political parties resorting to deepfakes to further their own political propaganda.

The spread of deceptive AI-generated content online poses a threat to democracies across the world. Some countries have introduced regulations on AI, while others are in the process of legislating on the subject. To know more about the realm of synthetic media, read our article: https://ssrana.in/articles/deepfake-technology-navigating-realm-synthetic-media/

Use of deepfake technology posthumously

Every individual has a right to control the commercial use of their likeness. In a few states in the United States, this right to likeness extends beyond death. For public personalities, however, the question arises: who owns their face and voice posthumously? Synthetic voice and video assistants can efficiently create realistic fakes capable of deceiving people for monetary and commercial gain. The use of deepfake audios and videos of public personalities for scams and frauds is unethical.

Gender Implications

Deepfake pornography targeting women deepens gender inequality and reduces women to mere sexual objects, causing emotional distress, reputational harm, abuse and, in some cases, collateral and financial loss. The ethical conundrum is that deepfake pornography, whether consensual or non-consensual, normalizes synthetic pornography, which in turn has been used for revenge pornography.

Is deepfake technology value neutral?

Technology in itself is neither liberating nor constraining; the results it produces and the moral values it embodies vary with use. Just like a knife, perspective matters. One may argue that "guns don't kill, people kill", yet easy access to unregulated weapons is a factor behind spikes in crime. Likewise, the values embedded in a technology are a matter of subjective experience; here, however, its harmful effects nullify the attached benefits.

The benefits of deepfakes, chiefly in entertainment, are dwarfed by their potential and diversified harms. What appears to be entertainment is ultimately deceit: the goal is to create fantasies that cannot be distinguished from reality, thereby building an illusion for one's own sadistic, political or commercial benefit.

Corporate Digital Responsibility

The existing and potential dangers of deepfakes stem from inadequately understood AI and a lack of scrutiny of the digital responsibility of organisations and authorities. At present there are few to no safeguards against the potential misuse of the technology. However, nations have stepped up to build a robust legal ecosystem to meet the impending requirements for dealing with the menace of deepfakes. Additionally, organisations need to be more mindful of data and algorithmic decision-making when creating digital tools, and must set norms guiding their operations.

This is where corporate digital responsibility comes in. By implementing stricter verification processes, such as multi-factor authentication, and investing in tools to identify deepfakes, organisations can minimise the risks to legitimate users.

The impending guidelines on AI and stricter watermarking requirements can certainly add to the authenticity of the content in circulation. There is an urgent need to develop ways to detect and combat deepfakes, including stringent laws and regulations to deal with cybercrimes and to protect the rights and interests of victims.

No matter how developed the existing Information Technology laws are, the principal legislation is still 23 years old and needs to be updated and supported by other strong rules and regulations.

To read more: https://www.barandbench.com/law-firms/view-point/the-digital-personal-data-protection-act-2023-a-scenario-of-arising-liabilities-2

https://www.livelaw.in/law-firms/law-firm-articles-/deepfakes-personal-data-artificial-intelligence-machine-learning-ministry-of-electronics-and-information-technology-information-technology-act-242916

https://ssrana.in/articles/deepfakes-financial-fraud/

https://ssrana.in/articles/deepfake-crackdown-motion-india/

https://ssrana.in/articles/nobody-is-safe-deepfake/

Footnotes

1 https://www.outlookindia.com/national/how-advertisements-are-using-deepfake-is-there-a-cause-for-concern--news-333087

2 https://www.outlookindia.com/national/how-advertisements-are-using-deepfake-is-there-a-cause-for-concern--news-333087

3 https://timesofindia.indiatimes.com/city/bengaluru/woman-finds-13000-nude-photos-of-herself-other-women-on-bfs-phone/articleshow/105574022.cms

4 https://economictimes.indiatimes.com/tech/technology/jailhouse-rock-how-imran-khan-used-ai-voice-clone-from-jail/articleshow/106134571.cms

5 https://www.aspi.org.au/report/weaponised-deep-fakes

6 https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-ReleaseView/Article/3523329/nsa-us-federal-agencies-advise-on-deepfake-threats/

7 https://business.outlookindia.com/technology/deepfake-elections-how-indian-politicians-are-using-ai-manipulated-media-to-malign-opponents


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.