PRIVACY IN THE DEEPFAKE WORLD: IMPACT AND REGULATION

This blog is authored by Srishti Nair, a 5th-year student of Symbiosis Law School, Noida.


INTRODUCTION

The global health pandemic firmly established 2021 as the year of the internet. From flaming pyres to plasma donation drives, everything in 2021 went viral. It was only through the reach and influence of digital platforms that we were able to create viral posts, debunk fake health scares, hold the state accountable and attempt to make sense of the situation. While this is a sign of the progressing times, it is also a mayday call about the challenges that lie ahead, since it highlights how 21st-century technology can alter images, videos and voices to make people believe that what they see or hear is genuine.

It’s safe to say that most of us are guilty of using, or at least being intrigued by, this manipulative technology. Some of us used it to enhance our social media personas, while others developed interactive educational tools or obtained affordable VFX. Nonetheless, there is one side of this coin that we all must be aware of: its potential to disrupt societal peace.

Imagine this: a deepfake audio recording of you making rude remarks about your female employees finds its way onto the internet a day before your company goes public. What do you reckon will happen? A loss of market reputation? Online backlash? A loss of social standing? It will be all of that and more. Not only will there be no way to take it back from the digital world, but the damage caused to one’s social, mental and economic well-being would be a perfect recipe for catastrophe.

WHAT ARE DEEPFAKES?

What first came into the limelight as a trend in pornographic videos is today increasingly being used to create audio deepfakes, lip-syncing videos and even facial re-enactments. This leads us to the big question: what is a deepfake, and how exactly is one created?

Deepfakes are doctored audio-visual media created using deep learning. Deep learning consists of repeatedly processing data until the software becomes adept at translating inputs into desired outputs: a picture of you and Jackie Chan is processed again and again until the software can splice you into a scene from Drunken Master with no distinguishable differences.

This method of machine learning commonly makes use of the generative adversarial network (GAN) technique. Here, two networks pass data back and forth: a generator produces fake media, while a discriminator tries to tell it apart from genuine samples, and the exchange continues until the differences between the two are barely noticeable. The data itself is mined from sources such as social media accounts and dating profiles, which the algorithm then combines with the relevant patterns it has learned to recreate. Thus, anyone with internet access can use your Instagram content to graft your face onto an activity that never happened, just like the viral Tom Cruise deepfake account on TikTok.
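
For readers curious about the mechanics, the following is a minimal sketch of the adversarial loop described above, assuming PyTorch. The toy fully-connected networks and random "real" data are illustrative stand-ins; actual deepfake systems use far larger convolutional models trained on real face footage.

```python
# A minimal GAN training loop (illustrative only; assumes PyTorch).
# Toy fully-connected networks and random "real" data stand in for
# the large convolutional models and face datasets real deepfakes use.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # noise size, flattened "image" size

# Generator: turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA_DIM)   # placeholder "genuine" data
    fake = generator(torch.randn(32, LATENT_DIM))

    # Step 1: train the discriminator to separate real from fake.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Step 2: train the generator to fool the updated discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two losses pull in opposite directions: the discriminator learns to spot fakes while the generator learns to evade it, which is precisely how the differences become "barely noticeable" over time.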

DEEPFAKE’S TRYST WITH PRIVACY

With digitization becoming the new normal, it is increasingly important to acknowledge the existential crisis that privacy is undergoing. As an extension of the concept of freedom, privacy reflects every individual’s right to be left alone. It is an inalienable natural right, without which an individual cannot enjoy a dignified life. However, the democratization of data access, combined with high-end applications such as face swap, has made spreading misinformation extremely easy and threatens individual privacy.

This menace of the non-consensual use of personal data was held to be wrongful as early as 1984, when the unpermitted use of pictures of a mother-daughter duo for advertising purposes was deemed a violation of their privacy. The same principle applies to the deepfake space, since the majority of deepfakes place people in situations that never happened, using images they never consented to being used for that purpose. These videos range from subjects hurling racial abuse or harassing people to appearing in pornographic content. Such content, when shared, can adversely affect a person’s social, mental and physical well-being, as seen in the case of journalist Rana Ayyub, who was harassed after a deepfake pornographic video bearing her contact information was circulated on the internet.

As a right intrinsic to human existence, the right to privacy includes the concept of sexual privacy, which Thornburgh v. American College of Obstetricians and Gynecologists grounded in the moral fact that individuals belong to themselves, and not to anyone else or to society as a whole. In other words, each individual has the right to make decisions concerning access to, and information about, their body, sex, sexuality, gender and intimate activities.

With this definition in mind, it is only fair to conclude that deepfakes that use data from social media platforms, or information shared privately, rob people of their right to exercise agency over their private lives. In addition to utilizing data without consent, they depict individuals in bogus situations, taking away their right to control how their data is used. Furthermore, deepfakes damage an individual’s reputation when rational members of society believe them to be true and find the depicted conduct humiliating or contemptible.

DEEPFAKE V. PERSONAL DATA PROTECTION BILL

There is no denying the benefits deepfake technology has brought us; however, it is its misuse that has raised eyebrows due to its legal and ethical implications. While laws relating to defamation, copyright infringement, impersonation and cyberstalking are likely to remedy the immediate damage, they do not compensate for the long-lasting effects. A possible remedy for this situation in India could be the Personal Data Protection Bill, 2019.

Since the said law focuses on the protection of personal data, the regulation of deepfakes turns on whether the data involved is personal. Section 3(28) of the Bill defines personal data as information relating to an identifiable individual. Even though the deepfake content itself is fabricated, the use of an individual’s face and/or voice would still amount to using "identifiable information." Additionally, creating such content involves collecting, storing and altering data, which qualifies as "processing" under the Bill. Section 11 and Section 5 of the Bill require, respectively, that personal data be processed only with the consent of the data principal and only for specific, clear and lawful purposes.

However, this principle is not followed in the deepfake space, where the great bulk of the data is processed without the knowledge or consent of the targeted individual. Moreover, the Bill, under Section 11(3), requires explicit consent for the processing of sensitive personal data, a requirement flagrantly violated by deepfakes that use sexually explicit material. Additionally, the unauthorized use of personal data is unlikely to be necessary for responding to a medical emergency, complying with the law, or any other legitimate ground for non-consensual processing provided under Chapter III. So, not only do deepfakes process data without consent, they also violate the principle of purpose limitation.

When such a violation occurs, the Bill outlines remedies for aggrieved individuals, namely the right to be forgotten and the right to correction and erasure. Under the right to be forgotten, the data principal can restrict or prevent the continuing disclosure of their personal data if the information is no longer necessary, if they withdraw consent, or if the disclosure was made in violation of any law in force. Under the right to correction and erasure, on the other hand, the data principal can request the correction of misleading or inaccurate personal data and even seek the erasure of data that is no longer relevant to the purpose for which it was collected.

WAY FORWARD

The fact that emerging data protection and privacy laws are attempting to address complex technologies like machine learning is encouraging. Nevertheless, there is still a lack of clear legal provisions addressing the complexities of such dynamic technologies. We must recognize that the age-old saying "seeing is believing" has lost its meaning, making it increasingly difficult to distinguish truth from fabrication. Technology will only grow more sophisticated, making this cat-and-mouse game ever harder to keep up with. Given this, we must act quickly and cooperatively to prepare ourselves for this new environment.

It is widely agreed that it is impossible to eliminate the creation of such material altogether; we should therefore focus on identifying it and preventing its circulation. Online intermediaries, as some of the most important stakeholders, can be mandated to expand their fact-checking programmes to images and videos, as Facebook has done. Additionally, we must work towards increasing media literacy and take initiatives to teach consumers how to identify and counteract related misinformation. One building block for such screening is sketched below.
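
To make this concrete, the following is a hypothetical sketch of one widely used technique for curbing circulation: perceptual hashing, which lets a platform check new uploads against a blocklist of images already confirmed as deepfakes by fact-checkers. It assumes the Python Pillow and imagehash libraries; the hash value and file name are made up for illustration, and real intermediary pipelines combine such matching with ML-based detectors and human review.

```python
# Illustrative sketch: flagging re-uploads of known deepfake images
# via perceptual hashing (assumes the Pillow and imagehash packages;
# the hash value and file name below are made up for illustration).
from PIL import Image
import imagehash

# Hypothetical blocklist: perceptual hashes of images already
# confirmed as deepfakes by fact-checkers.
KNOWN_DEEPFAKE_HASHES = [
    imagehash.hex_to_hash("d1c48f0a3b2e9c57"),
]

def is_known_deepfake(path: str, max_distance: int = 6) -> bool:
    """Return True if an upload is visually close to a known deepfake."""
    upload_hash = imagehash.phash(Image.open(path))
    # Hashes of near-duplicates differ only in a few bits, so we
    # compare Hamming distance rather than requiring an exact match.
    return any(upload_hash - known <= max_distance
               for known in KNOWN_DEEPFAKE_HASHES)

# Example usage (with a hypothetical file):
#   if is_known_deepfake("incoming_upload.jpg"):
#       print("Upload held for review: matches a known deepfake.")
```

Because perceptual hashes change little under resizing or re-compression, a once-debunked fake can still be caught on re-upload even after minor edits.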

Considering the fast-paced growth of such technologies, we must accept that real-time prevention and regulation of such violations through legislation alone is unlikely to be feasible. We must therefore equip our judicial system to understand these developing technologies, and to apply existing domestic legislation and global precedent to the emerging privacy disputes, so that we can hold these techno-offenders accountable for their acts.
