What is a Deepfake?
Deepfakes are synthetic media that use AI to manipulate or generate visual and audio content, typically created with a technique called generative adversarial networks (GANs). Deep learning technology has enabled positive advancements, such as restoring lost voices and recreating historical figures, and has been applied in comedy, cinema, music, and gaming to enhance artistic expression. Synthetic avatars can help people with physical or mental disabilities express themselves online. The technology also enhances medical training and simulation by generating diverse, realistic medical images and by creating virtual patients and scenarios for simulating medical conditions and procedures, improving training efficiency. It can likewise deepen interaction and immersion in augmented reality (AR) and gaming applications. This article, however, focuses on the more common unethical uses of deepfake technology.
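The adversarial idea behind GANs can be illustrated with a toy sketch: a generator learns to mimic "real" data while a discriminator learns to tell real from fake, each improving against the other. This is a minimal, assumption-laden illustration on 1-D numbers (real deepfake models use deep networks on images and audio), not a production system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centred at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b, deliberately starting far from the real data.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), outputs "probability x is real".
w, c = 0.1, 0.0
lr, n = 0.05, 64

for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: adjust (a, b) so the discriminator is fooled.
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, generated samples should cluster near the real mean of 4.
samples = a * rng.normal(0.0, 1.0, 10000) + b
print(round(float(samples.mean()), 2))
```

The same tug-of-war, scaled up to convolutional networks and face datasets, is what lets a generator produce video frames realistic enough to pass casual inspection.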
Back in 2020, in the first-ever use of AI-generated deepfakes in political campaigns, a series of videos of a leading political party member circulated on multiple WhatsApp groups. The videos showed the leader hurling allegations against his political opponent from a rival party in English and Haryanvi ahead of the Delhi elections. In a similar incident, a doctored video of a leading political party chief recently went viral, creating confusion over the future of the State government's most popular scheme.
Last month, a video featuring Bollywood actor Rashmika Mandanna went viral on social media within hours, shocking viewers. The seconds-long clip, which featured Mandanna's likeness, showed a woman entering a lift in a bodysuit. The original video was of a British Indian influencer named Zara Patel and had been manipulated using deepfake technology. Soon after, the actor expressed her dismay on social media, writing, "Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused." The latest instance of misuse came on 14 Dec 2023, when a deepfake video showed Infosys founder Mr. Narayan Murthy discussing a project with Elon Musk. Deepfake videos of Prime Minister Narendra Modi have also circulated on social media.
On 15 Dec 2023, Prime Minister Narendra Modi termed the misuse of artificial intelligence for creating deepfakes problematic, calling it one of the looming issues ahead and asking the media to educate people about the phenomenon. He said that many deepfakes generated with the assistance of artificial intelligence appeared very real, and that the resulting disinformation could cause much harm. "A new crisis is emerging due to deepfakes produced through artificial intelligence. There is a very big section of society which does not have a parallel verification system," he said. He added that just as products like cigarettes come with health warnings, deepfakes should also carry disclosures. The Prime Minister's remarks came in the backdrop of the viral deepfake video of actress Rashmika Mandanna, following which the Information Technology Ministry had issued an advisory. The Ministry warned platforms to take such content down within 36 hours, a requirement outlined in the IT Rules, 2021, and urged that "due diligence be exercised and reasonable efforts made to identify misinformation and deepfakes".
Current rules, regulations and laws regarding the use of deepfakes
India does not have specific laws or regulations that ban or regulate the use of deepfake technology, though it has called for a global framework on the expansion of "ethical" AI tools. Existing laws such as Sections 67 and 67A of the Information Technology Act, 2000, contain provisions that may be applied to certain aspects of deepfakes, such as defamation and the publishing of explicit material. The Digital Personal Data Protection Act provides some protection against the misuse of personal data, and the Information Technology Rules, 2021, mandate the removal of content impersonating others and of artificially morphed images within 36 hours.
Provisions of the Indian Penal Code, 1860 (IPC), notably Sections 509, 499 and 153(a) and (b), can also be invoked for cybercrimes associated with deepfakes. (The IPC is now being replaced: the Bharatiya Nyaya Sanhita and the Bharatiya Nagarik Suraksha Sanhita bills were passed by the Lok Sabha and Rajya Sabha on 21 Dec 2023.) India needs to develop a comprehensive legal framework specifically targeting deepfakes, considering the potential implications for privacy, social stability, national security, and democracy.
At the recent AI Safety Summit 2023, the world's first, 28 major countries, including the US, China, and India, agreed on the need for global action to address AI's potential risks. The Bletchley Park Declaration at the summit acknowledged the risks of intentional misuse and of loss of control over AI technologies. The European Union's Code of Practice on Disinformation requires tech companies to counter deepfakes and fake accounts within six months of signing up to the Code; non-compliant companies can face fines of up to 6% of their annual global turnover. The United States, for its part, introduced the bipartisan Deepfake Task Force Act to assist the Department of Homeland Security in countering deepfake technology. Big tech companies like Meta and Google have announced measures to address deepfake content, although vulnerabilities in their systems still allow its dissemination. Google has introduced tools for identifying synthetic content, including watermarking and metadata: watermarking embeds information directly into content, making it resistant to editing, while metadata provides additional context to original files.
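The watermarking idea can be sketched in miniature. The example below hides a bit pattern in the least significant bits of pixel bytes so a detector can later flag the content as synthetic. This is a hypothetical illustration of the general concept, not Google's actual scheme, and the `WATERMARK` tag is an invented value; real systems use far more robust, edit-resistant embeddings.

```python
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # assumed 8-bit tag meaning "AI-generated"

def embed(pixels, bits=WATERMARK):
    """Overwrite the least significant bit of the first len(bits) bytes."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set it to the tag bit
    return out

def extract(pixels, n=len(WATERMARK)):
    """Read back the LSBs; a match with WATERMARK suggests synthetic content."""
    return [p & 1 for p in pixels[:n]]

image = [200, 13, 77, 54, 120, 9, 33, 250, 64, 128]  # toy grayscale bytes
marked = embed(image)
print(extract(marked) == WATERMARK)  # prints True: the tag survives exact copies
```

Note that such naive LSB marks are destroyed by re-encoding or cropping, which is precisely why the robust watermarking Google describes is a harder research problem than this sketch suggests.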
Proposed reforms for the use of deepfakes
The Minister of Electronics and Information Technology, Ashwini Vaishnaw, emphasized the urgent need to strengthen rules and regulations against the spread of deepfakes, saying the government would unveil a clear and actionable plan to fight them within the next few days. The drafted regulations are based on four pillars:
1. Detection of deepfakes
2. Prevention of deepfakes
3. Building a grievance and reporting mechanism
4. Raising awareness
The central government has shown commendable urgency in tackling the menace of deepfakes. Deepfakes, arguably the most dangerous form of misinformation, pose unprecedented threats not just to democracy and its processes but also to people's rights in online spaces. MeitY has rightly identified 'detection, prevention, reporting and awareness' as the four-pronged approach to curbing deepfakes. Any regulation of deepfakes will necessarily have to (i) discourage dissemination, (ii) incentivize early reporting, (iii) penalize delays in addressing complaints and taking down deepfakes, and (iv) restrict avenues for the creation of deepfakes.
Deepfakes and the upcoming 2024 elections
The central concern of this article is the misuse of deepfakes in the upcoming 2024 parliamentary elections. India has the largest number of young voters in the world and is the world's largest democracy, and almost every Indian political party now makes heavy use of social media platforms. Social media is already crowded with deepfake content targeting Indian politicians and political parties with the aim of swaying voter sentiment. According to this year's State of Deepfakes report, India is the sixth most vulnerable country to deepfakes. In the recent Madhya Pradesh elections, two doctored video clips of superstar Amitabh Bachchan on 'Kaun Banega Crorepati' circulated, one calling the current chief minister a liar and the other portraying the rival party's CM in a positive light. India will be no different from elsewhere: media consumption on social media contributes significantly to opinion shaping, and generative AI will have a major impact on the upcoming Indian elections. Deepfakes have opened a new front in the election battle between rival parties; one leading political party has already filed complaints with the Election Commission and the police regarding two deepfake videos related to the state.
Microsoft has announced measures for assigning digital watermarks to media content, which let users know how, when and by whom the content was created or edited, and whether it was generated by AI. It will also assist campaigns in dealing with cybersecurity threats and provide measures for election watchdogs. Meta, the company that owns Facebook and Instagram, has said that it will require political ads to disclose whether they used AI; other unpaid posts on its sites that use AI, even if related to politics, will not be required to carry any disclosures. Election propaganda in India has evolved beyond door-to-door campaigns and wall posters to AI-generated fake videos. This technology allows a handful of people sitting in their Delhi NCR offices to deploy deepfake videos that can sway voter sentiment in a poll-bound constituency hundreds of miles away.
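The "how, when and by whom" provenance record Microsoft describes can be sketched as signed metadata bound to the media bytes, so any tampering is detectable. This is a hypothetical illustration in the spirit of such content credentials, not Microsoft's actual system: the field names are invented, and a real deployment would use public-key certificates rather than the shared HMAC key assumed here.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # assumption: real systems use PKI, not a shared secret

def attach_provenance(media: bytes, creator: str, tool: str, created: str) -> dict:
    """Build a who/when/how record and bind it to the media with an HMAC."""
    record = {
        "creator": creator,
        "tool": tool,
        "created": created,
        "media_sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media: bytes, record: dict) -> bool:
    """Check both the signature and that the media bytes are unchanged."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        record["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    untouched = claimed["media_sha256"] == hashlib.sha256(media).hexdigest()
    return good_sig and untouched

video = b"...raw media bytes..."
meta = attach_provenance(video, "Newsroom X", "CameraApp 2.1", "2023-12-15")
print(verify_provenance(video, meta))         # prints True
print(verify_provenance(video + b"!", meta))  # prints False: edited media fails
```

The point of the design is that a forger cannot silently swap in a deepfake: either the media hash no longer matches, or the signature over the altered record fails to verify.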
All you can do in this digital era, and especially in the run-up to the elections, is verify whether what you watch or hear on social media or any other electronic medium is true, and refuse to fall for unethical propaganda.