
UPSC Editorial Analysis: The Deepfake Dilemma: India’s Regulatory Path to Digital Trust

Kartavya Desk Staff

*General Studies-3; Topic: Challenges to internal security through communication networks, role of media and social networking sites in internal security challenges, basics of cyber security; money-laundering and its prevention*

Introduction

• In October 2025, India released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. This was a landmark moment, representing India’s first formal attempt to regulate “synthetically generated information.”

• The move was prompted by high-profile deepfake incidents involving public figures and the rising threat of AI-driven misinformation during democratic processes.

About The Deepfake Dilemma: India’s Regulatory Path to Digital Trust

• India is crafting regulations to combat deepfakes, balancing AI innovation with legal safeguards. This approach aims to curb misinformation and protect citizen rights, establishing a secure foundation for digital trust.

The Current Regulatory Mechanism (MeitY’s Approach)

The Ministry of Electronics and Information Technology (MeitY) has placed the primary responsibility on intermediaries (social media platforms).

Identification: Platforms must use technical tools to detect synthetic media.

Labeling: Any content generated or modified by AI must be clearly labeled to inform the user.

Takedown Timelines: Intermediaries are required to remove flagged non-consensual intimate imagery or misinformation within specified hours of being flagged.

Grievance Redressal: Platforms must provide a mechanism through which users can report deepfakes and receive timely resolution.

Judicial Proactiveness: Filling the Vacuum

In the absence of a specific “AI Law,” Indian courts have stepped in using existing legal principles:

Personality Rights: The Delhi High Court, in cases such as Anil Kapoor v. Simply Life India, has protected an individual’s right to their voice, likeness, and persona against AI misuse.

Right to be Forgotten & Privacy: Courts have increasingly recognized that AI-generated non-consensual imagery violates the fundamental Right to Privacy under Article 21.

Copyright & Publicity: Cases involving celebrities like Aishwarya Rai Bachchan highlight how deepfakes infringe upon the commercial value of a person’s identity.

Major Challenges and Concerns

While the draft rules are a positive step, they face several structural and ethical hurdles:

The Burden on Intermediaries: Platforms may become “arbiters of truth.” To avoid legal liability, they might over-censor content (pre-emptive takedowns), harming satire and free speech.

Technical Limitations: Detection tools often lag behind generation tools. By the time a deepfake is detected, the viral damage is often already done.

Lack of Provenance: Simple labels can be cropped out or removed. Without “embedded metadata,” the origin of a file remains untraceable.

Enforcement Gap: India lacks specialized “media-forensic labs” at the scale needed to handle millions of pieces of content daily.

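Among these challenges, the provenance gap is the most tractable technically. The idea behind “embedded metadata” can be sketched in a few lines: instead of a visible label that can be cropped out, a cryptographic fingerprint of the file travels in a signed record. This is a minimal illustrative sketch, not any platform’s actual implementation; the key, record format, and function names are assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the content generator's tool (assumption).
SECRET_KEY = b"issuer-demo-key"

def make_provenance_record(file_bytes: bytes, generator: str) -> dict:
    """Build a signed provenance record for a media file.

    The SHA-256 digest binds the record to the exact bytes, so any edit
    to the file invalidates it; the HMAC signature binds the record to
    the issuer.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    payload = {"sha256": digest, "generator": generator, "synthetic": True}
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_provenance(file_bytes: bytes, record: dict) -> bool:
    """Check both the issuer signature and the file hash."""
    body = json.dumps(record["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    return record["payload"]["sha256"] == hashlib.sha256(file_bytes).hexdigest()
```

The point of the sketch: cropping out a visible watermark changes the file’s bytes, so verification against the embedded record fails, which is exactly the traceability the draft rules seek.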
Global Governance Models: A Comparative View

India’s policy can be strengthened by observing international trends:

The EU AI Act: Focuses on transparency and mandates “machine-readable” markers for AI content.

The US (Take It Down Act): Specifically targets non-consensual intimate imagery with strict criminal liabilities.

The UK (Online Safety Act): Criminalizes the creation of sexually explicit deepfakes, regardless of whether they are shared.

Way Forward

To move from “reactive” to “proactive” governance, India should adopt a four-pillared strategy:

Institutional Oversight (The Single Regulator)

India needs an autonomous Digital/AI Regulator. Regulation is currently fragmented between MeitY (IT Rules) and the Ministry of Information and Broadcasting (broadcast content). A central body would harmonize standards and conduct mandatory audits of AI algorithms.

Technological Traceability (The ‘CrediMark’ System)

Moving beyond visible watermarks, India should adopt Digital Provenance.

Concept: A “digital birth certificate” for every AI file that travels with the content.

Standardization: Aligning with the C2PA (Coalition for Content Provenance and Authenticity) ensures that Indian content is verifiable globally.

Risk-Tiered Accountability

Not all deepfakes are equally dangerous. Obligations should be based on risk:

Low Risk: Entertainment, filters, and satire (require basic disclosure).

High Risk: Elections, state security, and financial markets (require pre-release certification and rapid “kill-switch” protocols).
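The tiering above is essentially a lookup from use case to obligations, which a regulator or platform could encode directly. A minimal sketch, with the categories and duties taken from the text and the data structure itself an assumption:

```python
# Illustrative mapping of the risk tiers described above to obligations.
RISK_TIERS = {
    "low":  {"examples": ["entertainment", "filters", "satire"],
             "obligations": ["basic disclosure label"]},
    "high": {"examples": ["elections", "state security", "financial markets"],
             "obligations": ["pre-release certification",
                             "rapid kill-switch protocol"]},
}

def obligations_for(use_case: str) -> list:
    """Return the duties attached to a use case, defaulting to low risk."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["obligations"]
    return RISK_TIERS["low"]["obligations"]
```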

Regulatory Sandboxes

India should allow startups to develop detection and watermarking tools in a supervised environment. This “safe-innovation loop” ensures that regulation does not stifle the domestic AI industry.

Conclusion

• The battle against deepfakes cannot be won by technology or law alone. It requires a “Trust Ecosystem” where platforms, users, and the state share responsibility.

• For India, establishing clear provenance rules and a dedicated regulator will not only curb misinformation but also position the country as a global leader in “Ethical AI” and digital forensic services.

https://www.insightsonindia.com/2025/10/22/deepfakes-represent-the-new-frontier-of-online-harm-identify-the-technological-challenges-involved-in-detecting-synthetic-media-and-assess-indias-preparedness-to-counter-such-threats/

AI-assisted content, editorially reviewed by Kartavya Desk Staff.
