In recent years, India has seen a sharp rise in AI-generated “deepfake” videos and content, from misleading celebrity videos to politically charged manipulations, raising serious concerns about privacy, defamation, public order and electoral integrity. As smartphone and internet usage spreads fast across urban and rural areas, even highly realistic fakes can be shared widely in minutes.
Because of this, deepfakes in India are no longer a hypothetical risk: they are increasingly being used to exploit public trust, damage reputations, mislead voters, and commit fraud.
Recent high-profile cases in India
Celebrity deepfake misuse
A telling example came in 2025, when veteran Telugu actor Chiranjeevi filed a police complaint after certain websites circulated obscene, AI-generated video clips featuring his likeness, all of them entirely fabricated. Similarly, in early 2024, the main accused in a deepfake case involving actress Rashmika Mandanna was arrested. The video, a clip showing someone entering an elevator, had been morphed using AI to superimpose Mandanna’s face.
Such incidents illustrate the misuse of deepfakes to defame or harass public figures, and highlight how quickly they can spread online, exploiting fame, trust, and the public’s lack of technical awareness.
Political manipulation & election misinformation
Deepfakes have also entered India’s political arena. Ahead of recent elections, videos and audio clips surfaced showing prominent politicians making false statements, including resignations or controversial remarks, that were apparently generated or altered using AI.
For example, synthetic-voice clips attributed to politicians, allegedly making misleading promises or inflammatory statements, were later debunked by fact-checkers.
In one extreme case, a man from Navsari was arrested in 2025 for sharing a deepfake video purportedly showing the country’s Prime Minister in a defamatory scenario; police said the altered video included false content that could threaten national unity and public harmony.
These cases show how deepfakes, when disseminated widely during politically sensitive times, can distort public perception, disrupt democratic discourse, and undermine trust in institutions.
Why India is especially vulnerable
Several factors make deepfakes especially dangerous in India:
- Massive reach of social media and messaging apps: With a large and growing base of smartphone users, even a single fake video can reach thousands or millions within minutes.
- High celebrity influence: The popularity of film stars, politicians, and public figures makes their fake endorsements or manipulations especially persuasive. Studies report that in 2025, about 90% of Indians were exposed to AI-driven celebrity-endorsement fakes.
- Low media literacy in many segments: Especially outside urban tech-savvy circles, the ability to critically evaluate video content remains limited. This makes people more susceptible to believing fakes.
- Linguistic and cultural factors: India’s diversity and multilingual media consumption also make it easier for misleading content (in regional languages, with dubbed voices, etc.) to target and influence specific populations. Recent academic research is already creating dedicated datasets (like a Hindi deepfake dataset) to help build detection tools suitable for India’s linguistic context.
Detection, forensics and limits: Can technology save us?
The arms race between deepfake creators and defenders is on. Forensic experts in India have started identifying tell-tale signs: AI-generated faces often lack natural light variation, natural micro-expressions (such as blinking or subtle facial movements), and the photo-response non-uniformity (PRNU) “fingerprint” left by a real camera sensor. Audio deepfakes may lack background noise or realistic ambient sound.
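To make the PRNU idea concrete, here is a toy Python sketch of sensor-fingerprint matching. It is an illustration only: real forensic pipelines use wavelet denoising and peak-to-correlation-energy statistics, whereas this sketch substitutes a simple box filter and plain Pearson correlation, and all the data (the fingerprint and both frames) is synthetic.

```python
import numpy as np

def noise_residual(img, k=3):
    """Crude high-pass residual: the image minus a k-by-k box-filter average."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + h, dx:dx + w]
    smooth /= k * k
    return img - smooth

def fingerprint_correlation(img, fingerprint):
    """Pearson correlation between a frame's noise residual and a known
    sensor fingerprint. Genuine frames from that camera should score
    noticeably higher than synthetic frames that never touched the sensor."""
    r = noise_residual(img).ravel()
    f = fingerprint.ravel()
    r = (r - r.mean()) / (r.std() + 1e-9)
    f = (f - f.mean()) / (f.std() + 1e-9)
    return float(np.dot(r, f) / r.size)

# Demo with made-up data: an invented fingerprint, a "genuine" frame that
# carries it, and a "synthetic" frame that does not.
rng = np.random.default_rng(0)
fingerprint = rng.normal(0.0, 1.0, (64, 64))
scene = rng.normal(128.0, 20.0, (64, 64))
genuine = scene + 8.0 * fingerprint
synthetic = rng.normal(128.0, 20.0, (64, 64))
```

In this toy setup the genuine frame correlates clearly with the fingerprint while the synthetic frame sits near zero; real deepfake frames are far harder to separate, which is one reason detection can only be one layer of defence.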
Still, detection tools are imperfect. As deep-learning approaches improve, some deepfakes become increasingly hard to detect even for advanced forensic systems. This means technical detection must be part of a broader, multi-layered response, not the only line of defence.
Regulatory & policy steps in India
The Indian government has begun responding. In 2025, proposals were introduced that would require AI and social-media companies to label AI-generated content clearly. Under these draft rules, AI-generated visual media must carry a visible marker, and audio clips must be suitably flagged. Platforms would also need to maintain metadata traceability to help authorities investigate and remove harmful deepfakes.
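The draft rules describe labels and metadata traceability only at a high level. As an illustration of the underlying idea, the hypothetical snippet below attaches a provenance record whose hash binds a label to the exact media bytes; names like `make_provenance_record` are invented for this sketch and are not part of any proposed standard.

```python
import hashlib

def make_provenance_record(media_bytes, generator, label="AI-generated"):
    # Hypothetical record: the SHA-256 hash ties the label to the exact
    # file contents, so the label cannot simply be moved to other media.
    return {
        "label": label,
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

def verify_record(media_bytes, record):
    # A platform or investigator recomputes the hash; any edit to the
    # media invalidates the record.
    return record["sha256"] == hashlib.sha256(media_bytes).hexdigest()
```

Note that a bare hash only detects mismatch between a record and a file; real provenance standards (such as C2PA) additionally sign the record cryptographically, so a tamperer cannot simply re-issue a fresh record for edited media.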
Additionally, cyber-crime and law-enforcement agencies have started registering FIRs under relevant laws, such as the Information Technology (IT) Act and criminal statutes, against persons creating or distributing fake videos showing celebrities or politicians.
However, significant gaps remain: enforcement is slow, detection tools are not yet widely deployed, victims often face social stigma, and many users are unaware that content may be fake. A comprehensive, coordinated effort is urgently needed.
What could work
Given the complexity of the problem, here are a set of practical, India-tailored measures that could help curb deepfake harm:
- Mandatory AI-labeling and provenance standards: As proposed, all AI-generated or AI-modified content should carry a clear, visible label indicating its synthetic origin, and metadata traceability must be enforced.
- Rapid takedown mechanisms and platform accountability: Social media and video platforms should be legally mandated to remove non-consensual, defamatory or politically manipulative content quickly. Platforms should be required to build or integrate advanced detection/forensic tools specialized for Indian languages and media contexts.
- Strengthening cyber-crime capacity: Law-enforcement must be equipped with better training, forensic tools, and faster response protocols to investigate deepfake cases. Victims must be supported sensitively, especially in cases of defamation or non-consensual pornographic content.
- Digital literacy campaigns: Government, civil society, educational institutions and media organisations should launch public campaigns, especially targeted at young people and first-time internet users, to help them learn to verify content, check sources, and report suspicious media.
- Language- and region-specific detection datasets: Development of deepfake datasets and detection models for Indian languages and contexts (as some researchers have started) must be encouraged and scaled up.
- International and cross-platform cooperation: Since deepfake media often crosses borders and travels across encrypted platforms, India must cooperate with global stakeholders, share best practices, and contribute to international norms or treaties for synthetic-media governance.
Conclusion
Deepfakes in India are not a distant problem: they are here, now, and expanding fast. From harming individuals’ reputations to influencing political discourse, from defrauding unsuspecting citizens to threatening public order, the misuse of AI-generated media poses serious challenges to privacy, democracy, and trust.
But deepfakes are not an unstoppable force. With a well-rounded approach combining technology, regulation, public awareness, and cross-sector cooperation, India can significantly reduce their harm. This will require not just legal and technical fixes, but a cultural shift: from passive consumption of digital media to active, critical engagement. As we navigate the generative-AI age, the question is not only “What can be faked?” but also “How do we safeguard truth, dignity and democratic values in a world where video can lie?” Ensuring that the answer is “with vigilance, responsibility, and ethical governance” is one of the most vital tasks ahead.