Digital Deception and Gendered Harm: The Deepfake Dilemma In Cyber Harassment

Written by Snekha G,
Intern-Lex Lumen Research Journal,
June 2025

Imagine a high school student who discovers that her photo has been edited into suggestive poses she never struck and circulated on social media. The images carry her face, and her clothes have been digitally stitched into explicit scenes using AI. They spread among people she knows and strangers alike; she is questioned, whispers turn into judgement, and no one is sure how to respond to something that looks so real. The emotional fallout, however, is undeniable. And this is not even a video; it is a single image weaponized by deepfake technology. This is becoming a disturbing form of digital abuse.

WHAT ARE DEEPFAKES?

Deepfakes are digitally manipulated media, including video, audio, and images, produced using artificial intelligence (AI) techniques such as Generative Adversarial Networks (GANs). These tools can convincingly swap faces, mimic voices, and generate entirely fictional scenarios. What makes deepfakes dangerous is their realism: viewers often cannot distinguish authentic from manipulated content. Deepfake technology was initially developed for harmless experimentation, but it has since evolved into a weapon of digital abuse. With free apps and open-source tools, even individuals with minimal technical knowledge can create convincing fakes, making the technology both accessible and dangerous.
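The adversarial idea behind GANs, in which a generator and a discriminator improve by competing against each other, can be illustrated with a toy numeric sketch. This is not a real image model; it is a one-dimensional illustration under simplifying assumptions, and all names (`toy_gan`, `real_mean`) are made up for the example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def toy_gan(real_mean=5.0, steps=2000, lr=0.05, seed=0):
    """Tiny 1-D 'GAN': the generator learns a single number (the mean of its
    fake samples) while a logistic discriminator learns to tell real from fake."""
    rng = random.Random(seed)
    g = 0.0          # generator parameter: mean of its fake samples
    w, b = 0.0, 0.0  # discriminator parameters: D(x) = sigmoid(w*x + b)

    for _ in range(steps):
        x_real = rng.gauss(real_mean, 1.0)
        x_fake = rng.gauss(g, 1.0)

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        for x, label in ((x_real, 1.0), (x_fake, 0.0)):
            p = sigmoid(w * x + b)
            w += lr * (label - p) * x
            b += lr * (label - p)

        # Generator step: nudge g so its samples fool the discriminator.
        p_fake = sigmoid(w * x_fake + b)
        g += lr * (1.0 - p_fake) * w

    return g

print(toy_gan())  # the generator's mean drifts toward the real mean (~5.0)
```

The same dynamic, scaled up to millions of parameters operating on pixels and audio samples rather than a single number, is what lets deepfake generators produce media realistic enough to fool human viewers.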

GENDERED IMPACT: WOMEN AS PRIMARY TARGETS 

Women are disproportionately targeted by deepfake-enabled cyber harassment and bullying. According to a 2021 report by Sensity AI[1], over 90% of deepfake pornography features women who never consented to such depictions.

These videos are often used to shame, silence, or blackmail women, particularly those in public-facing roles such as journalism, politics, or entertainment.

Deepfake abuse is not confined to high-profile figures such as actresses; ordinary individuals, particularly women, are also abused through these technologies. As our daily lives grow more entangled with digital networks, it becomes clear that technological growth alone does not define progress; the strength of the protections surrounding that technology plays an equally vital role. This raises the question: do advancing technologies actually protect individuals?

This form of deepfake abuse is not just about digital manipulation; it is about patriarchal control. By fabricating explicit content, perpetrators aim to undermine women’s credibility, autonomy, and public presence. Victims often face social ostracism, professional setbacks, and emotional trauma, even after the content is proven fake.

MEN AS VICTIMS: A GROWING CONCERN

While women remain the primary targets, men are increasingly affected by deepfakes, albeit in different ways. Male victims are often impersonated in financial scams, political misinformation, or fabricated confessions. For instance, a deepfake video might show a male executive endorsing a fraudulent investment scheme, leading to reputational damage and legal complications. Deepfakes exploit gendered vulnerabilities differently: women through sexualized defamation, men through reputational and financial sabotage.


PUBLIC MISINFORMATION AND PANIC

Beyond individual harm, deepfakes are increasingly used to spread misinformation and incite public panic. In India, a fabricated video showing a train derailment near Lucknow went viral on WhatsApp[2]. The visuals were realistic: overturned coaches, injured passengers, and emergency sirens. Panic spread rapidly, and local authorities were overwhelmed with calls. Hours later, it was confirmed that no such incident had occurred. The video was AI-generated.

Such synthetic disaster content can:

  • Overwhelm emergency services with false alarms.

  • Trigger public panic, especially in rural or low-connectivity areas.

  • Undermine trust in official communication channels.

PSYCHOLOGICAL TOLL AND SOCIAL CONSEQUENCES

Beyond legal and technological dimensions, the emotional and psychological impact of deepfake harassment is profound, especially for young victims. Survivors often experience anxiety, depression, social withdrawal, and academic or professional setbacks. The fear of being disbelieved or blamed can silence them, compounding the trauma.

In tightly knit communities or conservative settings, the social stigma attached to even fabricated content can be devastating. Victims may face isolation, character assassination, or even forced relocation. For many, the harm persists long after the content is removed because the internet rarely forgets.

Addressing deepfakes, therefore, isn’t just about detection or punishment. It’s about restoring dignity, rebuilding trust, and creating safe digital spaces where individuals, especially women and minors, are not punished for crimes committed against them.

CAN INDIAN LAWS CATCH UP WITH DEEPFAKES?

India’s legal framework is still catching up with the sophistication of deepfake technology. The Information Technology Act, 2000, provides some protection under:

  • Section 67 & 67A: Publishing or transmitting obscene or sexually explicit material.

  • Section 66C: Identity theft (relevant for impersonation via deepfakes).

  • Section 66D: Cheating by personation using computer resources.

  • Section 66E: Violation of privacy (capturing, publishing private images).

The Bharatiya Nyaya Sanhita, 2023, has yet to introduce specific clauses targeting deepfakes, leaving a critical gap in enforcement. Meanwhile, the absence of clear definitions around consent, digital impersonation, and AI-generated harm complicates prosecution.

Courts have begun recognizing the seriousness of digital impersonation, but without statutory backing, enforcement remains inconsistent. The lack of cross-border cooperation and digital evidence protocols further weakens the response to deepfake cyber harassment.

To address the deepfake dilemma holistically we must:

  • Introduce specific legislation criminalizing malicious deepfakes.

  • Establish fast-track cybercrime units trained in AI forensics.

  • Enforce platform accountability through legal mandates requiring the timely detection and takedown of synthetic content.

  • Promote digital literacy and consent-based education, especially in schools and workplaces.
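On the platform-accountability point above, one common building block for takedown systems is hash matching: once a file is reported as abusive, its fingerprint is stored so that re-uploads can be blocked automatically (industry hash-sharing initiatives work on this principle). The sketch below is a minimal, hypothetical illustration; real platforms use perceptual hashes that survive re-encoding and minor edits, whereas the cryptographic hash here only matches byte-identical copies. All class and function names are invented for the example.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact fingerprint of an uploaded file. Production systems use
    perceptual hashes so slightly edited copies still match; SHA-256
    here matches only byte-identical re-uploads."""
    return hashlib.sha256(data).hexdigest()

class TakedownRegistry:
    """Hypothetical registry of fingerprints reported as abusive content."""

    def __init__(self):
        self._blocked = set()

    def report(self, data: bytes) -> None:
        # A victim or moderator reports the file; store its fingerprint.
        self._blocked.add(fingerprint(data))

    def should_block(self, data: bytes) -> bool:
        # Check every new upload against the registry before publishing.
        return fingerprint(data) in self._blocked

registry = TakedownRegistry()
registry.report(b"fake-image-bytes")
print(registry.should_block(b"fake-image-bytes"))  # True: re-upload blocked
print(registry.should_block(b"other-image"))       # False: unknown file passes
```

A legal mandate for "timely detection and takedown" would, in practice, require platforms to maintain registries like this at scale, share fingerprints with one another, and pair them with detection models for previously unseen synthetic content.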

Until then, victims, particularly women and minors, remain vulnerable in a legal grey zone where justice is delayed and dignity is digitally dismantled.

In conclusion, deepfakes are not just a technological threat; they are a societal mirror, reflecting our biases, vulnerabilities, and gaps in justice. While women remain disproportionately affected, the growing impact on men and teens underscores the need for a gender-inclusive, rights-based approach to digital safety. In the battle against digital deception, we must combine legal reform, technological innovation, and cultural awareness. Only then can we reclaim authenticity, protect dignity, and ensure that truth is not a casualty of progress.

[1] Juliane Reuther, “Digital Rape: Women Are Most Likely to Fall Victim to Deepfakes”, https://www.thedeepfake.report/en/09digitalrapeen

[2] “Rumour has it: On the Lucknow-Mumbai Pushpak Express accident”, The Hindu, https://www.thehindu.com/opinion/editorial/rumourhasitonthelucknowmumbaipushpakexpressaccident/article69132602.ece
