Today we’re releasing a report, *Make it Real*, co-authored with the RATI Foundation, which focuses on the rise of AI-generated content, colloquially called ‘deepfakes’, in online harassment.
Drawing from cases reported to Rati's helpline, Meri Trustline, the report reveals a concerning trend: while the public conversation often centers on celebrities and politicians targeted through AI, a quieter, more personal crisis is also unfolding, as ordinary women and gender minorities increasingly face threats involving manipulated images and fabricated videos, often at the hands of acquaintances. These cases seldom reach the media, family circles, or law enforcement, owing to deep stigma, fear, and trauma.
Unlike celebrities and public figures, who face harassment in the open and retain some mediated control over their reputations, many survivors reaching out to Meri Trustline experience harassment in private spaces. Some of the salient findings of the report are:
- The majority of cases involving digitally manipulated content appear to involve some AI-based manipulation.
- Unlike cases of NCII (non-consensual intimate imagery), in the majority of cases involving AI-generated content the perpetrator and target were not close in the physical world. AI is deployed when the perpetrator does not have access to private information about the individual.
- While digitally manipulated content in general is disproportionately targeted at women and gender minorities (72% of cases targeted women), nearly all cases involving AI (92%) targeted women.
- The Trustline team has found that the greatest challenge lies in the overall reporting architecture a platform provides. On platforms where reporting any content is difficult, such as X, reporting AI-generated content is also difficult. On the other hand, platforms with more expansive definitions of harmful content are more likely to address AI-generated content. Meme-like content, which has been a grey zone in platform policies, remains a grey zone even when it is AI-generated.
- Reporting the content under the DMCA for copyright infringement has often proven to be more effective in taking down offending content than framing and reporting the abuse under the category of gender-based harm.
- We find the existing legal architecture in India to be sufficient to account for the risks of AI-generated content. The key barriers lie in accessing the existing legal provisions. There is a need to build the capacity of personnel across justice and enforcement systems to recognize and respond to manipulated content in ways that are scientific, sensitive, and free of victim-blaming narratives.
If you are being harassed online or know someone who is, reach out and seek support:
Call or WhatsApp Meri Trustline at 6363176363, Monday to Friday, 9am to 5pm.