
Commit 58ba1a0

feat: new content for make it real report

1 parent 30ffe98 commit 58ba1a0

File tree

1 file changed: +22 -11 lines changed


src/blog/make-it-real.mdx

Lines changed: 22 additions & 11 deletions
@@ -12,22 +12,33 @@ import {ExternalLink} from "../components/atomic/TattleLinks"
 
 ![](../images/hero-rati-report-make-it-real.png)
 
-Today we’re releasing a report, Make it Real, co-authored with the RATI Foundation, which focuses on the rise of AI-generated content, colloquially called ‘deepfakes’, in online harassment.
+Today Tattle and Rati are releasing Make it Real, a report that examines how AI-generated content, popularly known as ‘deepfakes’, is impacting and reshaping online harassment.
 
-Drawing from cases reported to Rati's helpline Meri Trustline, the report reveals a concerning trend: while the public conversation often centers on celebrities and politicians targeted through AI, a quieter, more personal crisis is also unfolding. These cases seldom reach the media, family circles, or law enforcement, owing to deep stigma, fear, and trauma.
+Drawing from cases reported to Rati's helpline Meri Trustline, the report reveals a concerning trend: while media headlines often center on celebrities and politicians targeted through AI, a more personal crisis is also unfolding. Ordinary survivors are being targeted through images that are artificially generated but capable of real harm. Survivors’ reputations are attacked and their consent is erased through technology. These violations are muted by shame, fear and trauma: the incidents are rarely revealed even to close family circles, let alone featured in larger discourse.
 
-Unlike celebrities and public figures who face harassment in the open and possess some mediated control over their reputation, many survivors reaching out to Meri Trustline experience harassment in private spaces. Some of the salient findings of the report are:
-- The majority of digitally manipulated cases appear to contain some AI-based manipulation.
-- Unlike the cases of NCII, in the majority of the cases involving AI-generated content the perpetrator and target were not close in the physical world. AI is deployed when the perpetrator doesn’t have access to private information about the individual.
-- While digitally manipulated content is also disproportionately targeted towards gender minorities (72% of the cases were targeted towards women), nearly all cases (92%) involving AI were targeted towards women.
-- The Trustline team has found that the greatest challenge lies in the overall reporting architecture provided by a platform. On platforms where reporting all content is difficult, such as X, reporting AI-generated content is also difficult. On the other hand, platforms with more expansive definitions of harmful content are more likely to address AI-generated content. Meme-like content, which has been a grey zone in platform policies, remains a grey zone even with AI-generated content.
-- Reporting the content under the DMCA for copyright infringement has often proven to be more effective in taking down offending content than framing and reporting the abuse under the category of gender-based harm.
-- We find the existing legal architecture in India to be sufficient for accounting for the risks of AIGC. The key barriers are in accessing the existing legal provisions. There is a need to build the capacity of personnel across justice and enforcement systems to recognize and respond to manipulated content, in ways that are scientific, sensitive and clear of victim-blaming narratives.
+This report is based on the courageous calls that some survivors made to the Trustline. The salient findings of the report are:
 
+- **The Majority of Abusive Digital Manipulation Is AI-Generated.** Digital manipulation existed long before AI, through manual edits, Photoshop or crude alterations, but AI has transformed its speed and realism. Today, the majority of manipulated content reported shows some form of AI-generated or AI-enhanced imagery.
 
+- **AI Creates Access and Violation Where No Offline Contact or Consent Exists.**
+In the majority of cases involving AI-generated sexual content, the perpetrator and target had no prior connection. AI was used precisely because the abuser lacked real-world access to the victim’s private images.
 
-If you are being harassed online or know someone who is, reach out and seek support
-Call or WhatsApp Meri Trustline 6363176363 Monday to Friday 9am to 5pm.
+- **AI Amplifies Misogyny, Placing Women and Marginalized Genders at Greater Risk.**
+While digitally manipulated content is disproportionately targeted towards gender minorities (72% of the cases targeted women), nearly all cases (92%) involving AI targeted women.
+
+- **Platform Safety Systems as Barriers, Not Safeguards.**
+The biggest obstacle to securing an adequate response to a safety issue is the overall reporting system provided by a platform. On platforms where reporting any content is difficult, such as X, reporting AI-generated content is also difficult. Platforms that define harm more broadly are more likely to address AI-generated content. Meme-like content, sitting in a platform-policy grey zone, continues to thrive even when AI-generated.
+
+- **Copyright as a Personal Safety Tool.**
+Reporting content under the DMCA for copyright infringement has often proven more effective in taking down offending content than framing and reporting the abuse under the category of gender-based harm.
+
+- **Law Is Not the Gap; Access to Justice Is.**
+The existing legal architecture in India can account for the risks of AI-generated content. The key barriers lie in applying existing legal provisions to gain recourse. There is a need to build the capacity of personnel across justice and enforcement systems to recognize and respond to manipulated content in ways that are scientific, sensitive and free of victim-blaming narratives.
+
+This report aims to humanize these less visible experiences of AI-generated online abuse and provide new evidence on how this emerging technology is shaping digital vulnerabilities.
+
+
+If you are being harassed online or know someone who is, reach out and seek support. Call or WhatsApp Meri Trustline - 6363176363, Monday to Friday, 9am to 5pm.
<Box className="w-fit mt-4 px-4 py-2 rounded-lg" background="visuals-1">
