mitmedialab/ai-false-memories

Conversational AI Powered by Large Language Models Amplifies Human False Memories

Samantha Chan* (MIT Media Lab), Pat Pataranutaporn* (MIT Media Lab), Aditya Suri* (MIT Media Lab), Wazeer Zulfikar (MIT Media Lab), Pattie Maes (MIT Media Lab), and Elizabeth Loftus (University of California, Irvine)

*Equal contributions

Corresponding Authors

Samantha Chan ([email protected]) & Pat Pataranutaporn ([email protected])

Abstract

This paper investigates AI's impact on false memories: recollections of events that did not occur or that deviate from how they actually occurred. The study explores false memory induction through suggestive questioning in human-AI interactions, simulating crime witness interviews conducted by AI systems. Four experimental conditions were used: a control, a survey-based condition, a pre-scripted chatbot condition, and a generative chatbot condition using a large language model (LLM). Participants (N=200) were randomly assigned to conditions in a two-phase study. In Phase 1, they watched a crime scene video, then interacted with their assigned AI interviewer or survey, answering questions about the video, including five misleading ones. False memories were assessed immediately afterward. Phase 2, conducted one week later, evaluated the persistence of those false memories. Results showed that the generative chatbot condition led to significantly higher false memory formation: it induced over 3 times more immediate false memories than the control and nearly 1.7 times more than the survey-based method. The study also explored moderating factors influencing false memory formation. The findings highlight the potential risks of using advanced AI systems in sensitive contexts such as police interviews and emphasize the need for further research and ethical considerations.

Repository Structure

├── Data/
│   ├── Raw/
│   ├── Processed/
│   └── Code/
├── Prototype/
│   ├── Survey/
│   ├── Pre-Scripted_Chatbot/
│   └── Generative_Chatbot/
└── Supplementary/
    ├── Survey/
    └── Video/

Repository Contents

Data

  • Raw: Original, unprocessed, and de-identified data collected during the study.
  • Processed: Cleaned and formatted data used for analysis.
  • Code: Scripts and notebooks used for data analysis and visualization.

Prototype

  • Survey: Materials for the survey-based condition.
  • Pre-Scripted Chatbot: Implementation of the pre-scripted chatbot.
  • Generative Chatbot: Implementation of the LLM-based generative chatbot (a simplified sketch of such an interviewer loop follows this list).
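
For illustration, here is a minimal, hypothetical sketch of how an LLM-driven interviewer loop of this kind might be built with the OpenAI Python client. This is not the repository's actual implementation: the model name, system prompt, and questions below are assumptions made for the example; the real code lives in Prototype/Generative_Chatbot/.

    # Hypothetical sketch, not the study's code: the model, prompt, and
    # questions are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    SYSTEM_PROMPT = (
        "You are a police interviewer questioning a witness about a video "
        "they just watched. Respond with a brief, supportive acknowledgement "
        "of the witness's answer."
    )

    # Placeholder questions; in the study, five of the questions were misleading.
    QUESTIONS = [
        "Can you describe what happened in the video?",
        "What was the suspect wearing?",
    ]

    def run_interview(questions):
        """Ask each question, collect the answer, and have the LLM acknowledge it."""
        history = [{"role": "system", "content": SYSTEM_PROMPT}]
        for question in questions:
            print(f"Interviewer: {question}")
            history.append({"role": "assistant", "content": question})
            answer = input("Witness: ")
            history.append({"role": "user", "content": answer})
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model, not necessarily the one used
                messages=history,
            )
            feedback = reply.choices[0].message.content
            print(f"Interviewer: {feedback}")
            history.append({"role": "assistant", "content": feedback})
        return history

    if __name__ == "__main__":
        run_interview(QUESTIONS)

The key design point is that the feedback turn is generated rather than scripted, which is what distinguishes this condition from the pre-scripted chatbot.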

Supplementary

  • Survey: Supplementary survey materials.
  • Video: Video materials used in the study.
