A short, hands-on crash course on Responsible AI created for Girlset (Concordia).
- slides/ — workshop slide deck
- activities/ — facilitation notes
- Demystify where AI shows up in everyday life
- Introduce key risk areas (healthcare bias, hiring algorithms, facial recognition)
- Facilitate reflection on who builds AI and why that matters
- End with a quick activity inspired by Affecting Machines to humanize algorithm design
- Audience: high-school / undergrad beginners
- Group size: 10–40
- Room: projector + speakers + sticky notes
- Materials: printed “If I built AI…” slips (Slide 28), trading cards (Slide 21, optional)
- Flow:
- Icebreaker & “Can you trust AI?” (Slides 1–5, 10 min)
- Everyday AI + GAN demo (Slides 6–16, 10–12 min)
- Case studies & small reflections (Slides 17–20, 15–20 min)
- Targeted ads + design-a-fix (Slides 24–25, 10–12 min)
  - Airport screening fairness (Slides 26–27, 8–10 min; an optional facilitator demo sketch follows this list)
- Personal pledges (Slides 28–29, 5–8 min)
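
For the airport screening fairness activity (Slides 26–27), a minimal Python sketch facilitators can run or simply talk through. It uses synthetic data, and the group names, noise levels, and threshold are illustrative assumptions rather than part of the workshop materials; the point is only that the same rule, applied to everyone, can flag innocent people from one group more often than another.

```python
import random

random.seed(0)

# Synthetic travellers: each has a group label, a true label
# (0 = not a threat, 1 = genuinely flag-worthy), and a model "risk score".
# The score is deliberately noisier for group B, mimicking a model
# trained on less representative data for that group.
def make_travellers(group, noise, n=1000):
    people = []
    for _ in range(n):
        truth = 1 if random.random() < 0.02 else 0
        score = truth * 0.7 + random.gauss(0, noise)
        people.append({"group": group, "truth": truth, "score": score})
    return people

travellers = make_travellers("A", noise=0.10) + make_travellers("B", noise=0.30)

THRESHOLD = 0.5  # anyone scoring above this gets pulled aside for extra screening

def false_positive_rate(people):
    # Among people who are NOT threats, how many does the model flag anyway?
    innocent = [p for p in people if p["truth"] == 0]
    flagged = [p for p in innocent if p["score"] > THRESHOLD]
    return len(flagged) / len(innocent)

for group in ("A", "B"):
    subset = [p for p in travellers if p["group"] == group]
    print(f"Group {group}: false positive rate = {false_positive_rate(subset):.1%}")
```

Running it typically shows group B's false positive rate well above group A's, which sets up the discussion question: who bears the cost of a model that is less certain about some groups than others?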
- Facial recognition & wrongful arrest: ACLU case page + settlement (Robert Williams).
- https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest
- Settlement docs + updates.
- Hiring algorithm bias: Reuters on Amazon’s resume screener (2018).
- Healthcare triage bias: Obermeyer et al., Science (2019) + explainer.
- Original Girlset materials © 2025 Sara Jameel — licensed under CC BY 4.0.
- Adapted content from third parties is used with permission and may carry different terms.