Artificial Intelligence, Science and Society

  • ECTS Credits: 5

Course content

Classic approaches to data analysis use a static procedure for both collecting and processing data. Modern approaches deal with the adaptive procedures that are almost always used in practice.

In this course you will learn how to design systems that adaptively collect and process data in order to make decisions autonomously or in collaboration with humans.

The course applies core principles from machine learning, artificial intelligence and databases to real-world problems in safety, reproducibility, causal reasoning, privacy and fairness.

Prerequisites

Essential

  • Mathematics R1+R2
  • Python programming (e.g. IN1900 – Introduction to Programming with Scientific Applications).

Recommended

  • Elementary knowledge of probability and statistics (STK1000/STK1100)
  • Elementary calculus and linear algebra (MAT1100 or MAT1110)

Learning outcomes

There are two types of learning outcomes: those at the core of the course, and the methodologies used as part of it.

Core learning outcomes:

  1. Ensuring reproducibility in both science and AI development.
  2. Recognising privacy issues and being able to mitigate them using appropriate formalisms.
  3. Mitigating potential fairness and discrimination issues when algorithms are applied at scale.
  4. Performing inference when there are causal elements.
  5. Developing adaptive experimental design protocols for online and scientific applications.
  6. Understanding when it is possible to provide performance guarantees for AI algorithms.

AI learning outcomes:

  1. Understanding how to use data for learning, estimation and testing to create reproducible research.
  2. Understanding Bayesian inference and decision theory and being able to describe dependencies with graphical models.
  3. Understanding neural networks and how to apply stochastic optimisation algorithms.
  4. Understanding and using differential privacy as a formalism.
  5. Understanding causal inference, interventions and counterfactuals.
  6. Understanding the recommendation problem in terms of both modelling and decision making.

Course content

The course is split into six modules, which should be taken in sequence.

Module 1. Reproducibility: bootstrapping, Bayesian inference, decision problems, false discovery, confidence bounds.
Module 2. Privacy: databases, k-anonymity, graphical models, differential privacy.
Module 3. Fairness: decision diagrams, conditional independence, meritocracy, discrimination.
Module 4. The web: recommendation systems, clustering, latent variable models.
Module 5. Causality: interventions and counterfactuals.
Module 6. Adaptive experiment design: bandit problems, stochastic optimisation, Markov decision processes, dynamic programming.

Examination

There are 2 projects (formally, take-home exams), each split into 3 parts. Each part takes 2-4 hours and is partly done in a tutorial session.

Each question is weighted equally within each take-home exam, so that by correctly answering the elementary parts of each question, students can be guaranteed a passing grade. Each take-home exam counts for 40% of the final score. Students also sit a final exam, which counts for the remaining 20%.

Criteria for full marks in each part of the exam are the following.

  1. Documenting the work in a way that enables reproduction.
  2. Technical correctness of their analysis.
  3. Demonstrating that they have understood the assumptions underlying their analysis.
  4. Addressing issues of reproducibility in research.
  5. Addressing ethical questions where applicable and, if not, clearly explaining why they do not apply.
  6. Consulting additional resources beyond the source material with proper citations.

The following marking guidelines describe what one would expect from students attaining each grade.

A

  1. Submission of a detailed report from which one can definitely reconstruct their work without referring to their code. There should be no ambiguities in the described methodology. Well-documented code where design decisions are explained.
  2. Extensive analysis and discussion. Technical correctness of their analysis. Nearly error-free implementation.
  3. The report should detail what models are used and what the assumptions behind them are. The conclusions of the report should include appropriate caveats. When the problem includes simple decision making, the optimality metric should be well-defined and justified. Similarly, well-defined optimality criteria should be given for the experiment design, when necessary. The design should be (to some degree of approximation, depending on problem complexity) optimal according to these criteria.
  4. Appropriate methods to measure reproducibility. Use of cross-validation or hold-out sets to measure performance. Use of an unbiased methodology for algorithm, model or parameter selection. Appropriate reporting of a confidence level (e.g. using bootstrapping) in their analytical results. Relevant assumptions are mentioned when required.
  5. When dealing with data relating to humans, privacy and/or fairness should be addressed. A formal definition of privacy and/or fairness should be selected, and the resulting policy should be examined.
  6. The report contains some independent thinking, or includes additional resources beyond the source material with proper citations. The students go out of their way to research material and implement methods not discussed in the course.

B

  1. Submission of a report from which one can plausibly reconstruct their work without referring to their code. There should be no major ambiguities in the described methodology.
  2. Technical correctness of their analysis, with a good discussion. Possibly minor errors in the implementation.
  3. The report should detail what models are used, as well as the optimality criteria, including for the experiment design. The conclusions of the report must contain appropriate caveats.
  4. Use of cross-validation or hold-out sets to measure performance. Use of an unbiased methodology for algorithm, model or parameter selection.
  5. When dealing with data relating to humans, privacy and/or fairness should be addressed. While an analysis of this issue may not be performed, there is a substantial discussion of the issue that clearly shows understanding by the student.
  6. The report contains some independent thinking, or the students mention other methods beyond the source material, with proper citations, but do not further investigate them.

C

  1. Submission of a report from which one can partially reconstruct most of their work without referring to their code. There might be some ambiguities in parts of the described methodology.
  2. Technical correctness of their analysis, with an adequate discussion. Some errors in a part of the implementation.
  3. The report should detail what models are used, as well as the optimality criteria and the choice of experiment design. Analysis caveats are not included.
  4. Either use of cross-validation or hold-out sets to measure performance, or use of an unbiased methodology for algorithm, model or parameter selection - but in a possibly inconsistent manner.
  5. When dealing with data relating to humans, privacy and/or fairness are addressed superficially.
  6. There is little mention of methods beyond the source material or independent thinking.

D

  1. Submission of a report from which one can partially reconstruct most of their work without referring to their code. There might be serious ambiguities in parts of the described methodology.
  2. Technical correctness of their analysis with limited discussion. Possibly major errors in a part of the implementation.
  3. The report should detail what models are used, as well as the optimality criteria. Analysis caveats are not included.
  4. Either use of cross-validation or hold-out sets to measure performance, or use of an unbiased methodology for algorithm, model or parameter selection - but in a possibly inconsistent manner.
  5. When dealing with data relating to humans, privacy and/or fairness are addressed superficially or not at all.
  6. There is little mention of methods beyond the source material or independent thinking.

E

  1. Submission of a report from which one can obtain a high-level idea of their work without referring to their code. There might be serious ambiguities in all of the described methodology.
  2. Technical correctness of their analysis with very little discussion. Possibly major errors in only a part of the implementation.
  3. The report might mention what models are used or the optimality criteria, but not in sufficient detail and caveats are not mentioned.
  4. Use of cross-validation or hold-out sets to simultaneously measure performance and optimise hyperparameters, but possibly in a way that introduces some bias.
  5. When dealing with data relating to humans, privacy and/or fairness are addressed superficially or not at all.
  6. There is no mention of methods beyond the source material or independent thinking.

F

  1. The report does not adequately explain their work.
  2. There is very little discussion and major parts of the analysis are technically incorrect, or there are errors in the implementation.
  3. The models used might be mentioned, but not any other details.
  4. There is no effort to ensure reproducibility or robustness.
  5. When applicable: Privacy and fairness are not mentioned.
  6. There is no mention of methods beyond the source material or independent thinking.

Motivation

Algorithms from artificial intelligence are becoming ever more complicated and are used in manifold ways in today's society, from prosaic applications like web advertising to scientific research. Their indiscriminate use creates many externalities that can, however, be precisely quantified and mitigated.

The purpose of this course is to familiarise students with the societal and scientific effects of using artificial intelligence at scale. It will equip students with the requisite knowledge to apply state-of-the-art machine learning tools to a problem while recognising potential pitfalls. The focus of the course is not on explaining a large set of models. It uses three basic types of models for illustration: k-nearest-neighbour, neural networks and probabilistic graphical models, with an emphasis on the last for interpretability and the first for lab work. The focus is instead on the issues of reproducibility, data collection and experiment design, privacy, fairness and safety when applying machine learning algorithms. For that reason, we will cover technical topics not typically covered in an AI course: false discovery rates, differential privacy, fairness, causality and risk. Some familiarity with machine learning and artificial intelligence concepts is helpful, but not necessary.


Schedule

2024

20 Sep | L1. Reproducibility, kNN
27 Sep | L1. Python, scikit-learn, classification, holdouts, overfitting
4 Oct | L2. Scikit-learn: bootstrapping, XV, project introduction
11 Oct | L2. Classification, Decision Problems
18 Oct | L3. Decisions, inference, optimisation
25 Oct |
9 Sep | L4. Bayesian inference tutorial
10 Sep | A4. Project Lab
16 Sep | L5. Databases, anonymity, privacy
17 Sep | A5. DB tutorial/distributed computing
23 Sep | L6. Differential privacy
24 Sep | A6. Project DP tutorial: Laplace mechanism
25 Sep | Project 1 Deadline 1
30 Sep | L7. Fairness and graphical models
1 Oct | A7. Production ML: SageMaker/Pipelines
7 Oct | L8. Estimating conditional independence
8 Oct | A8. Project: fairness
9 Oct | Project 1 Deadline 2
14 Oct | L9. Recommendation systems [can be skipped?]
15 Oct | A9. Restful APIs
21 Oct | L10. Latent variables and importance sampling
22 Oct | A10. An example latent variable model?
23 Oct | Project 1 Final Deadline
28 Oct | L11. Causality
29 Oct | A11. Causality lab
4 Nov | L12. Interventions and Counterfactuals
5 Nov | A12. Interventions lab
6 Nov | Project 2 Deadline 1
11 Nov | L13. Bandit problems
12 Nov | A13. Bandit optimisation lab
18 Nov | L14. Experiment design
19 Nov | A14. Experiment design lab
20 Nov | Project 2 Deadline 2
23 Nov | Exam
6 Dec | Project 2 Final Deadline

2023

Date | Lecture | Exercise | Extra Reading
22.9 | Algorithms; Privacy; Fairness; Reproducibility | k-anonymity | Netflix paper
29.09 | Differential privacy; Randomized response; Neighbourhoods | Randomized response | Randomized-Response
6.10 | Laplace Mechanism; Exponential Mechanism | Project Proposal | Staircase mechanism
13.10 | Exponential Mechanism; Approximate DP; Gaussian Mechanism | | Renyi DP
20.10 | Privacy Amplification; Federated learning | | Shuffle privacy
03.11 | Group fairness; Equalised odds | | Kleinberg paper
10.11 | Balance; Calibration | Project Report |
17.11 | Meritocracy | | top-k
24.11 | Smoothness | | Fairness through Awareness
01.12 | Reproducibility; Train/Test | 5. Repro | GWA
08.12 | Application | |
15.12 | Project presentations | |
22.12 | | |

2022

Date | Lecture | Exercise | Paper
27.9 | Algorithms; Privacy; Fairness; Reproducibility | Math Test | The randomised response mechanism
04.10 | Privacy and anonymity; k-anonymity | | Netflix paper; k-anonymity
11.10 | Differential Privacy; Randomised response; Laplace Mechanism | Project1 | Staircase mechanism
18.10 | Approximate DP; Gaussian Mechanism | | Renyi DP
25.10 | Exponential mechanism; Privacy amplification | | Shuffle privacy; Federated learning
01.11 | Group fairness; Equalised odds | | Kleinberg paper
08.11 | Balance; Calibration | Project2 |
15.11 | Meritocracy | | top-k
22.11 | Smoothness | | Fairness through Awareness
29.11 | Reproducibility; Train/Test | 5. Repro | GWA
06.12 | GWAS | |
13.12 | Project presentations | ProjectP |
20.12 | | Project3 |

Papers

  1. Randomised Response: A Survey Technique for Eliminating Evasive Answer Bias, Warner, 1965.
  2. Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, Ohm, 2009.
  3. Robust De-anonymization of Large Sparse Datasets. Narayanan and Shmatikov, 2008.
  4. Calibrating noise to sensitivity in private data analysis. Dwork et al. 2006. (Approximate DP: See also https://github.com/frankmcsherry/blog/blob/master/posts/2017-02-08.md )
  5. Our Data, Ourselves: Privacy Via Distributed Noise Generation, Dwork et al. 2006.
  6. The staircase mechanism in differential privacy. Geng et al. 2015.
  7. Renyi Differential Privacy, Mironov, 2017.
  8. Distributed Differential Privacy via Shuffling. Cheu et al, 2019.
  9. Federated Naive Bayes under Differential Privacy. Marchioro et al.
  10. Big Data’s Disparate Impact. Barocas and Selbst, 2016.
  11. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments, Chouldechova, 2017.
  12. Inherent Trade-Offs in the Fair Determination of Risk Scores, Kleinberg et al. 2016.
  13. Meritocratic Fairness for Cross-Population Selection, Kearns et al. 2017.
  14. Fairness through awareness, Dwork et al. 2011.
  15. Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays, Homer et al. 2008.
  16. Needles in the Haystack: Identifying Individuals Present in Pooled Genomic Data, Braun et al. 2009.
  17. Privacy Preserving GWAS Data Sharing. Uhler et al. 2013.
  18. A New Analysis of Differential Privacy’s Generalization Guarantees, Jung et al. 2019.

2021

Theory | Practice
24.8 Decision problems; Probability and Utility | 25.8 Expected utility; Conditional probability; Decision problems in ML
27.8 Assignment 1 DEADLINE
31.8 Infinite Decision Spaces; Stochastic Gradient | 1.9 Experiment pipeline; Basic experiment design; Tensor Flow Keras
7.9 Conditional Probability; Conjugate priors | 8.9 n-Meteorologists; Beta/Bernoulli
10.9 Assignment 2 DEADLINE
14.9 Bayes-optimal decisions; Hypothesis testing | 15.9 Beta/Bernoulli for hypothesis testing; Hierarchical models; [Project introduction]
21.9 Non-Conjugate Priors | TFP Graphical Models
28.9 Privacy and anonymity | 29.9 SQL, DB tutorial
30.9 Project 1 PRELIMINARY REPORT
5.09 Lab: Randomised Response | 6.09 Laplace and Exponential Mechanisms
12.10 Lab: Exponential vs Laplace Mechanism | 13.10 Fairness; OpenDP (optionally); Conditional Independence (Dirk)
19.10 Fairness | 20.10 Balance; Calibration; Meritocracy
22.10 Project 1 DEADLINE
26.10 Fairness | 27.10 Latent Variable Models; Recommender Systems
2.11 Latent Variable Models; Recommender Systems | 3.11 Lab: Latent Variables with TFP; Group work
5.11 Project 2 Deadline #1
9.11 Causality; Interventions; Counterfactuals | 10.11 Group work
16.11 Markov Decision Processes | 17.11 Lab: Project work
18.11 Project 2 Deadline #2
23.11 Group work | 24.11 Group work
3.12 Project 2 Final Deadline

Module 1: Decision problems, probability and utility.

Reading: Chapter 1

Here the students get familiar with the concept of expected utility. They perform simple exercises in Python. We define utility in terms of classification accuracy for individual decisions, and in terms of generalisation performance when choosing a specific classifier model.

src/decision-problems/expected-utility.py
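A minimal sketch of this kind of computation (an illustrative stand-in, not the contents of the script above; the utility matrix is an assumption):

```python
import numpy as np

# U[action, true_label]: utility of taking `action` when the label is `true_label`.
U = np.array([[ 1.0, -2.0],   # utilities of action 0 for labels 0 and 1
              [-1.0,  3.0]])  # utilities of action 1 for labels 0 and 1

def expected_utility(p1, action):
    """Expected utility of `action` given P(y = 1 | x) = p1."""
    return (1 - p1) * U[action, 0] + p1 * U[action, 1]

def bayes_action(p1):
    """The action maximising expected utility."""
    return max((0, 1), key=lambda a: expected_utility(p1, a))

for p in (0.1, 0.5, 0.9):
    a = bayes_action(p)
    print(f"P(y=1) = {p:.1f}: take action {a}, EU = {expected_utility(p, a):.2f}")
```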

Module 2: Experiment design and decision analysis

Reading: Sec. 2.4.1, 2.2, 2.1, 2.6

This includes how data will be collected and processed, focusing on automation of the process. I will encourage students to develop an automated pipeline mainly through simulation, where all the variables can be perfectly controlled; a sketch follows the bullet below.

  • Optimal decisions in continuous cases: stochastic gradient descent and Bayesian quadrature.
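A minimal sketch of such a simulated pipeline, using hand-rolled stochastic gradient descent for the continuous-decision half of the bullet above; the linear data-generating process, learning rate and epoch count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

def collect(n):
    """Simulated data collection: a linear model with Gaussian noise."""
    X = rng.normal(size=(n, 2))
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    return X, y

def sgd(X, y, lr=0.01, epochs=50):
    """Plain stochastic gradient descent on squared error."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = 2 * (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

X, y = collect(200)
print("estimate:", sgd(X, y), "truth:", w_true)
```

Because the truth is known by construction, every stage of the pipeline can be checked against it.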

Module 3: Bayesian inference

Reading: Sec. 2.3

Introduction to Bayesian inference through the meteorological prediction problem. Discussion of simple conjugate priors (Beta, Normal).
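For the Beta-Bernoulli pair the posterior update has closed form: a Beta(a, b) prior with s successes in n trials gives a Beta(a + s, b + n - s) posterior. A minimal sketch (the true parameter and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 0.7
data = rng.random(100) < theta_true   # Bernoulli(theta_true) draws

a, b = 1.0, 1.0                       # uniform prior
s, n = data.sum(), len(data)
a_post, b_post = a + s, b + n - s     # conjugate update

print(f"posterior mean: {a_post / (a_post + b_post):.3f} (truth {theta_true})")
```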

Day 1, Part 1

  • Graphical model recap (5’)
  • Conditional probability (5’)
  • Bayes Theorem (5’)
  • Marginal distributions (5’)
  • The n-meteorologists problem (25’)

Day 1, Part 2

  • Sufficient Statistics / Conjugate priors (15’)
  • The Beta-Bernoulli conjugate pair (15’)
  • The Normal-Normal conjugate pair (15’)

Day 2, Part 1

  • Estimating which classifier is best (45’)

– Beta-Bernoulli (15’)
– Bootstrapping (15’)

  • Assignment 2 discussion (45’)

Module 4: Bayes-optimal Decisions

Reading: Sec. 2.4-2.6, 4.1.3

  • Bayesian decisions for models.
  • Hypothesis testing: Hierarchical Bayesian models
  • Contrast credible intervals with bootstrapping (see the sketch below).
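A sketch of that contrast on a simulated accuracy estimate (the data, the uniform prior and the 95% level are illustrative; requires scipy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
correct = rng.random(50) < 0.8        # simulated per-example correctness

# 95% Beta posterior credible interval under a uniform prior
a, b = 1 + correct.sum(), 1 + (~correct).sum()
print("credible :", stats.beta.ppf([0.025, 0.975], a, b))

# 95% bootstrap percentile interval for the mean accuracy
means = [rng.choice(correct, size=len(correct), replace=True).mean()
         for _ in range(5000)]
print("bootstrap:", np.percentile(means, [2.5, 97.5]))
```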

Module 5: Non-conjugate priors

Reading: None

Here we will focus on logistic regression as an example. The module will be mainly practical, focusing on TF Probability.

See https://arxiv.org/pdf/2001.11819.pdf
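Since the Gaussian prior is not conjugate to the Bernoulli likelihood, the posterior has no closed form and some sampler or variational scheme is needed. The following stands in for the TF Probability tooling with a plain random-walk Metropolis sampler on simulated data; everything in it is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
w_true = np.array([1.5, -2.0])
y = rng.random(100) < 1 / (1 + np.exp(-(X @ w_true)))

def log_post(w):
    """Unnormalised log-posterior: Bernoulli likelihood + N(0, I) prior."""
    logits = X @ w
    loglik = np.sum(y * logits - np.log1p(np.exp(logits)))
    return loglik - 0.5 * w @ w

w, samples = np.zeros(2), []
lp = log_post(w)
for _ in range(5000):
    prop = w + 0.2 * rng.normal(size=2)      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        w, lp = prop, lp_prop
    samples.append(w)

print("posterior mean:", np.mean(samples[1000:], axis=0), "truth:", w_true)
```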

Module 6: Databases and privacy

Reading: Chapter 3.

Introduction to databases, SQL and k-anonymity, consent, and the GDPR. Various mechanisms for DP. Pointers to the opendp.org framework for differential privacy. Comparison of various mechanisms in an ML task.
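As a sketch of one such mechanism, the Laplace mechanism for a counting query: a count has sensitivity 1, so adding Laplace(1/ε) noise yields ε-differential privacy. The data and ε values below are illustrative; in practice a vetted framework such as OpenDP should be used.

```python
import numpy as np

rng = np.random.default_rng(4)
ages = rng.integers(18, 90, size=1000)   # toy database

def dp_count(data, predicate, epsilon):
    """ε-DP count of records satisfying `predicate` (sensitivity 1)."""
    true_count = np.sum(predicate(data))
    return true_count + rng.laplace(scale=1.0 / epsilon)

print("true   :", np.sum(ages > 65))
print("eps=1  :", dp_count(ages, lambda d: d > 65, epsilon=1.0))
print("eps=0.1:", dp_count(ages, lambda d: d > 65, epsilon=0.1))
```

Smaller ε means stronger privacy and noisier answers, which is the trade-off the module examines in an ML task.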

Module 7: Fairness

Reading: Chapter 4.

Introduction to fairness and conditional independence. Fairness as parity, balance, calibration, meritocracy or smoothness. Measuring conditional independence. Balancing performance with fairness constraints through constrained or penalised optimisation, or Bayesian methods.
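A sketch of how two of these notions can be measured on toy classifier output; the score model, threshold and simulated outcomes are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
group = rng.integers(0, 2, size=2000)        # sensitive attribute z
score = rng.beta(2 + group, 2, size=2000)    # model score, group-dependent
y_hat = score > 0.5                          # thresholded decision

# Parity: P(y_hat = 1 | z) should match across groups.
for z in (0, 1):
    print(f"acceptance rate, group {z}: {y_hat[group == z].mean():.3f}")

# Calibration: among accepted, outcome rates should match across groups.
y = rng.random(2000) < score                 # simulated true outcomes
for z in (0, 1):
    sel = (group == z) & y_hat
    print(f"precision, group {z}: {y[sel].mean():.3f}")
```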

Module 8: Latent variable models

Reading: Chapter 5.

Examples: (a) Gaussian mixture model (b) epsilon-contamination model and outliers (c) preferences and attributes in recommendation systems

Practical work with TensorFlow Probability, including outlier detection etc.
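A minimal sketch of example (a), using scikit-learn's GaussianMixture rather than TensorFlow Probability for brevity; the data is simulated:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# The latent z picks the component; we only observe x.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 700)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
print("means  :", gmm.means_.ravel())
print("weights:", gmm.weights_)

# Low-likelihood points can be flagged as outliers.
scores = gmm.score_samples(x.reshape(-1, 1))
print("outlier threshold (1st percentile):", np.percentile(scores, 1))
```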

Module 9: Causality

Reading: Chapter 6.

Confounders, Instrumental variables, Interventions, Counterfactuals. Hands-on: Importance sampling for estimating the impact of decisions. Lab: tfcausalimpact
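A minimal sketch of the importance-sampling idea in a two-action setting: we estimate what a new decision rule would earn from data logged under an old one. The policies and reward rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, mu = 10000, np.array([0.7, 0.3])     # logging policy over 2 actions
actions = rng.choice(2, size=n, p=mu)
# True expected rewards: action 0 -> 0.2, action 1 -> 0.8
rewards = rng.random(n) < np.where(actions == 0, 0.2, 0.8)

pi = np.array([0.1, 0.9])               # target policy to evaluate
weights = pi[actions] / mu[actions]     # importance weights
print("IS estimate:", np.mean(weights * rewards))
print("true value :", 0.1 * 0.2 + 0.9 * 0.8)
```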

Module 10: Adaptive experiment design

Reading: Chapter 7.

Here we discuss experiment design in the adaptive setting, where our future experiments depend on data we have not seen yet. Two interesting cases are bandits (e.g. for recommendation systems) and active learning (e.g. for classification).
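As an illustration of adaptive data collection, here is a small Thompson-sampling sketch for a Bernoulli bandit, one standard algorithm for this setting; the arm means are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
means = [0.3, 0.5, 0.7]              # true arm means, unknown to the learner
a = np.ones(3)                       # Beta(1, 1) prior per arm: successes
b = np.ones(3)                       # ... and failures

pulls = np.zeros(3, dtype=int)
for _ in range(2000):
    arm = int(np.argmax(rng.beta(a, b)))   # sample from each posterior, play the max
    reward = rng.random() < means[arm]
    a[arm] += reward
    b[arm] += 1 - reward
    pulls[arm] += 1

print("pulls per arm  :", pulls)     # play concentrates on the best arm
print("posterior means:", a / (a + b))
```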

2020

19 Aug | L1. Reproducibility, kNN | Christos
20 Aug | A1. Python, scikit-learn, classification, holdouts, overfitting | Dirk
26 Aug | A2. Bootstrapping, XV, project #1 introduction | Dirk
27 Aug | L2. Classification, Decision Problems | Christos
2 Sep | L3. Decisions, inference, optimisation | Christos
3 Sep | A3. Compare kNN/MLP, discover interesting features | Dirk
9 Sep | L4. Bayesian inference tutorial | Christos
10 Sep | A4. Project Lab | Dirk
16 Sep | L5. Databases, anonymity, privacy | Christos
17 Sep | A5. DB tutorial/distributed computing | Dirk
23 Sep | L6. Differential privacy | Christos
24 Sep | A6. Project DP tutorial: Laplace mechanism | Dirk
25 Sep | Project 1 Deadline 1 |
30 Sep | L7. Fairness and graphical models | Christos
1 Oct | A7. Production ML: SageMaker/Pipelines | Dirk
7 Oct | L8. Estimating conditional independence | Christos
8 Oct | A8. Project: fairness | Dirk
9 Oct | Project 1 Deadline 2 |
14 Oct | L9. Recommendation systems [can be skipped?] | Christos
15 Oct | A9. Restful APIs | Dirk
21 Oct | L10. Latent variables and importance sampling | Christos
22 Oct | A10. An example latent variable model? | Dirk
23 Oct | Project 1 Final Deadline |
28 Oct | L11. Causality | Christos
29 Oct | A11. Causality lab | Dirk
4 Nov | L12. Interventions and Counterfactuals | Christos
5 Nov | A12. Interventions lab | Dirk
6 Nov | Project 2 Deadline 1 |
11 Nov | L13. Bandit problems | Christos
12 Nov | A13. Bandit optimisation lab | Dirk
18 Nov | L14. Experiment design | Christos
19 Nov | A14. Experiment design lab | Dirk
20 Nov | Project 2 Deadline 2 |
23 Nov | Exam |
6 Dec | Project 2 Final Deadline |

2019

21 Aug | L1. Reproducibility, kNN | Christos
22 Aug | L2. Classification, Decision Problems, Project Overview | Christos
29 Aug | A1. Python, scikit-learn, classification, holdouts, overfitting | Dirk
29 Aug | A2. Bootstrapping, XV, project #1 introduction | Dirk
30 Aug | Mini-assignment |
4 Sep | L3. Bayesian inference, Networks, SGD | Christos
5 Sep | L4. Bayesian inference tutorial; neural networks | Christos
12 Sep | A3. Compare kNN/MLP, discover interesting features | Dirk
12 Sep | A4. Project Lab | Dirk
18 Sep | Project 1 1st Deadline |
18 Sep | L5. Databases, anonymity, privacy | Christos
19 Sep | L6. Differential privacy | Christos
26 Sep | A5. DB tutorial/distributed computing | Dirk
26 Sep | A6. Project DP tutorial: Laplace mechanism | Dirk
2 Oct | Project 1 2nd Deadline |
2 Oct | L7. Fairness and graphical models | Christos
3 Oct | L8. Estimating conditional independence | Christos
10 Oct | A7. Production ML: SageMaker/Pipelines | Dirk
10 Oct | A8. Project: fairness | Dirk
16 Oct | Project 1 Final Deadline |
16 Oct | L9. Recommendation systems [can be skipped?] | Christos
17 Oct | L10. Latent variables and importance sampling | Christos
24 Oct | A9. Restful APIs | Dirk
24 Oct | A10. An example latent variable model? | Dirk
30 Oct | L11. Causality | Christos
31 Oct | L12. Interventions and Counterfactuals | Christos
7 Nov | A11. Causality lab | Dirk
7 Nov | A12. Causality lab | Dirk
13 Nov | L13. Bandit problems | Christos
14 Nov | L14. Experiment design | Christos
20 Nov | A13. Experiment design lab | Dirk
21 Nov | A14. Experiment design lab | Dirk
2 Dec | Exam: 9AM, Lessart Lesesal A, Eilert Sundts hus, A-blokka |
11 Dec | Project 2 Deadline |

  1. kNN, Reproducibility
  2. Bayesian Inference, Decision Problems, Hypothesis Testing
  3. Neural Networks, Stochastic Gradient Descent
  4. Databases, k-anonymity, differential privacy
  5. Fairness, Graphical models
  6. Recommendation systems, latent variables, importance sampling
  7. Causality, interventions, counterfactuals
  8. Bandit problems and experiment design
  9. Markov decision processes
  10. Reinforcement learning
