- ECTS Credits: 6.
- Expected development time: 400 hours (200 for the lecture notes and 200 for the assignments, tutorials and projects)
- Teaching time: 264-384 hours (48 contact, 96 preparation, 120-240 corrections for 20-40 students)
Classic approaches to data analysis use a static procedure for both collecting and processing data. Modern approaches deal with the adaptive procedures that are almost always used in practice.
In this course you will learn how to design systems that adaptively collect and process data in order to make decisions autonomously or in collaboration with humans.
The course applies core principles from machine learning, artificial intelligence and databases to real-world problems in safety, reproducibility, causal reasoning, privacy and fairness.
- Mathematics R1+R2
- Python programming (e.g. IN1900 – Introduction to Programming with Scientific Applications).
- Elementary knowledge of probability and statistics (STK1000/STK1100)
- Elementary calculus and linear algebra (MAT1100 or MAT1110)
There are two types of learning outcomes: firstly, those that form the core of the course, and secondly, the methodologies that are used as part of the course.
Core learning outcomes:
- Ensuring reproducibility in both science and AI development.
- Recognising privacy issues and being able to mitigate them using appropriate formalisms.
- Mitigating potential fairness and discrimination issues when algorithms are applied at scale.
- Performing inference when there are causal elements.
- Developing adaptive experimental design protocols for online and scientific applications.
- Understanding when it is possible to provide performance guarantees for AI algorithms.
AI learning outcomes:
- Understanding how to use data for learning, estimation and testing to create reproducible research.
- Understanding Bayesian inference and decision theory and being able to describe dependencies with graphical models.
- Understanding neural networks and how to apply stochastic optimisation algorithms.
- Understanding and using differential privacy as a formalism.
- Understanding causal inference, interventions and counterfactuals.
- Understanding the recommendation problem in terms of both modelling and decision making.
The course is split into six modules, which should be taken in sequence.
- Module 1. Reproducibility: bootstrapping, Bayesian inference, decision problems, false discovery, confidence bounds.
- Module 2. Privacy: databases, k-anonymity, graphical models, differential privacy.
- Module 3. Fairness: decision diagrams, conditional independence, meritocracy, discrimination.
- Module 4. The web: recommendation systems, clustering, latent variable models.
- Module 5. Causality: interventions and counterfactuals.
- Module 6. Adaptive experiment design: bandit problems, stochastic optimisation, Markov decision processes, dynamic programming.
There are two projects (formally take-home exams), each split into three parts. Each part takes 2-4 hours and is partly done in a tutorial session.
Each question is weighted equally within each take-home exam, so that by correctly answering the elementary parts of each question, students can be guaranteed a passing grade. Each take-home exam counts for 40% of the final score. Students also sit a final exam, which counts for the remaining 20%.
Criteria for full marks in each part of the exam are the following.
- Documenting the work in a way that enables reproduction.
- Technical correctness of their analysis.
- Demonstrating that they have understood the assumptions underlying their analysis.
- Addressing issues of reproducibility in research.
- Addressing ethical questions where applicable, and if they are not applicable, clearly explaining why.
- Consulting additional resources beyond the source material with proper citations.
The following marking guidelines describe what one would expect from students attaining each grade.
- Submission of a detailed report from which one can definitely reconstruct their work without referring to their code. There should be no ambiguities in the described methodology. Well-documented code where design decisions are explained.
- Extensive analysis and discussion. Technical correctness of their analysis. Nearly error-free implementation.
- The report should detail what models are used and what the assumptions behind them are. The conclusions of the report should include appropriate caveats. When the problem includes simple decision making, the optimality metric should be well-defined and justified. Similarly, well-defined optimality criteria should be given for the experiment design, when necessary. The design should be (to some degree of approximation, depending on problem complexity) optimal according to these criteria.
- Appropriate methods to measure reproducibility. Use of cross-validation or hold-out sets to measure performance. Use of an unbiased methodology for algorithm, model or parameter selection. Appropriate reporting of a confidence level (e.g. using bootstrapping) in their analytical results. Relevant assumptions are mentioned when required.
- When dealing with data relating to humans, privacy and/or fairness should be addressed. A formal definition of privacy and/or fairness should be selected, and the resulting policy should be examined.
- The report contains some independent thinking, or includes additional resources beyond the source material with proper citations. The students go out of their way to research material and implement methods not discussed in the course.
- Submission of a report from which one can plausibly reconstruct their work without referring to their code. There should be no major ambiguities in the described methodology.
- Technical correctness of their analysis, with a good discussion. Possibly minor errors in the implementation.
- The report should detail what models are used, as well as the optimality criteria, including for the experiment design. The conclusions of the report must contain appropriate caveats.
- Use of cross-validation or hold-out sets to measure performance. Use of an unbiased methodology for algorithm, model or parameter selection.
- When dealing with data relating to humans, privacy and/or fairness should be addressed. While an analysis of this issue may not be performed, there is a substantial discussion of the issue that clearly shows understanding by the student.
- The report contains some independent thinking, or the students mention other methods beyond the source material, with proper citations, but do not further investigate them.
- Submission of a report from which one can partially reconstruct most of their work without referring to their code. There might be some ambiguities in parts of the described methodology.
- Technical correctness of their analysis, with an adequate discussion. Some errors in a part of the implementation.
- The report should detail what models are used, as well as the optimality criteria and the choice of experiment design. Analysis caveats are not included.
- Either use of cross-validation or hold-out sets to measure performance, or use of an unbiased methodology for algorithm, model or parameter selection - but in a possibly inconsistent manner.
- When dealing with data relating to humans, privacy and/or fairness are addressed superficially.
- There is little mention of methods beyond the source material or independent thinking.
- Submission of a report from which one can partially reconstruct most of their work without referring to their code. There might be serious ambiguities in parts of the described methodology.
- Technical correctness of their analysis with limited discussion. Possibly major errors in a part of the implementation.
- The report should detail what models are used, as well as the optimality criteria. Analysis caveats are not included.
- Either use of cross-validation or hold-out sets to measure performance, or use of an unbiased methodology for algorithm, model or parameter selection - but in a possibly inconsistent manner.
- When dealing with data relating to humans, privacy and/or fairness are addressed superficially or not at all.
- There is little mention of methods beyond the source material or independent thinking.
- Submission of a report from which one can obtain a high-level idea of their work without referring to their code. There might be serious ambiguities in all of the described methodology.
- Technical correctness of their analysis with very little discussion. Possibly major errors in only a part of the implementation.
- The report might mention what models are used or the optimality criteria, but not in sufficient detail and caveats are not mentioned.
- Use of cross-validation or hold-out sets to simultaneously measure performance and optimise hyperparameters, but possibly in a way that introduces some bias.
- When dealing with data relating to humans, privacy and/or fairness are addressed superficially or not at all.
- There is no mention of methods beyond the source material or independent thinking.
- The report does not adequately explain their work.
- There is very little discussion and major parts of the analysis are technically incorrect, or there are errors in the implementation.
- The models used might be mentioned, but not any other details.
- There is no effort to ensure reproducibility or robustness.
- When applicable: Privacy and fairness are not mentioned.
- There is no mention of methods beyond the source material or independent thinking.
Artificial intelligence algorithms are becoming ever more complicated and are used in manifold ways in today's society, from prosaic applications like web advertising to scientific research. Their indiscriminate use creates many externalities, which can, however, be precisely quantified and mitigated.
The purpose of this course is to familiarise students with the societal and scientific effects of using artificial intelligence at scale. It will equip students with the requisite knowledge to apply state-of-the-art machine learning tools to a problem, while recognising potential pitfalls. The focus of the course is not on explaining a large set of models; it uses three basic types of models for illustration (k nearest-neighbour, neural networks and probabilistic graphical models), with an emphasis on the latter for interpretability and the first for lab work. The focus is instead on the issues of reproducibility, data collection and experiment design, privacy, fairness and safety that arise when applying machine learning algorithms. For that reason, we will cover technical topics not typically covered in an AI course: false discovery rates, differential privacy, fairness, causality and risk. Some familiarity with machine learning and artificial intelligence concepts is helpful, but not necessary.
Date | Session | Instructor |
---|---|---|
21 Aug | L1. Reproducibility, kNN | Christos |
22 Aug | L2. Classification, Decision Problems, Project Overview | Christos |
29 Aug | A1. Python, scikit-learn, classification, holdouts, overfitting | Dirk |
29 Aug | A2. Bootstrapping, cross-validation, project #1 introduction | Dirk |
30 Aug | Mini-assignment | |
4 Sep | L3. Decision Problems, Classification, Neural Networks, SGD | Christos |
5 Sep | L4. Bayesian inference tutorial; neural networks | Christos |
12 Sep | A3. Compare kNN/MLP, discover interesting features | Dirk |
12 Sep | A4. Project Lab | Dirk |
18 Sep | Project 1 1st Deadline | |
18 Sep | L5. Databases, anonymity, privacy | Christos |
19 Sep | L6. Differential privacy | Christos |
26 Sep | A5. DB tutorial/distributed computing | Dirk |
26 Sep | A6. Project DP tutorial: Laplace mechanism | Dirk |
2 Oct | Project 1 2nd Deadline | |
2 Oct | L7. Fairness and graphical models | Christos |
3 Oct | L8. Estimating conditional independence | Christos |
10 Oct | A7. Production ML: SageMaker/Pipelines | Dirk |
10 Oct | A8. Project: fairness | Dirk |
16 Oct | Project 1 Final Deadline | |
16 Oct | L9. Recommendation systems [can be skipped?] | Christos |
17 Oct | L10. Latent variables and importance sampling | Christos |
24 Oct | A9. Restful APIs | Dirk |
24 Oct | A10. An example latent variable model? | Dirk |
30 Oct | L11. Causality | Christos |
31 Oct | L12. Interventions and Counterfactuals | Christos |
7 Nov | A11. Causality lab | Dirk |
7 Nov | A12. Causality lab | Dirk |
13 Nov | L13. Bandit problems | Christos |
14 Nov | L14. Experiment design | Christos |
20 Nov | A13. Experiment design lab | Dirk |
21 Nov | A14. Experiment design lab | Dirk |
2 Dec | Exam: 9AM Lessart Lesesal A Eilert Sundts hus, A-blokka | |
11 Dec | Project 2 Deadline |
- kNN, Reproducibility
- Bayesian Inference, Decision Problems, Hypothesis Testing
- Neural Networks, Stochastic Gradient Descent
- Databases, k-anonymity, differential privacy
- Fairness, Graphical models
- Recommendation systems, latent variables, importance sampling
- Causality, interventions, counterfactuals
- Bandit problems and experiment design
- Markov decision processes
- Reinforcement learning
- Reproducibility
- KNN.
- Bootstrapping
- Linear models
- Neural networks
- Confidence and $p$-values
- Naive Bayes: Model mismatch
- $p$-values, cross-validation and model mismatch
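A minimal sketch of the bootstrap idea listed above (the data here are a synthetic placeholder, not course material):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=100)   # placeholder sample

# Bootstrap: resample the data with replacement and recompute the statistic.
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(10_000)
])

# A 95% bootstrap confidence interval for the mean (percentile method).
lo, hi = np.quantile(boot_means, [0.025, 0.975])
print(f"mean = {data.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```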
The purpose of this lecture is to familiarise students with all the decisions made from the beginning to the end of the data science process, and with the possible externalities when an algorithm is applied to real data.
- Decision hierarchies
- Bayesian inference
- Optimisation and SGD.
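As a small illustration of the last item, here is a hedged sketch of stochastic gradient descent for a linear model (the synthetic data and step size are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)   # synthetic regression targets

w = np.zeros(3)
learning_rate = 0.01
for epoch in range(50):
    for i in rng.permutation(len(X)):              # visit examples in random order
        grad = 2 * (X[i] @ w - y[i]) * X[i]        # gradient of the squared error on example i
        w -= learning_rate * grad

print(w)   # should end up close to [1, -2, 0.5]
```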
- Privacy in databases.
- k-anonymity.
- Differential Privacy.
- The Randomised Response Mechanism.
- Laplace Mechanism.
- Exponential mechanism.
The purpose of this lecture is to introduce the students to basic database concepts, as well as to privacy problems that can occur when allowing access to a database to a third party.
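As an illustrative sketch of one of the mechanisms above, the Laplace mechanism for a counting query (the function and variable names are my own, not from the course material):

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 57]                        # toy database
print(laplace_count(ages, lambda a: a > 40, epsilon=0.5))
```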
- Graphical Models.
- Fairness as independence.
- Decision diagrams.
- Fairness as smoothness.
- Fairness as meritocracy.
- Bayesian notions of fairness.
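A small sketch of checking "fairness as independence" (demographic parity) on a set of decisions; the data here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                  # sensitive attribute z in {0, 1}
decision = rng.random(1000) < (0.3 + 0.2 * group)      # toy decisions, deliberately unfair

# Demographic parity asks that P(decision = 1 | z) is (roughly) equal across groups.
rates = [decision[group == z].mean() for z in (0, 1)]
print(f"acceptance rate per group: {rates}, gap = {abs(rates[0] - rates[1]):.2f}")
```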
Unstructured databases. Clustering / Anomaly detection.
The purpose of this lecture is to talk about non-matrix data, like graphs, and make a link to graphical models and simple problems like anomaly detection.
DNA testing and HMMs.
Here we talk more about unstructured data, in this case about DNA data.
Web data, ontologies, crawling. Knowledge representation.
This is web-structured data, which typically has some meta-information.
Matrix Factorisation / LDA: Recommendation systems I (user similarity)
This lecture introduces analysis of text data, and an application to recommendation systems.
This lecture introduces the concept of online data collection, rather than going through existing data. The applications considered are manual labelling via AMT or advertising.
Markov decision processes and Dynamic Programming (active learning and experiment design more generally)
The optimal data collection procedure can be formalised as an MDP, and this is explained here.
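A minimal backward-induction (dynamic programming) sketch for a finite-horizon MDP; the tiny random MDP here is invented purely to show the pattern:

```python
import numpy as np

n_states, n_actions, horizon = 3, 2, 5
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state distribution
R = rng.normal(size=(n_states, n_actions))                        # expected reward r(s, a)

V = np.zeros(n_states)          # value at the end of the horizon
for t in reversed(range(horizon)):
    Q = R + P @ V               # Q[s, a] = r(s, a) + E[V(s') | s, a]
    V = Q.max(axis=1)           # act greedily with respect to Q
policy = Q.argmax(axis=1)       # optimal first-step action in each state
print(V, policy)
```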
Sometimes we are risk averse… what do we mean by this, and what algorithms can we use? When we have developed an algorithm, how sure can we be that it works well in the real world?
Here are some example questions for the exam. Answers can range from simple one-liners to relatively complex designs. Half of the points will come from ten 1-point questions, and the remaining half from two or three questions worth 2-5 points each.
You are given a set of clinical data
(Many approaches are possible; the main thing I want to see is that you can validate your findings.)
From a statistical point of view, we want to see the strength of the dependence between an individual feature (or set of features) and the labels.
The strictest possible test is to see whether or not the labels are completely independent of a feature.
If this is the case, then $P(y_t \mid x_t) = P(y_t \mid x_{t,-i})$. One possible method is to fit the classification model of choice both with and without feature $i$ and compare their predictive performance, as sketched below.
- If individually redundant features are correlated, then removing all of them may be difficult. For that reason, we may want to also test the performance of models which remove combinations of features.
- Since probably no feature is completely useless, one reason for the apparent lack of predictive ability of some features may be the amount of data we have. In the limit, if $y_t \perp x_{t,i} \mid x_{t,-i}$ then our estimators will satisfy $\hat{P}(y_t \mid x_t) = \hat{P}(y_t \mid x_{t,-i})$. However, it is hard to verify this condition when the amount of data is small. Conversely, with a lot of data, even weakly dependent features will not satisfy independence.
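One way to operationalise this comparison, using scikit-learn (the dataset, classifier and scoring choices are illustrative, not prescribed by the question):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

full_score = cross_val_score(model, X, y, cv=5).mean()
for i in range(X.shape[1]):
    X_minus_i = np.delete(X, i, axis=1)                 # drop feature i
    score = cross_val_score(model, X_minus_i, y, cv=5).mean()
    # If dropping the feature barely changes performance, the labels may be
    # (nearly) conditionally independent of it given the remaining features.
    print(f"feature {i:2d}: change in CV accuracy = {score - full_score:+.4f}")
```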
A prosecutor claims that the defendant is guilty because they have found DNA matching the defendant's at the scene of the crime. He claims that DNA testing has a false positive rate of one in a million ($10^{-6}$). While this is indeed evidence for the prosecution, it does not mean that the probability that the defendant is innocent is $10^{-6}$. What other information would you need to calculate the probability of the defendant being guilty given the evidence, and how would you incorporate it?
Let us define the following events: $C$, the defendant committed the crime; $M$, the DNA found at the scene is the defendant's; and $T$, the DNA test is positive.
In order to predict whether somebody has actually committed the crime given the information, we must calculate $\Pr(C \mid T)$, the probability of guilt given a positive test.
As you can see, we are missing four important quantities.
- $\Pr(M)$, the a priori probability that this is the defendant's DNA.
- $\Pr(T \mid M)$, the probability of the test being positive if the DNA fragments come from the same person.
- $\Pr(C \mid M)$, the probability that the defendant committed the crime if the DNA was really theirs.
- $\Pr(C \mid \neg M)$, the probability that the defendant committed the crime if the DNA was not theirs.
So the false positive rate is far from sufficient evidence for a conviction and must be combined with other evidence.
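One hedged way to combine these quantities (assuming that guilt and the test outcome are conditionally independent given whether the DNA is really the defendant's) is
\[
\Pr(C \mid T) = \Pr(C \mid M)\Pr(M \mid T) + \Pr(C \mid \neg M)\Pr(\neg M \mid T),
\qquad
\Pr(M \mid T) = \frac{\Pr(T \mid M)\Pr(M)}{\Pr(T \mid M)\Pr(M) + \Pr(T \mid \neg M)\Pr(\neg M)},
\]
where $\Pr(T \mid \neg M) = 10^{-6}$ is the quoted false positive rate. A small prior $\Pr(M)$ (for example, one over the number of plausible suspects) can make $\Pr(M \mid T)$, and hence $\Pr(C \mid T)$, much smaller than $1 - 10^{-6}$.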
If
A simple example is when
Id | IQ | XP |
---|---|---|
a | 120 | 5 |
b | 130 | 4 |
c | 140 | 3 |
In this example, we can set
Note that if we mapped these to a utility function, i.e.
Consider a system where we obtain data
In general, DP algorithms must be stochastic, so this algorithm cannot satisfy DP at all.
In more detail, differential privacy requires that $\pi(a \mid x) \leq e^{\epsilon} \pi(a \mid x')$ for all outputs $a$ and all neighbouring $x, x'$. In our case,
\[
\pi(a \mid x) = \prod_i \pi(a_i \mid x_i)
= \pi(a_t \mid x_t) \prod_{i \neq t} \pi(a_i \mid x_i)
\]
\[
\pi(a \mid x') = \prod_i \pi(a_i \mid x'_i)
= [1 - \pi(a_t \mid x_t)] \prod_{i \neq t} \pi(a_i \mid x_i)
\]
Dividing the two, we get
\[
\pi(a \mid x) = \pi(a \mid x') \, \pi(a_t \mid x_t) / [1 - \pi(a_t \mid x_t)].
\]
However, the ratio on the right is not bounded: it can be arbitrarily large, and is infinite when $\pi(a_t \mid x_t) = 1$, so the mechanism cannot satisfy differential privacy for any finite $\epsilon$.
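For contrast, a hedged side note (not part of the original question): standard randomised response does bound this ratio. If each $a_i$ reports the true value $x_i$ with probability $p \in (1/2, 1)$ and the opposite value otherwise, then for any neighbouring $x, x'$
\[
\frac{\pi(a \mid x)}{\pi(a \mid x')} \leq \frac{p}{1-p},
\]
so the mechanism satisfies $\epsilon$-differential privacy with $\epsilon = \ln\frac{p}{1-p}$.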
A patient is coming to the doctor complaining of chest pains. The doctor recommends that the patient undergoes an EEG examination in order to diagnose the patient's underlying condition and observes the result. Describe appropriate decision variables and random variables corresponding to this problem and draw a graphical model detailing their relationship.
Variables:
- C: Chest pain
- H: Underlying health condition
- P: Doctor policy
- X: examination decision
- Y: test result.
    {H}->(C)
     |    |
     v    v
    (Y)<-(X)<-[P]
[ ] indicates decision variables, ( ) observed random variables, { } latent variables
Consider four random variables
(a) means that there is no path from
So a graphical model representing this is:
    (z)--\
     ^    |
     |    v
    (x)  (w)
     |    ^
     v    |
    (y)--/
Consider a decision problem where a decision maker (DM) takes actions affecting a set of individuals. Let the DM’s action be
- Complete the following formula to show how the DM would maximise expected utility, assuming she observes $x$:
\[
\max_a \E [U \mid a, x]
\]
Note that
- Assume each individual $i$ also receives some utility from the DM's actions. This is specified through a collection of utility functions $v_i : A \times Y \to \Reals$. Two typical definitions of fairness from social choice theory concentrate on maximising a social welfare function that depends on the utilities of the whole population. There are two typical such functions: (a) the (expected) total utility of the population, and (b) the (expected) utility of the worst-off member of the population.
Formalise those definitions within our framework.
(a) Can be described as maximising $\E\left[\sum_i v_i \mid x, a\right]$, while (b) corresponds to maximising $\E\left[\min_i v_i \mid x, a\right]$.
- Describe a method whereby the DM can trade off maximising her own utility and social welfare. Under which conditions do the two objectives coincide?
A simple idea is to combine the social welfare linearly with the DM’s utility. Then we can try to maximise
\[
\E[(1 - α) U + α V \mid x, a].
\]
The two objectives obviously coincide when the DM's utility equals the social welfare, i.e. when $U = V$.
Patients arrive at a hospital and receive a treatment that depends on their symptoms. The first table shows how many people receive each treatment. Assume that the number of people with each symptom is representative of the population.
Applications | Symptom 1 | Symptom 2 |
---|---|---|
Treatment A | 20 | 90 |
Treatment B | 180 | 10 |
Table 1: Number of treatment applications
The second table describes the number of people that were cured after the treatment was applied.
Cured | Symptom 1 | Symptom 2 |
---|---|---|
Treatment A | 15 | 60 |
Treatment B | 90 | 4 |
Table 2: Effect of treatment
1. Draw a graphical model with the following four variables:
- $\pi$: Treatment policy
- $x_t$: Symptoms
- $a_t$: Treatment
- $y_t$: Treatment effect
- What would the expected curing rate of a policy that uniformly randomly assigned treatments have been? (It is OK to provide a simple point estimate)
- Given the above data, what would be the treatment policy $\hat{\pi}^*$ that maximises the curing rate, and how high would the curing rate of $\hat{\pi}^*$ be?
- Is there some reason why the original policy $\pi$ would be preferred to $\hat{\pi}^*$?
- Note that typically the symptoms and treatment effect depend on an underlying medical condition, but the question did not ask about this.
    [$\pi$] ---> ($a_t$)
       ^            \
       |          ($y_t$)
       |            /
       |           /
    ($x_t$)
- For S1, Treatment A works 15/20 = 3/4 of the time and Treatment B 90/180 = 1/2. Randomly assigning treatments gives (1/2)(3/4) + (1/2)(1/2) = 5/8.
For S2, Treatment A works 60/90 = 2/3 of the time and Treatment B 4/10 = 2/5. Randomly assigning treatments gives (1/2)(2/3) + (1/2)(2/5) = 8/15. S1 has 200 patients and S2 has 100 patients, so 2/3 of people have S1. The overall curing rate would therefore have been (5/8)(2/3) + (8/15)(1/3) = 5/12 + 8/45 = 107/180 ≈ 0.59.
- Treatment A works best for both symptoms: it cures 3/4 of patients with symptom 1 and 2/3 of patients with symptom 2, versus 1/2 and 2/5 for Treatment B. So $\hat{\pi}^*$ always assigns Treatment A.
Its curing rate based on the data would be (3/4)(2/3) + (2/3)(1/3) = 1/2 + 2/9 = 13/18 ≈ 0.72.
- Firstly, there could be hidden medical or financial costs. One treatment might be more expensive than the other, or may have more side-effects. In addition, one type of symptoms might be less acute or life-threatening than the other, thus requiring less aggressive treatment. Secondly, the new policy always uses the same treatment, and this means that we do not get information about the effectiveness of alternative treatments. This may be important in the initial stages of executing a treatment.
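The point estimates above can be reproduced directly from the two tables, for example as in the following sketch (the variable names are my own):

```python
import numpy as np

# Rows: treatments (A, B); columns: symptoms (S1, S2). Counts from Tables 1 and 2.
applications = np.array([[20, 90], [180, 10]], dtype=float)
cured = np.array([[15, 60], [90, 4]], dtype=float)

cure_rate = cured / applications                            # P(cured | treatment, symptom)
p_symptom = applications.sum(axis=0) / applications.sum()   # P(symptom) = [2/3, 1/3]

# Uniformly random assignment: average the two treatments' cure rates per symptom.
random_policy_value = (cure_rate.mean(axis=0) * p_symptom).sum()

# Empirically optimal policy: pick the treatment with the highest observed cure rate per symptom.
best_policy_value = (cure_rate.max(axis=0) * p_symptom).sum()

print(f"uniformly random policy: {random_policy_value:.3f}")   # 107/180 ~ 0.594
print(f"empirically best policy: {best_policy_value:.3f}")     # 13/18  ~ 0.722
```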
Consider a Markov decision process with two actions
We also receive a deterministic reward:
\[
r_t = \begin{cases}
0 & s_t = 0\\
1 & s_t = 1\\
-1 & s_t = 2
\end{cases}
\]
Since
We always start in state 0. Taking action 0, we stay in state 0, with reward 0. So $\E[\sum_{t=1}^{2} r_t \mid a_1 = 0] = 0 + 0 = 0$.
Taking action 1, we end up in state 2 w.p. 0.2 and state 1 w.p. 0.8. So $\E[\sum_{t=1}^{2} r_t \mid a_1 = 1] = 1 \times 0.8 - 1 \times 0.2 = 0.6$.
So it is better to take action 1 in state 0.