Book slides:
Slides roughly corresponding to the content of the book, used in PhD and master's courses.
- Introduction to the course
- Introduction to data privacy (Chapter 1). Privacy in the context of machine learning and statistics, two motivating examples, privacy and society. Terminology (anonymity set, unlinkability, disclosure, plausible deniability). Transparency (transparency principle). Privacy by Design.
- Disclosure risk measures (Chapter 3, Sec. 3.1–3.4). Disclosure. Attribute disclosure risk measures (interval disclosure, membership inference attacks). Identity disclosure risk measures (uniqueness, re-identification). Worst-case scenario in record linkage. Summary of privacy models.
- Privacy models (Chapter 3, Sec. 3.4). Discussion includes: privacy from re-identification, k-anonymity (and variants including k-confusion), differential privacy (and variants including local differential privacy), homomorphic encryption, secure multiparty computation, result privacy, integral privacy.
- Classification of privacy mechanisms (Chapter 3, Sec. 3.5). They are classified according to (i) whose privacy is sought (respondent/data subject, holder/data controller, or user), (ii) whether the computations to be performed are known in advance (known/unknown), and (iii) the number of data sources (single or multiple).
- Short summary of privacy models (Chapter 3, Sec. 3.4). Among other privacy models, we mention the following: k-anonymity, re-identification, secure multiparty computation, differential privacy, integral privacy, homomorphic encryption.
- User privacy, some methods (Chapter 4; includes private information retrieval, PIR).
- Differential privacy and secure multiparty computation (Chapter 5)
- Masking methods (Chapter 6 and part of Chapter 7)
- Result-driven approaches (Chapter 8, Sec. 8.1). Methods to avoid disclosure of rules in rule mining.
- Tabular data protection (Chapter 8, Sec. 8.2). Data protection mechanisms for tabular data.
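
To make the k-anonymity slides concrete, the following is a minimal sketch of how one might check the k of a table: k is the size of the smallest equivalence class induced by the quasi-identifiers. The function name, the quasi-identifier choice, and the toy records are our own illustration, not taken from the book.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return k: the size of the smallest group of records that
    share the same values on all quasi-identifier attributes."""
    classes = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(classes.values())

# Toy microdata: age and zip are quasi-identifiers, disease is sensitive.
raw = [
    {"age": 31, "zip": "08001", "disease": "flu"},
    {"age": 34, "zip": "08001", "disease": "cold"},
    {"age": 47, "zip": "08940", "disease": "flu"},
    {"age": 42, "zip": "08940", "disease": "asthma"},
]

# After generalizing age to a decade, each equivalence class grows.
generalized = [
    {"age": "30-39", "zip": "08001", "disease": "flu"},
    {"age": "30-39", "zip": "08001", "disease": "cold"},
    {"age": "40-49", "zip": "08940", "disease": "flu"},
    {"age": "40-49", "zip": "08940", "disease": "asthma"},
]
```

On the raw table every record is unique on (age, zip), so k = 1; the generalized table satisfies 2-anonymity.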
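
Similarly, the differential privacy slides can be illustrated with the standard Laplace mechanism for a counting query: a count has global sensitivity 1, so adding Laplace noise with scale 1/epsilon yields epsilon-differential privacy. This is a generic sketch of the textbook mechanism; the function names and the toy data are our own.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) via inverse-CDF transform."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon):
    """Epsilon-DP counting query: the count's sensitivity is 1,
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: noisy count of salaries above 50000.
salaries = [40000, 52000, 61000, 30000]
noisy = dp_count(salaries, lambda s: s > 50000, epsilon=1.0)
```

Smaller epsilon means stronger privacy but noisier answers; the true count here is 2, and the released value fluctuates around it.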