Differential privacy
July 28, 2016 — January 24, 2020
Another thing I won’t have time to blog about or fully understand, but for which I will collect a few explanatory posts for emergency cribbing.
From Google’s post Learning Statistics with Privacy, aided by the Flip of a Coin:
Let’s say you wanted to count how many of your online friends were dogs, while respecting the maxim that, on the Internet, nobody should know you’re a dog. To do this, you could ask each friend to answer the question “Are you a dog?” in the following way. Each friend should flip a coin in secret, and answer the question truthfully if the coin came up heads; but, if the coin came up tails, that friend should always say “Yes” regardless. Then you could get a good estimate of the true count from the greater-than-half fraction of your friends that answered “Yes”. However, you still wouldn’t know which of your friends was a dog: each answer “Yes” would most likely be due to that friend’s coin flip coming up tails.
NB a fair coin already works here: if $f$ is the observed fraction of “Yes” answers then $E[f] = \tfrac{1}{2} + \tfrac{p}{2}$, so $\hat{p} = 2f - 1$ estimates the true dog fraction $p$; weighting the coin only trades estimator variance against privacy. Strictly speaking, though, this one-coin scheme is not differentially private at all, because a “No” answer is always truthful; the classic randomized-response fix is to flip a second coin on tails and report that coin’s outcome instead.
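A minimal simulation of the one-coin scheme, assuming a fair coin by default (the function names and parameters here are my own illustration, not from the quoted post):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(truth, p_heads=0.5, rng=rng):
    """One-coin randomized response: answer truthfully on heads,
    always answer "Yes" (True) on tails."""
    heads = rng.random(truth.shape) < p_heads
    return np.where(heads, truth, True)

def estimate_fraction(answers, p_heads=0.5):
    """Debias the observed "Yes" fraction f: since
    E[f] = p_heads * p + (1 - p_heads), the true fraction is
    estimated by (f - (1 - p_heads)) / p_heads."""
    f = answers.mean()
    return (f - (1 - p_heads)) / p_heads

# 10,000 friends, 30% of whom are secretly dogs.
truth = rng.random(10_000) < 0.3
answers = randomized_response(truth)
print(estimate_fraction(answers))  # ~0.3, without knowing who is a dog
```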
This has recently become of particular public interest because the US Census Bureau has adopted differential privacy to protect respondent privacy in the 2020 census. This has spawned some good introductions for non-technical readers:
Alexandra Wood et al., Differential Privacy: A Primer for a Non-Technical Audience; and Mark Hansen has written an illustrated explanation.
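For cribbing purposes, the definition these introductions build towards: a randomized mechanism $M$ is $\varepsilon$-differentially private if, for every pair of data sets $D, D'$ differing in a single record and every set $S$ of possible outputs,

$$\Pr[M(D) \in S] \le e^{\varepsilon} \Pr[M(D') \in S].$$

Small $\varepsilon$ means no single person’s record can shift the output distribution by much.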
There is a fun paper (Dimitrakakis et al. 2013) arguing that releasing a single sample from a Bayesian posterior enjoys differential privacy guarantees, under regularity conditions on the likelihood.
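A toy sketch of the mechanism they analyse, releasing a single posterior draw instead of the full posterior; note this conjugate example is my own illustration, and the paper’s actual guarantee requires the log-likelihood to be suitably bounded, which this toy model does not necessarily satisfy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Private data: n Bernoulli observations (e.g. "is a dog" flags).
data = rng.random(100) < 0.3
successes, n = int(data.sum()), data.size

# Conjugate Beta(a, b) prior, so the posterior is
# Beta(a + successes, b + n - successes).
a, b = 2.0, 2.0

# "One posterior sample" mechanism: publish a single posterior draw
# rather than the raw count or the exact posterior.
private_release = rng.beta(a + successes, b + n - successes)
print(private_release)
```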
Practical tools: see Google’s differential privacy library for miscellaneous private reporting. PPRL (Privacy-Preserving Record Linkage) is an R package for probabilistically linking data sets in an (optionally) privacy-preserving way. Nils Amiet has written a review of several such libraries.
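The workhorse primitive inside such reporting libraries is the Laplace mechanism. A hand-rolled sketch (my own illustration, not the API of any of the libraries above): a counting query has sensitivity 1, so Laplace noise of scale $1/\varepsilon$ yields an $\varepsilon$-DP count.

```python
import numpy as np

rng = np.random.default_rng(2)

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=rng):
    """Noisy count satisfying epsilon-DP: adding or removing one record
    changes a count by at most `sensitivity`, so Laplace noise with
    scale sensitivity / epsilon suffices."""
    return true_count + rng.laplace(scale=sensitivity / epsilon)

print(laplace_count(true_count=1234, epsilon=0.5))  # roughly 1234, give or take a few
```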