Machine learning helps us distill the unreasonable complexity of the world around us into (relatively) simple models. In theory, models should learn facts about the population from which their training dataset is sampled. In practice, models often learn the idiosyncrasies of the data they are fed. As a result, there is a concern that machine learning models could leak sensitive information in unpredictable ways. The goal of this project is to understand when, how, and why this can occur, and what can be done about it.