A Pragmatic Approach to Membership Inferences on Machine Learning Models

In: IEEE European Symposium on Security and Privacy (2020)

Type: Conference
Venue: EuroS&P
Year: 2020
Acceptance Rate: 14.6%

Abstract:
Membership Inference Attacks (MIAs) aim to determine whether a record was present in a machine learning model's training data by querying the model. Recent work has demonstrated the effectiveness of MIAs on various machine learning models, and corresponding defenses have been proposed. However, both attacks and defenses have focused on an adversary that indiscriminately attacks all records, without regard to the costs of false positives and false negatives. In this work, we revisit membership inference attacks from the perspective of a pragmatic adversary who carefully selects targets and makes predictions conservatively. We design a new evaluation methodology that allows us to evaluate the membership privacy risk at the level of individual records, not only in aggregate. We experimentally demonstrate that highly vulnerable records exist even when the aggregate attack precision is close to the 50% baseline. Specifically, on the MNIST dataset, our pragmatic adversary achieves a precision of 95.05%, whereas the prior attack achieves a precision of only 51.7%.
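To give a concrete feel for the idea of a "pragmatic" adversary, below is a minimal sketch, not the paper's actual attack or evaluation: it uses scikit-learn's digits dataset as a stand-in, a simple confidence-threshold rule as the membership signal, and a hypothetical threshold of 0.99 to decide which records to attack. The point it illustrates is that precision measured only over conservatively selected targets can be far higher than precision over all records.

```python
# Minimal sketch of a conservative membership inference attack (assumptions:
# digits dataset, random-forest target model, confidence-threshold attack;
# none of these are the paper's setup).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

# "Members" are the target model's training records; "non-members" are held out.
X, y = load_digits(return_X_y=True)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Train (and likely overfit) the target model on the member set,
# which is what creates a membership signal in the first place.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mem, y_mem)

# Adversary's signal: the target model's confidence in each record's true label.
conf_mem = target.predict_proba(X_mem)[np.arange(len(y_mem)), y_mem]
conf_non = target.predict_proba(X_non)[np.arange(len(y_non)), y_non]

scores = np.concatenate([conf_mem, conf_non])
labels = np.concatenate([np.ones(len(conf_mem), dtype=int),
                         np.zeros(len(conf_non), dtype=int)])

# Conservative adversary: only predict "member" for records above a
# high-confidence threshold (hypothetical value), abstaining on the rest.
threshold = 0.99
selected = scores >= threshold
if selected.any():
    precision = precision_score(labels[selected], np.ones(selected.sum(), dtype=int))
    print(f"Attacked {selected.sum()} of {len(scores)} records, "
          f"precision = {precision:.2%}")
```

Lowering the threshold toward 0.5 recovers the indiscriminate attacker that labels nearly every record, whose precision collapses toward the 50% baseline; this contrast is what the per-record evaluation in the paper is designed to expose.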