Managing Your Private and Public Data: Bringing Down Inference Attacks Against Your Privacy

Cited: 36
Authors
Salamatian, Salman [1 ]
Zhang, Amy [2 ]
Calmon, Flavio du Pin [1 ]
Bhamidipati, Sandilya [3 ]
Fawaz, Nadia [3 ]
Kveton, Branislav [4 ]
Oliveira, Pedro [5 ]
Taft, Nina [6 ]
Affiliations
[1] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[2] SET Media, San Francisco, CA 94108 USA
[3] Technicolor, Los Altos, CA 94022 USA
[4] Adobe Res, San Jose, CA 95113 USA
[5] Disqus, San Francisco, CA 94105 USA
[6] Google, Mountain View, CA 94043 USA
Keywords
Data privacy; information theory; mutual information
DOI
10.1109/JSTSP.2015.2442227
Chinese Library Classification
TM (electrical engineering); TN (electronics and communication technology)
Discipline Codes
0808; 0809
Abstract
We propose a practical methodology to protect a user's private data when they wish to publicly release correlated data in order to obtain some utility. Our approach relies on a general statistical inference framework that captures the privacy threat under inference attacks, given utility constraints. Under this framework, data is distorted before release according to a probabilistic privacy mapping. This mapping is obtained by solving a convex optimization problem that minimizes information leakage under a distortion constraint. We address practical challenges encountered when applying this theoretical framework to real-world data. On one hand, designing the optimal privacy mapping requires knowledge of the prior distribution linking the private data and the data to be released, which is often unavailable in practice. On the other hand, the optimization may become intractable when the data takes values in a large alphabet or is high-dimensional. Our work makes three major contributions. First, we provide bounds on the impact of a mismatched prior on the privacy-utility tradeoff. Second, we show how to reduce the size of the optimization by introducing a quantization step, and how to generate privacy mappings under quantization. Third, we evaluate our method on two datasets, including a new dataset that we collected, showing correlations between political convictions and TV viewing habits. We demonstrate that good privacy properties can be achieved with limited distortion, so as not to undermine the original purpose of the publicly released data, e.g., recommendations.
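The framework in the abstract can be illustrated on a toy discrete example. The sketch below is illustrative only: the joint distribution of private data S and public data Y, the flip probability, and the Hamming distortion measure are assumptions, and the fixed randomized-response mapping stands in for the optimal mapping that the paper obtains by convex optimization. It shows how a privacy mapping Q(ŷ|y) reduces the information leakage I(S; Ŷ) at the price of bounded expected distortion.

```python
import numpy as np

def mutual_information(P):
    """I(S; Y) in nats for a joint distribution P[s, y]."""
    p_s = P.sum(axis=1, keepdims=True)
    p_y = P.sum(axis=0, keepdims=True)
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / (p_s @ p_y)[mask])))

# Hypothetical joint distribution of private data S and public data Y.
P_sy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])
p_y = P_sy.sum(axis=0)

# Candidate privacy mapping Q[y, yhat] = p(yhat | y): flip Y w.p. 0.2.
# (The paper solves a convex program for the optimal Q; this fixed
# mapping only illustrates the leakage/distortion tradeoff.)
eps = 0.2
Q = np.array([[1 - eps, eps],
              [eps, 1 - eps]])

P_syhat = P_sy @ Q                          # joint of S and released Yhat
distortion = float(p_y @ (1 - np.diag(Q)))  # expected Hamming distortion

print(f"leakage before: {mutual_information(P_sy):.3f} nats")
print(f"leakage after:  {mutual_information(P_syhat):.3f} nats")
print(f"distortion:     {distortion:.3f}")
```

By the data processing inequality, any such mapping satisfies I(S; Ŷ) ≤ I(S; Y); here the leakage drops substantially while the expected distortion stays at 0.2.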
Pages: 1240-1255 (16 pages)
Related Papers (50 total)
  • [1] PRIVACY IN AMERICA - IS YOUR PRIVATE LIFE IN THE PUBLIC EYE - LINOWES,DF
    DANZIGER, JN
    [J]. AMERICAN POLITICAL SCIENCE REVIEW, 1992, 86 (01) : 246 - 247
  • [2] Privacy-Preserving Network Embedding Against Private Link Inference Attacks
    Han, Xiao
    Yang, Yuncong
    Wang, Leye
    Wu, Junjie
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (02) : 847 - 859
  • [3] Managing your public image
    Hoover, D
    Rempel, C
    Wilson, K
    [J]. ADVANCES IN PORK PRODUCTION, VOL 10, 1999, : 153 - 160
  • [4] PRIVACY IN AMERICA - IS YOUR PRIVATE LIFE IN THE PUBLIC-EYE - LINOWES,DF
    RULE, JB
    [J]. NEW YORK TIMES BOOK REVIEW, 1990, : 34 - 34
  • [5] Managing your privacy in Mobile Applications with Mockingbird
    Ferrari, Alan
    Puccinelli, Daniele
    Giordano, Silvia
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATION WORKSHOPS (PERCOM WORKSHOPS), 2015, : 288 - 291
  • [6] Managing your privacy in an on-line world
    Summers, PD
    [J]. IEEE EXPERT-INTELLIGENT SYSTEMS & THEIR APPLICATIONS, 1997, 12 (01): : 76 - 77
  • [7] GO PUBLIC, YOUR PRIVACY IS DEAD
    MORRIS, RB
    [J]. AMERICAN BAR ASSOCIATION JOURNAL, 1977, 63 (NOV): : 1558 - 1561
  • [8] Bringing psychology to the public: Have your say
    Jones, Fiona
    [J]. PSYCHOLOGIST, 2012, 25 (04) : 272 - 272
  • [9] Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
    Mehnaz, Shagufta
    Dibbo, Sayanton V.
    Kabir, Ehsanul
    Li, Ninghui
    Bertino, Elisa
    [J]. PROCEEDINGS OF THE 31ST USENIX SECURITY SYMPOSIUM, 2022, : 4579 - 4596
  • [10] Inference Attacks against Kin Genomic Privacy
    Ayday, Erman
    Humbert, Mathias
    [J]. IEEE SECURITY & PRIVACY, 2017, 15 (05) : 29 - 37