Differentially Private Hypothesis Transfer Learning

Cited by: 7
Authors
Wang, Yang [1 ]
Gu, Quanquan [2 ]
Brown, Donald [1 ]
Affiliations
[1] Univ Virginia, Dept Syst & Informat Engn, Charlottesville, VA 22904 USA
[2] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
Funding
National Science Foundation (NSF);
Keywords
Differential privacy; Transfer learning;
DOI
10.1007/978-3-030-10928-8_48
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the focus of machine learning has been shifting to the paradigm of transfer learning, where the data distribution in the target domain differs from that in the source domain. This setting is prevalent in real-world classification problems, and many well-established theoretical results from the classical supervised learning paradigm break down under it. In addition, growing awareness of privacy protection restricts access to source domain samples and poses new challenges for the development of privacy-preserving transfer learning algorithms. In this paper, we propose a novel differentially private multiple-source hypothesis transfer learning method for logistic regression. The target learner operates on differentially private hypotheses and importance weighting information from the sources to construct informative Gaussian priors for its logistic regression model. By leveraging a publicly available auxiliary data set, the importance weighting information can be used to determine the relationship between the source domain and the target domain without leaking source data privacy. Our approach provides a robust performance boost even when high-quality labeled samples are extremely scarce in the target data set. Extensive experiments on two real-world data sets confirm the performance improvement of our approach over several baselines. Data related to this paper is available at: http://qwone.com/~jason/20Newsgroups/ and https://www.cs.jhu.edu/~mdredze/datasets/sentiment/index2.html.
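The abstract's central idea, centering a Gaussian prior for the target logistic regression at a hypothesis released by a source domain, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: `fit_with_gaussian_prior`, the prior precision `lam`, the toy data, and the noise added to `w_true` (standing in for a differentially private source release) are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_with_gaussian_prior(X, y, w_src, lam=1.0, lr=0.1, n_iter=500):
    """MAP estimate for logistic regression with a Gaussian prior
    centered at a source hypothesis: minimize the average logistic
    loss plus (lam / 2) * ||w - w_src||^2, via gradient descent."""
    w = w_src.copy()
    n = len(y)
    for _ in range(n_iter):
        # Gradient of the logistic loss plus the Gaussian-prior penalty.
        grad = X.T @ (sigmoid(X @ w) - y) / n + lam * (w - w_src)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))               # small target-domain sample
w_true = np.array([1.0, -2.0, 0.5])
y = (sigmoid(X @ w_true) > 0.5).astype(float)
# Noisy copy of the true weights, standing in for a differentially
# private hypothesis released by a source domain (illustrative only).
w_src = w_true + rng.normal(scale=0.5, size=3)
w_hat = fit_with_gaussian_prior(X, y, w_src)
```

With few target labels, the prior keeps the estimate near the source hypothesis; as `lam` shrinks, the fit relies more on the target data alone.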
Pages: 811-826
Page count: 16
Related papers
50 records
  • [1] Differentially private knowledge transfer for federated learning
    Qi, Tao
    Wu, Fangzhao
    Wu, Chuhan
    He, Liang
    Huang, Yongfeng
    Xie, Xing
    [J]. NATURE COMMUNICATIONS, 2023, 14 (01)
  • [3] A Knowledge Transfer Framework for Differentially Private Sparse Learning
    Wang, Lingxiao
    Gu, Quanquan
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 6235 - 6242
  • [4] Differentially Private Nonparametric Hypothesis Testing
    Couch, Simon
    Kazan, Zeki
    Shi, Kaiyan
    Bray, Andrew
    Groce, Adam
    [J]. PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 737 - 751
  • [5] On Differentially Private Gaussian Hypothesis Testing
    Degue, Kwassi H.
    Le Ny, Jerome
    [J]. 2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018, : 842 - 847
  • [6] Differentially Private Hypothesis Testing for Linear Regression
    Alabi, Daniel G.
    Vadhan, Salil P.
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2023, 24
  • [7] Hypothesis Testing for Differentially Private Linear Regression
    Alabi, Daniel
    Vadhan, Salil
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [8] FL-PATE: Differentially Private Federated Learning with Knowledge Transfer
    Pan, Yanghe
    Ni, Jianbing
    Su, Zhou
    [J]. 2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [9] Differentially Private Distributed Learning
    Zhou, Yaqin
    Tang, Shaojie
    [J]. INFORMS JOURNAL ON COMPUTING, 2020, 32 (03) : 779 - 789
  • [10] Differentially Private Fair Learning
    Jagielski, Matthew
    Kearns, Michael
    Mao, Jieming
    Oprea, Alina
    Roth, Aaron
    Sharifi-Malvajerdi, Saeed
    Ullman, Jonathan
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97