PORE: Provably Robust Recommender Systems against Data Poisoning Attacks

Cited by: 0
Authors
Jia, Jinyuan [1 ]
Liu, Yupei [2 ]
Hu, Yuepeng [2 ]
Gong, Neil Zhenqiang [2 ]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Duke Univ, Durham, NC 27706 USA
Keywords
DOI
Not available
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline Code
0812
Abstract
Data poisoning attacks spoof a recommender system into making arbitrary, attacker-desired recommendations by injecting fake users with carefully crafted rating scores into the system. We envision a cat-and-mouse game between such data poisoning attacks and their defenses: new defenses are designed to defend against existing attacks, and new attacks are designed to break them. To prevent this cat-and-mouse game, in this work we propose PORE, the first framework for building provably robust recommender systems. PORE can transform any existing recommender system into one that is provably robust against untargeted data poisoning attacks, which aim to reduce the overall performance of a recommender system. Suppose PORE recommends top-N items to a user when there is no attack. We prove that PORE still recommends at least r of the N items to the user under any data poisoning attack, where r is a function of the number of fake users in the attack. Moreover, we design an efficient algorithm to compute r for each user. We empirically evaluate PORE on popular benchmark datasets.
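The shape of the certified guarantee can be illustrated with a toy check (a minimal sketch only: `certified_r` stands in for the per-user bound that PORE's algorithm would compute, and the item IDs are hypothetical; this is not PORE's actual certification procedure):

```python
# Illustrates the guarantee PORE certifies: of the top-N items recommended
# with no attack, at least r are still recommended under any poisoning
# attack with a bounded number of fake users.

def surviving_items(clean_top_n, attacked_top_n):
    """Items recommended without attack that are still recommended under attack."""
    return set(clean_top_n) & set(attacked_top_n)

def guarantee_holds(clean_top_n, attacked_top_n, certified_r):
    """True if at least `certified_r` of the clean top-N items survive the attack."""
    return len(surviving_items(clean_top_n, attacked_top_n)) >= certified_r

# Toy example: N = 5 recommendations, certified bound r = 3.
clean = ["i1", "i2", "i3", "i4", "i5"]
attacked = ["i1", "i3", "i9", "i4", "i7"]
print(guarantee_holds(clean, attacked, 3))  # prints True: i1, i3, i4 survive
```

Here r = 3 means an attacker, no matter how its fake users rate, can displace at most N - r = 2 of the original recommendations for this user.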
Pages: 1703 - 1720
Page count: 18
Related Papers
50 items in total
  • [41] Data Poisoning Attacks and Defenses to Crowdsourcing Systems
    Fang, Minghong
    Sun, Minghao
    Li, Qi
    Gong, Neil Zhenqiang
    Tian, Jin
    Liu, Jia
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 969 - 980
  • [42] Data poisoning attacks against machine learning algorithms
    Yerlikaya, Fahri Anil
    Bahtiyar, Serif
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 208
  • [43] Securing Machine Learning Against Data Poisoning Attacks
    Allheeib, Nasser
    INTERNATIONAL JOURNAL OF DATA WAREHOUSING AND MINING, 2024, 20 (01)
  • [44] Robust federated contrastive recommender system against targeted model poisoning attack
    Yuan, Wei
    Yang, Chaoqun
    Qu, Liang
    Ye, Guanhua
    Nguyen, Quoc Viet Hung
    Yin, Hongzhi
    SCIENCE CHINA INFORMATION SCIENCES, 2025, 68 (4) : 50 - 65
  • [46] Data Poisoning Attack against Recommender System Using Incomplete and Perturbed Data
    Zhang, Hengtong
    Tian, Changxin
    Li, Yaliang
    Su, Lu
    Yang, Nan
    Zhao, Wayne Xin
    Gao, Jing
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 2154 - 2164
  • [47] Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks
    Yang, Wenhan
    Gao, Jingdong
    Mirzasoleiman, Baharan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [48] Robust Estimation Method against Poisoning Attacks for Key-Value Data with Local Differential Privacy
    Horigome, Hikaru
    Kikuchi, Hiroaki
    Fujita, Masahiro
    Yu, Chia-Mu
    APPLIED SCIENCES-BASEL, 2024, 14 (14)
  • [49] Local Differential Privacy Protocol for Making Key-Value Data Robust Against Poisoning Attacks
    Horigome, Hikaru
    Kikuchi, Hiroaki
    Yu, Chia-Mu
    MODELING DECISIONS FOR ARTIFICIAL INTELLIGENCE, MDAI 2023, 2023, 13890 : 241 - 252
  • [50] Learning a robust foundation model against clean-label data poisoning attacks at downstream tasks
    Zhou, Ting
    Yan, Hanshu
    Zhang, Jingfeng
    Liu, Lei
    Han, Bo
    NEURAL NETWORKS, 2024, 169 : 756 - 763