Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off

Citations: 0
Authors
Liu, Yu-An [1 ,2 ]
Zhang, Ruqing [1 ,2 ]
Zhang, Mingkun [1 ,2 ]
Chen, Wei [1 ,2 ]
de Rijke, Maarten [3 ]
Guo, Jiafeng [1 ,2 ]
Cheng, Xueqi [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, CAS Key Lab Network Data Sci & Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Univ Amsterdam, Amsterdam, Netherlands
Funding
National Natural Science Foundation of China; Dutch Research Council
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Neural ranking models (NRMs) have achieved great success in information retrieval (IR). However, their predictions can easily be manipulated with adversarial examples, which are crafted by adding imperceptible perturbations to legitimate documents. This vulnerability raises significant concerns about their reliability and hinders the widespread deployment of NRMs. By incorporating adversarial examples into training data, adversarial training has become the de facto defense against adversarial attacks on NRMs. Yet this defense mechanism comes with a trade-off between effectiveness and adversarial robustness. In this study, we establish theoretical guarantees regarding the effectiveness-robustness trade-off in NRMs. We decompose the robust ranking error into two components: a natural ranking error, which measures effectiveness, and a boundary ranking error, which measures adversarial robustness. We then define the perturbation invariance of a ranking model and prove that it is a differentiable upper bound on the boundary ranking error, which makes the latter tractable to optimize. Informed by this theoretical analysis, we design a novel perturbation-invariant adversarial training (PIAT) method for ranking models that achieves a better effectiveness-robustness trade-off. PIAT minimizes a regularized surrogate loss in which one term maximizes ranking effectiveness while a regularization term encourages the model's output to be smooth under perturbation, thereby improving adversarial robustness. Experimental results on several ranking models demonstrate the superiority of PIAT over existing adversarial defenses.
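To make the loss structure described in the abstract concrete: by analogy with the TRADES decomposition for classification, the robust ranking error can be read as R_rob = R_nat + R_bdy, with the regularizer standing in for the differentiable upper bound on R_bdy. Below is a minimal, hypothetical PyTorch sketch of such a regularized surrogate loss; the names (piat_style_loss, model, beta) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (an assumption, not the authors' code) of a PIAT-style
# regularized surrogate loss for a neural ranking model.
import torch
import torch.nn.functional as F

def piat_style_loss(model, query, docs, adv_docs, rel_label, beta=1.0):
    """Effectiveness term + perturbation-invariance regularizer.

    model(query, docs) is assumed to return one relevance score per
    candidate document; rel_label is the index of the relevant document.
    """
    nat_scores = model(query, docs)          # shape: (n_docs,)
    adv_scores = model(query, adv_docs)      # scores on perturbed copies

    # Effectiveness term: listwise softmax cross-entropy on natural docs.
    eff_loss = F.cross_entropy(nat_scores.unsqueeze(0),
                               torch.tensor([rel_label]))

    # Invariance regularizer: KL divergence pulls the score distribution
    # under perturbation toward the natural one, i.e. "smooth output".
    invariance = F.kl_div(F.log_softmax(adv_scores, dim=-1),
                          F.softmax(nat_scores, dim=-1),
                          reduction="batchmean")

    return eff_loss + beta * invariance
```

In this sketch, beta controls the trade-off the abstract analyzes: beta = 0 recovers plain effectiveness training, while larger values weight perturbation invariance (robustness) more heavily.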
Pages: 8832 - 8840
Number of pages: 9
Related Papers
7 records
  • [1] Qian, Yaguan; Liang, Xiaoyu; Kang, Ming; Wang, Bin; Gu, Zhaoquan; Wang, Xing; Wu, Chunming. GAAT: Group Adaptive Adversarial Training to Improve the Trade-Off Between Robustness and Accuracy. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36(13).
  • [2] Lee, Kyungmi; Chandrakasan, Anantha P. Understanding the Energy vs. Adversarial Robustness Trade-Off in Deep Neural Networks. 2021 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS 2021), 2021: 46 - 51.
  • [3] Lee, Kyungmi; Chandrakasan, Anantha P. Understanding the Energy vs. Adversarial Robustness Trade-Off in Deep Neural Networks. IEEE OPEN JOURNAL OF CIRCUITS AND SYSTEMS, 2021, 2: 843 - 855.
  • [4] Kamath, Sandesh; Deshpande, Amit; Subrahmanyam, K. V.; Balasubramanian, Vineeth N. Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34.
  • [5] Huang, Jeffrey; Choi, Ho Jin; Figueroa, Nadia. Trade-Off Between Robustness and Rewards: Adversarial Training for Deep Reinforcement Learning Under Large Perturbations. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8(12): 8018 - 8025.
  • [6] Kundu, Souvik; Sundaresan, Sairam; Pedram, Massoud; Beerel, Peter A. FLOAT: Fast Learnable Once-for-All Adversarial Training for Tunable Trade-off between Accuracy and Robustness. 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023: 2348 - 2357.
  • [7] Ma, Linhai; Liang, Liang. Towards Lifting the Trade-off between Accuracy and Adversarial Robustness of Deep Neural Networks with Application on COVID-19 CT Image Classification and Medical Image Segmentation. MEDICAL IMAGING 2023, 2023, 12464.