Adversarial Attack and Defense in Deep Ranking

Cited by: 2
Authors
Zhou, Mo [1 ]
Wang, Le [3 ,1 ]
Niu, Zhenxing [2 ]
Zhang, Qilin [3 ]
Zheng, Nanning
Hua, Gang
Affiliations
[1] Xi An Jiaotong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Peoples R China
[2] Xidian Univ, Xian 710071, Peoples R China
[3] Apple, Cupertino, CA 95014 USA
Funding
National Key Research and Development Program of China;
Keywords
Robustness; Perturbation methods; Glass box; Training; Face recognition; Adaptation models; Task analysis; Adversarial attack; adversarial defense; deep metric learning; deep ranking; ranking model robustness; IMAGE SIMILARITY;
DOI
10.1109/TPAMI.2024.3365699
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Network classifiers are vulnerable to adversarial attacks, where an imperceptible perturbation could result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities. Then a triplet-like objective function is designed to obtain the optimal perturbation. Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks, where the model learns to prevent the adversarial attack from pulling the positive and negative samples close to each other. To comprehensively measure the empirical adversarial robustness of a ranking model with our defense, we propose an empirical robustness score, which involves a set of representative attacks against ranking models. Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196, and Stanford Online Products datasets. Experimental results demonstrate that our attacks can effectively compromise a typical deep ranking system. Nevertheless, our defense can significantly improve the ranking system's robustness and simultaneously mitigate a wide range of attacks.
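The abstract describes the attack as a set of ranking inequalities relaxed into a triplet-like objective that is optimized to find a small perturbation. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' released code: the names rank_raise_loss, candidate_attack, embed, margin, and eps are illustrative assumptions, and cosine distance with a PGD-style L-infinity budget stands in for whatever distance measure and optimizer the paper actually uses.

import torch
import torch.nn.functional as F


def rank_raise_loss(embed, query, candidate, competitors, perturbation, margin=0.1):
    # Each desired inequality d(q, c + p) < d(q, x_j) becomes one hinge term.
    q = F.normalize(embed(query), dim=-1)                      # (1, D) query embedding
    c = F.normalize(embed(candidate + perturbation), dim=-1)   # (1, D) perturbed candidate
    x = F.normalize(embed(competitors), dim=-1)                # (N, D) competitor embeddings
    d_pos = 1.0 - (q * c).sum(dim=-1)                          # cosine distance query-candidate
    d_neg = 1.0 - (q @ x.t()).squeeze(0)                       # cosine distances query-competitors
    return F.relu(d_pos - d_neg + margin).mean()               # triplet-like surrogate of the inequalities


def candidate_attack(embed, query, candidate, competitors, eps=8 / 255, steps=10):
    # PGD-style search for the perturbation under an L-infinity budget eps.
    p = torch.zeros_like(candidate, requires_grad=True)
    for _ in range(steps):
        loss = rank_raise_loss(embed, query, candidate, competitors, p)
        loss.backward()
        with torch.no_grad():
            p -= (eps / steps) * p.grad.sign()   # descend the loss to raise the candidate's rank
            p.clamp_(-eps, eps)                  # keep the perturbation imperceptible
        p.grad.zero_()
    return p.detach()

Lowering a candidate's rank, or perturbing the query side instead (the Query Attack of the abstract), would follow the same pattern with the inequality directions flipped or the perturbation applied to the query image.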
Pages: 5306 - 5324
Number of pages: 19
Related Papers
50 records in total
  • [31] Generative Adversarial Networks: A Survey on Attack and Defense Perspective
    Zhang, Chenhan
    Yu, Shui
    Tian, Zhiyi
    Yu, James J. Q.
    ACM COMPUTING SURVEYS, 2024, 56 (04)
  • [32] CRank: Reusable Word Importance Ranking for Text Adversarial Attack
    Chen, Xinyi
    Liu, Bo
    APPLIED SCIENCES-BASEL, 2021, 11 (20)
  • [33] Conditional Generative Adversarial Networks with Adversarial Attack and Defense for Generative Data Augmentation
    Baek, Francis
    Kim, Daeho
    Park, Somin
    Kim, Hyoungkwan
    Lee, SangHyun
    JOURNAL OF COMPUTING IN CIVIL ENGINEERING, 2022, 36 (03)
  • [34] Practical Relative Order Attack in Deep Ranking
    Zhou, Mo
    Wang, Le
    Niu, Zhenxing
    Zhang, Qilin
    Xu, Yinghui
    Zheng, Nanning
    Hua, Gang
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 16393 - 16402
  • [35] Targeted Attack and Defense for Deep Hashing
    Wang, Xunguang
    Zhang, Zheng
    Lu, Guangming
    Xu, Yong
    SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 2298 - 2302
  • [36] Deep adversarial attack on target detection systems
    Osahor, Uche M.
    Nasrabadi, Nasser M.
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [37] Bounded Adversarial Attack on Deep Content Features
    Xu, Qiuling
    Tao, Guanhong
    Zhang, Xiangyu
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15182 - 15191
  • [38] ADVERSARIAL WATERMARKING TO ATTACK DEEP NEURAL NETWORKS
    Wang, Gengxing
    Chen, Xinyuan
    Xu, Chang
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1962 - 1966
  • [39] A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks
    Deldjoo, Yashar
    Di Noia, Tommaso
    Merra, Felice Antonio
    ACM COMPUTING SURVEYS, 2021, 54 (02)
  • [40] Adversarial Defense on Harmony: Reverse Attack for Robust AI Models Against Adversarial Attacks
    Kim, Yebon
    Jung, Jinhyo
    Kim, Hyunjun
    So, Hwisoo
    Ko, Yohan
    Shrivastava, Aviral
    Lee, Kyoungwoo
    Hwang, Uiwon
    IEEE ACCESS, 2024, 12 : 176485 - 176497