Adversarial Attack and Defense in Deep Ranking

Cited by: 2
Authors
Zhou, Mo [1 ]
Wang, Le [3 ,1 ]
Niu, Zhenxing [2 ]
Zhang, Qilin [3 ]
Zheng, Nanning
Hua, Gang
Affiliations
[1] Xian Jiaotong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Peoples R China
[2] Xidian Univ, Xian 710071, Peoples R China
[3] Apple, Cupertino, CA 95014 USA
Funding
National Key Research and Development Program of China;
Keywords
Robustness; Perturbation methods; Glass box; Training; Face recognition; Adaptation models; Task analysis; Adversarial attack; adversarial defense; deep metric learning; deep ranking; ranking model robustness; IMAGE SIMILARITY;
DOI
10.1109/TPAMI.2024.3365699
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Network classifiers are vulnerable to adversarial attacks, where an imperceptible perturbation could result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities. Then a triplet-like objective function is designed to obtain the optimal perturbation. Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks, where the model learns to prevent the adversarial attack from pulling the positive and negative samples close to each other. To comprehensively measure the empirical adversarial robustness of a ranking model with our defense, we propose an empirical robustness score, which involves a set of representative attacks against ranking models. Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196, and Stanford Online Products datasets. Experimental results demonstrate that our attacks can effectively compromise a typical deep ranking system. Nevertheless, our defense can significantly improve the ranking system's robustness and simultaneously mitigate a wide range of attacks.
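The attack recipe the abstract describes (encode the desired ranking order as inequalities, then minimize a triplet-like objective to find the perturbation) can be sketched in miniature. The snippet below is an illustrative assumption rather than the paper's implementation: it substitutes a fixed linear map `W` for the deep embedding, keeps only the attraction term of the triplet objective (pulling the perturbed candidate toward the query), and takes signed-gradient, PGD-style steps inside an L-infinity ball. All names and constants (`W`, `q`, `c`, `eps`, `step`) are hypothetical.

```python
import numpy as np

# Hypothetical linear embedding f(x) = W @ x, standing in for a deep
# ranking model so the gradient can be written in closed form.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
q = rng.standard_normal(16)   # query
c = rng.standard_normal(16)   # candidate whose rank we want to raise
eps, step, iters = 0.5, 0.1, 50  # L_inf budget, step size, iterations

def dist(x, y):
    """Squared embedding-space distance used for ranking."""
    return float(np.sum((W @ x - W @ y) ** 2))

delta = np.zeros_like(c)
for _ in range(iters):
    # Gradient of ||W q - W (c + delta)||^2 w.r.t. delta; the full
    # triplet objective would also push away from competing candidates.
    g = -2.0 * W.T @ (W @ q - W @ (c + delta))
    # PGD-style signed-gradient descent step, projected onto the eps-ball.
    delta = np.clip(delta - step * np.sign(g), -eps, eps)

print(dist(q, c), dist(q, c + delta))  # perturbed candidate sits closer to the query
```

Under this toy model the perturbed candidate's distance to the query drops, so it would be ranked higher; the Query Attack is the mirror image, perturbing the query instead of a candidate.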
Pages: 5306-5324
Page count: 19
Related papers
50 records total
  • [1] Adversarial Attack and Defense in Breast Cancer Deep Learning Systems
    Li, Yang
    Liu, Shaoying
    BIOENGINEERING-BASEL, 2023, 10 (08)
  • [2] Adversarial attack and defense strategies for deep speaker recognition systems
    Jati, Arindam
    Hsu, Chin-Cheng
    Pal, Monisankha
    Peri, Raghuveer
    AbdAlmageed, Wael
    Narayanan, Shrikanth
    COMPUTER SPEECH AND LANGUAGE, 2021, 68
  • [3] Adversarial Examples for Graph Data: Deep Insights into Attack and Defense
    Wu, Huijun
    Wang, Chen
    Tyshetskiy, Yuriy
    Docherty, Andrew
    Lu, Kai
    Zhu, Liming
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 4816 - 4823
  • [4] Adversarial Attack Defense Based on the Deep Image Prior Network
    Sutanto, Richard Evan
    Lee, Sukho
    INFORMATION SCIENCE AND APPLICATIONS, 2020, 621 : 519 - 526
  • [5] Sinkhorn Adversarial Attack and Defense
    Subramanyam, A. V.
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 4039 - 4049
  • [6] Adversarial Attack and Defense: A Survey
    Liang, Hongshuo
    He, Erlu
    Zhao, Yangyang
    Jia, Zhe
    Li, Hao
    ELECTRONICS, 2022, 11 (08)
  • [7] Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks
    Liu, Qi
    Liu, Tao
    Wen, Wujie
    CYBER SENSING 2018, 2018, 10630
  • [8] Adversarial Attack and Defense on Deep Learning for Air Transportation Communication Jamming
    Liu, Mingqian
    Zhang, Zhenju
    Chen, Yunfei
    Ge, Jianhua
    Zhao, Nan
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (01) : 973 - 986
  • [9] Adversarial Deep Learning for Cognitive Radio Security: Jamming Attack and Defense Strategies
    Shi, Yi
    Sagduyu, Yalin E.
    Erpek, Tugba
    Davaslioglu, Kemal
    Lu, Zhuo
    Li, Jason H.
    2018 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2018
  • [10] Gradient-Based Adversarial Ranking Attack
    Wu C.
    Zhang R.
    Guo J.
    Fan Y.
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2022, 35 (03): 254 - 261