Adversarial Attack and Defense in Deep Ranking

Cited by: 2
Authors
Zhou, Mo [1]
Wang, Le [3,1]
Niu, Zhenxing [2]
Zhang, Qilin [3]
Zheng, Nanning
Hua, Gang
Affiliations
[1] Xi'an Jiaotong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Peoples R China
[2] Xidian Univ, Xian 710071, Peoples R China
[3] Apple, Cupertino, CA 95014 USA
Funding
National Key Research and Development Program of China;
Keywords
Robustness; Perturbation methods; Glass box; Training; Face recognition; Adaptation models; Task analysis; Adversarial attack; adversarial defense; deep metric learning; deep ranking; ranking model robustness; IMAGE SIMILARITY;
DOI
10.1109/TPAMI.2024.3365699
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep Neural Network (DNN) classifiers are vulnerable to adversarial attacks, where an imperceptible perturbation can result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities; a triplet-like objective function is then designed to obtain the optimal perturbation. To counter these attacks, an anti-collapse triplet defense is proposed to improve the ranking model's robustness against all proposed attacks, whereby the model learns to prevent adversarial perturbations from pulling positive and negative samples close to each other. To comprehensively measure the empirical adversarial robustness of a ranking model with our defense, we propose an empirical robustness score, which aggregates a set of representative attacks against ranking models. Our adversarial ranking attacks and defenses are evaluated on the MNIST, Fashion-MNIST, CUB200-2011, CARS196, and Stanford Online Products datasets. Experimental results demonstrate that our attacks can effectively compromise a typical deep ranking system. Nevertheless, our defense can significantly improve the ranking system's robustness and simultaneously mitigate a wide range of attacks.
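To make the triplet-like attack objective described in the abstract concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes a PyTorch embedding model, Euclidean distances between L2-normalized embeddings, and a PGD-style optimizer, and all names (embed_model, candidate, queries, competitors, eps, alpha) are illustrative assumptions. The desired ranking inequalities d(q, c + r) < d(q, x) are relaxed into hinge terms and the perturbation r is optimized within an L-infinity budget.

```python
import torch
import torch.nn.functional as F

def candidate_attack(embed_model, candidate, queries, competitors,
                     eps=8 / 255, alpha=1 / 255, steps=20, margin=0.0):
    """Hypothetical Candidate-Attack-style perturbation sketch.

    Perturbs `candidate` (a 1xCxHxW image tensor in [0, 1]) so that, for every
    query, it is embedded closer than every image in `competitors`, i.e. the
    ranking inequalities d(q, c + r) < d(q, x) are encouraged via hinge terms.
    """
    embed_model.eval()
    with torch.no_grad():
        q_emb = F.normalize(embed_model(queries), dim=1)      # (Q, D) query embeddings
        x_emb = F.normalize(embed_model(competitors), dim=1)  # (X, D) competitor embeddings

    r = torch.zeros_like(candidate, requires_grad=True)       # adversarial perturbation
    for _ in range(steps):
        c_emb = F.normalize(embed_model(candidate + r), dim=1)   # (1, D)
        d_cq = torch.cdist(q_emb, c_emb).squeeze(1)               # (Q,)   d(q, c + r)
        d_xq = torch.cdist(q_emb, x_emb)                          # (Q, X) d(q, x)
        # Sum of hinge terms over every violated inequality d(q, c + r) < d(q, x).
        loss = F.relu(d_cq.unsqueeze(1) - d_xq + margin).sum()
        loss.backward()
        with torch.no_grad():
            r -= alpha * r.grad.sign()                 # PGD step: shrink the violations
            r.clamp_(-eps, eps)                        # stay inside the L-infinity ball
            r.copy_((candidate + r).clamp(0, 1) - candidate)  # keep the image valid
        r.grad = None
    return (candidate + r).clamp(0, 1).detach()
```

A Query Attack would follow the same pattern with the perturbation applied to the query instead of the candidate. The anti-collapse triplet defense then, per the abstract, trains the embedding model so that such perturbations cannot pull positive and negative samples close to each other; its exact loss is defined in the paper and is not reproduced here.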
Pages: 5306-5324
Number of pages: 19
Related Papers
50 records in total
  • [21] Understanding Universal Adversarial Attack and Defense on Graph. Wang, Tianfeng; Pan, Zhisong; Hu, Guyu; Duan, Yexin; Pan, Yu. INTERNATIONAL JOURNAL ON SEMANTIC WEB AND INFORMATION SYSTEMS, 2022, 18 (01).
  • [22] Cycle-Consistent Adversarial GAN: The Integration of Adversarial Attack and Defense. Jiang, Lingyun; Qiao, Kai; Qin, Ruoxi; Wang, Linyuan; Yu, Wanting; Chen, Jian; Bu, Haibing; Yan, Bin. SECURITY AND COMMUNICATION NETWORKS, 2020, 2020.
  • [23] Adversarial Attack and Defense on Deep Neural Network-Based Voice Processing Systems: An Overview. Chen, Xiaojiao; Li, Sheng; Huang, Hao. APPLIED SCIENCES-BASEL, 2021, 11 (18).
  • [24] A Comprehensive Review and Analysis of Deep Learning-Based Medical Image Adversarial Attack and Defense. Muoka, Gladys W.; Yi, Ding; Ukwuoma, Chiagoziem C.; Mutale, Albert; Ejiyi, Chukwuebuka J.; Mzee, Asha Khamis; Gyarteng, Emmanuel S. A.; Alqahtani, Ali; Al-antari, Mugahed A. MATHEMATICS, 2023, 11 (20).
  • [25] Destabilizing Attack and Robust Defense for Inverter-Based Microgrids by Adversarial Deep Reinforcement Learning. Wang, Yu; Pal, Bikash C. IEEE TRANSACTIONS ON SMART GRID, 2023, 14 (06): 4839-4850.
  • [26] Person re-identification using adversarial haze attack and defense: A deep learning framework. Kanwal, Shansa; Shah, Jamal Hussain; Khan, Muhammad Attique; Nisa, Maryam; Kadry, Seifedine; Sharif, Muhammad; Yasmin, Mussarat; Maheswari, M. COMPUTERS & ELECTRICAL ENGINEERING, 2021, 96.
  • [27] Attack as Defense: Characterizing Adversarial Examples using Robustness. Zhao, Zhe; Chen, Guangke; Wang, Jingyi; Yang, Yiwei; Song, Fu; Sun, Jun. ISSTA '21: PROCEEDINGS OF THE 30TH ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, 2021: 42-55.
  • [28] ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio. Das, Nilaksh; Shanbhogue, Madhuri; Chen, Shang-Tse; Chen, Li; Kounavis, Michael E.; Chau, Duen Horng. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2018, PT III, 2019, 11053: 677-681.
  • [29] Review of Artificial Intelligence Adversarial Attack and Defense Technologies. Qiu, Shilin; Liu, Qihe; Zhou, Shijie; Wu, Chunjiang. APPLIED SCIENCES-BASEL, 2019, 9 (05).
  • [30] Adversarial Attack and Defense on Discrete Time Dynamic Graphs. Zhao, Ziwei; Yang, Yu; Yin, Zikai; Xu, Tong; Zhu, Xi; Lin, Fake; Li, Xueying; Chen, Enhong. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12): 7600-7611.