Semantic-Aware Adaptive Binary Search for Hard-Label Black-Box Attack

Cited: 0
Authors
Ma, Yiqing [1 ]
Lucke, Kyle [2 ]
Xian, Min [2 ]
Vakanski, Aleksandar [2 ,3 ]
Affiliations
[1] Univ Utah, Huntsman Canc Inst, Salt Lake City, UT 84112 USA
[2] Univ Idaho, Dept Comp Sci, Idaho Falls, ID 83402 USA
[3] Univ Idaho, Dept Nucl Engn & Ind Management, Idaho Falls, ID 83402 USA
Keywords
adversarial attack; hard-label black-box attack; adaptive binary search; breast ultrasound; semantic-aware search;
DOI
10.3390/computers13080203
Chinese Library Classification
TP39 [Computer Applications];
Subject Classification Codes
081203 ; 0835 ;
Abstract
Despite the widely reported potential of deep neural networks for automated breast tumor classification and detection, these models are vulnerable to adversarial attacks, which can cause significant performance degradation across datasets. In this paper, we introduce a novel adversarial attack approach under the decision-based black-box setting, in which the attacker has no access to the model parameters and each query to the target model returns only the final class label prediction (i.e., a hard-label attack). The proposed attack approach has two major components: adaptive binary search and semantic-aware search. The adaptive binary search uses a coarse-to-fine strategy that applies adaptive tolerance values at different search stages to reduce unnecessary queries. The proposed semantic mask-aware search restricts the search space using breast anatomy, which substantially reduces invalid searches. We validate the proposed approach on a dataset of 3378 breast ultrasound images and compare it with a state-of-the-art method by attacking five deep learning models. The results demonstrate that the proposed approach generates imperceptible adversarial samples at a high success rate (between 99.52% and 100%) and dramatically reduces the average and median number of queries by 23.96% and 31.79%, respectively, compared with the state-of-the-art approach.
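The abstract describes the two components only at a high level. As a rough illustration, the minimal Python sketch below shows how a coarse-to-fine (adaptive-tolerance) binary search toward the decision boundary can be combined with a semantic mask that confines the perturbation to the anatomically relevant region. This is not the authors' implementation: the oracle query_model, the mask, and the tolerance schedule are all illustrative assumptions.

import numpy as np

def query_model(image):
    """Hard-label black-box oracle: returns only the predicted class label."""
    raise NotImplementedError  # placeholder for the target model under attack

def masked_direction(x, x_adv, mask):
    """Restrict the search direction to the semantic mask (e.g., the breast
    region), so queries are not spent perturbing irrelevant background."""
    d = (x_adv - x) * mask
    return d / (np.linalg.norm(d) + 1e-12)

def adaptive_binary_search(x, x_adv, true_label, mask,
                           tolerances=(1e-1, 1e-2, 1e-3), max_queries=1000):
    """Coarse-to-fine binary search for the boundary along a masked direction.
    Early stages stop at a loose tolerance (few queries); later stages refine
    the already-small interval with tighter tolerances."""
    direction = masked_direction(x, x_adv, mask)
    lo = 0.0
    hi = float(np.linalg.norm((x_adv - x) * mask))  # assumed still adversarial
    queries = 0
    for tol in tolerances:                 # stage-wise (adaptive) tolerance
        while hi - lo > tol and queries < max_queries:
            mid = 0.5 * (lo + hi)
            candidate = np.clip(x + mid * direction, 0.0, 1.0)
            queries += 1
            if query_model(candidate) != true_label:
                hi = mid                   # adversarial: shrink interval from above
            else:
                lo = mid                   # benign: shrink interval from below
    return np.clip(x + hi * direction, 0.0, 1.0), queries

Here x is the clean image in [0, 1], x_adv is any initial adversarial example, and mask is a binary array marking the breast region; each tolerance stage reuses the interval reached by the previous, coarser stage, which is the query-saving idea the abstract attributes to the adaptive binary search.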
Pages: 14