Black-box Attacks Against Neural Binary Function Detection

Cited: 0
Authors
Bundt, Joshua [1 ,2 ]
Davinroy, Michael [1 ]
Agadakos, Ioannis [1 ,3 ]
Oprea, Alina [1 ]
Robertson, William [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Army Cyber Inst, West Point, NY 10996 USA
[3] Amazon, Seattle, WA USA
Funding
U.S. National Science Foundation
Keywords
binary analysis; disassembly; deep neural network; function boundary detection; CODE;
DOI
10.1145/3607199.3607200
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Binary analyses based on deep neural networks (DNNs), or neural binary analyses (NBAs), have become a hotly researched topic in recent years. DNNs have been wildly successful at pushing the performance and accuracy envelopes in the natural language and image processing domains. Thus, DNNs are highly promising for solving binary analysis problems that are hard due to a lack of complete information resulting from the lossy compilation process. Despite this promise, it is unclear whether the prevailing strategy of repurposing embeddings and model architectures originally developed for other problem domains is sound, given the adversarial contexts under which binary analysis often operates. In this paper, we empirically demonstrate that the current state of the art in neural function boundary detection is vulnerable to both inadvertent and deliberate adversarial attacks. We proceed from the insight that current-generation NBAs are built upon embeddings and model architectures intended to solve syntactic problems. We devise a simple, reproducible, and scalable black-box methodology for exploring the space of inadvertent attacks (instruction sequences that could be emitted by common compiler toolchains and configurations) that exploits this syntactic design focus. We then show that these inadvertent misclassifications can be exploited by an attacker, serving as the basis for a highly effective black-box adversarial example generation process. We evaluate this methodology against two state-of-the-art neural function boundary detectors: XDA and DeepDi. We conclude with an analysis of the evaluation data and recommendations for how future research might avoid succumbing to similar attacks.
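The abstract describes the probing methodology only at a high level. As one hedged illustration, the minimal Python sketch below enumerates short, compiler-plausible filler byte sequences placed before a known function start and queries a black-box boundary oracle; the detect_boundaries() oracle and the candidate byte pool are hypothetical stand-ins for illustration, not the paper's actual harness or the real XDA/DeepDi APIs.

    # Minimal sketch of a black-box probe loop in the spirit of the abstract.
    # detect_boundaries() is a toy stand-in for a neural detector such as
    # XDA or DeepDi; the candidate bytes are illustrative x86-64 encodings
    # a compiler toolchain could plausibly emit between functions.
    import itertools

    CANDIDATES = [
        b"\x90",          # nop
        b"\x66\x90",      # 2-byte nop
        b"\x0f\x1f\x00",  # 3-byte nop (nopl)
        b"\xcc",          # int3 padding
    ]

    PROLOGUE = b"\x55\x48\x89\xe5"  # push rbp; mov rbp, rsp

    def detect_boundaries(code: bytes) -> set:
        """Toy stand-in oracle: flags offsets where a common prologue
        appears. A real experiment would query the model under test."""
        return {i for i in range(len(code)) if code.startswith(PROLOGUE, i)}

    def probe(prefix: bytes, body: bytes, max_len: int = 3) -> list:
        """Enumerate short compiler-plausible filler sequences placed
        before a known function start; return those whose insertion
        suppresses detection of that start."""
        misses = []
        for n in range(1, max_len + 1):
            for combo in itertools.product(CANDIDATES, repeat=n):
                filler = b"".join(combo)
                sample = prefix + filler + body
                true_start = len(prefix) + len(filler)
                if true_start not in detect_boundaries(sample):
                    misses.append(filler)  # inadvertent misclassification
        return misses

    if __name__ == "__main__":
        # ret; <filler>; then a function body opening with a standard
        # prologue. Fillers that hide the boundary are reported.
        print(probe(b"\xc3", PROLOGUE + b"\xc3"))

Against a real detector, each returned filler is a candidate inadvertent attack; the paper further shows such misclassifications can seed a deliberate adversarial example generation process.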
Pages: 1-16 (16 pages)
Related Papers
50 records in total
  • [21] Black-box Attacks to Log-based Anomaly Detection. Huang, Shaohan; Liu, Yi; Fung, Carol; Yang, Hailong; Luan, Zhongzhi. 2022 18th International Conference on Network and Service Management (CNSM 2022): Intelligent Management of Disruptive Network Technologies and Services, 2022: 310-316.
  • [22] Black-box adversarial attacks on XSS attack detection model. Wang, Qiuhua; Yang, Hui; Wu, Guohua; Choo, Kim-Kwang Raymond; Zhang, Zheng; Miao, Gongxun; Ren, Yizhi. Computers & Security, 2022, 113.
  • [23] Black-Box Adversarial Attacks Against Deep Learning Based Malware Binaries Detection with GAN. Yuan, Junkun; Zhou, Shaofang; Lin, Lanfen; Wang, Feng; Cui, Jia. ECAI 2020: 24th European Conference on Artificial Intelligence, 2020, 325: 2536-2542.
  • [24] Simple Black-box Adversarial Attacks. Guo, Chuan; Gardner, Jacob R.; You, Yurong; Wilson, Andrew Gordon; Weinberger, Kilian Q. International Conference on Machine Learning, 2019, Vol. 97.
  • [25] Malware Detection Using Black-Box Neural Method. Pieczynski, Dominik; Jedrzejek, Czeslaw. Multimedia and Network Information Systems, 2019, 833: 180-189.
  • [26] An Adaptive Black-box Defense against Trojan Attacks on Text Data. Alsharadgah, Fatima; Khreishah, Abdallah; Al-Ayyoub, Mahmoud; Jararweh, Yaser; Liu, Guanxiong; Khalil, Issa; Almutiry, Muhannad; Saeed, Nasir. 2021 Eighth International Conference on Social Network Analysis, Management and Security (SNAMS), 2021: 155-162.
  • [27] Efficient Label Contamination Attacks Against Black-Box Learning Models. Zhao, Mengchen; An, Bo; Gao, Wei; Zhang, Teng. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017: 3945-3951.
  • [28] Ensemble adversarial black-box attacks against deep learning systems. Hang, Jie; Han, Keji; Chen, Hui; Li, Yun. Pattern Recognition, 2020, 101.
  • [29] Black-Box Data Poisoning Attacks on Crowdsourcing. Chen, Pengpeng; Yang, Yongqiang; Yang, Dingqi; Sun, Hailong; Chen, Zhijun; Lin, Peng. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023), 2023: 2975-2983.
  • [30] Toward Visual Distortion in Black-Box Attacks. Li, Nannan; Chen, Zhenzhong. IEEE Transactions on Image Processing, 2021, 30: 6156-6167.