Black-box Attacks Against Neural Binary Function Detection

Cited by: 0
Authors
Bundt, Joshua [1 ,2 ]
Davinroy, Michael [1 ]
Agadakos, Ioannis [1 ,3 ]
Oprea, Alina [1 ]
Robertson, William [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Army Cyber Inst, West Point, NY 10996 USA
[3] Amazon, Seattle, WA USA
Funding
National Science Foundation (NSF);
Keywords
binary analysis; disassembly; deep neural network; function boundary detection
DOI
10.1145/3607199.3607200
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Binary analyses based on deep neural networks (DNNs), or neural binary analyses (NBAs), have become a hotly researched topic in recent years. DNNs have been wildly successful at pushing the performance and accuracy envelopes in the natural language and image processing domains. Thus, DNNs are highly promising for solving binary analysis problems that are hard due to a lack of complete information resulting from the lossy compilation process. Despite this promise, it is unclear whether the prevailing strategy of repurposing embeddings and model architectures originally developed for other problem domains is sound given the adversarial contexts under which binary analysis often operates. In this paper, we empirically demonstrate that the current state of the art in neural function boundary detection is vulnerable to both inadvertent and deliberate adversarial attacks. We proceed from the insight that current-generation NBAs are built upon embeddings and model architectures intended to solve syntactic problems. We devise a simple, reproducible, and scalable black-box methodology for exploring the space of inadvertent attacks - instruction sequences that could be emitted by common compiler toolchains and configurations - that exploits this syntactic design focus. We then show that these inadvertent misclassifications can be exploited by an attacker, serving as the basis for a highly effective black-box adversarial example generation process. We evaluate this methodology against two state-of-the-art neural function boundary detectors: XDA and DeepDi. We conclude with an analysis of the evaluation data and recommendations for how future research might avoid succumbing to similar attacks.
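To make the black-box probing idea concrete, the sketch below is a hypothetical illustration, not the authors' released code or exact methodology. It assumes a detector wrapped as detect(code: bytes) returning the set of predicted function-start byte offsets; XDA and DeepDi would each need a thin wrapper to match this signature. The prologue bytes, padding pool, and probe construction are illustrative assumptions.

# Hypothetical black-box probe for a neural function-start detector.
# Assumption: the detector under test is exposed as
#   detect(code: bytes) -> set of predicted function-start offsets in `code`.
import random

# x86-64 byte sequences a common compiler toolchain could plausibly emit.
PROLOGUE = bytes.fromhex("554889e5")   # push rbp; mov rbp, rsp
PADDING_POOL = [
    bytes.fromhex("90"),               # 1-byte nop
    bytes.fromhex("0f1f00"),           # 3-byte nop
    bytes.fromhex("cc"),               # int3, common inter-function padding
]

def make_probe(rng, max_pad=64):
    """Embed a genuine prologue at a random offset inside plausible padding."""
    pre = b"".join(rng.choice(PADDING_POOL) for _ in range(rng.randint(0, max_pad)))
    post = b"".join(rng.choice(PADDING_POOL) for _ in range(rng.randint(0, max_pad)))
    return pre + PROLOGUE + post, len(pre)  # (probe bytes, true start offset)

def miss_rate(detect, trials=1000, seed=0):
    """Fraction of probes whose true function start the detector fails to report."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        code, start = make_probe(rng)
        if start not in detect(code):
            misses += 1
    return misses / trials

Sweeping the padding distribution, or substituting other compiler-emitted byte sequences, over many seeds maps out which byte contexts a detector mishandles; probes that reliably suppress or induce function-start predictions are exactly the raw material for the deliberate adversarial examples the abstract describes.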
Pages: 1-16
Page count: 16
Related Papers
50 records in total
  • [1] Binary Black-Box Adversarial Attacks with Evolutionary Learning against IoT Malware Detection
    Wang, Fangwei
    Lu, Yuanyuan
    Wang, Changguang
    Li, Qingru
    [J]. WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2021, 2021
  • [2] Towards Lightweight Black-Box Attacks Against Deep Neural Networks
    Sun, Chenghao
    Zhang, Yonggang
    Wan, Chaoqun
    Wang, Qizhou
    Li, Ya
    Liu, Tongliang
    Han, Bo
    Tian, Xinmei
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [3] Black-box attacks against log anomaly detection with adversarial examples
    Lu, Siyang
    Wang, Mingquan
    Wang, Dongdong
    Wei, Xiang
    Xiao, Sizhe
    Wang, Zhiwei
    Han, Ningning
    Wang, Liqiang
    [J]. INFORMATION SCIENCES, 2023, 619 : 249 - 262
  • [4] PRADA: Practical Black-box Adversarial Attacks against Neural Ranking Models
    Wu, Chen
    Zhang, Ruqing
    Guo, Jiafeng
    de Rijke, Maarten
    Fan, Yixing
    Cheng, Xueqi
    [J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (04)
  • [5] Adversarial Black-Box Attacks Against Network Intrusion Detection Systems: A Survey
    Alatwi, Huda Ali
    Aldweesh, Amjad
    [J]. 2021 IEEE WORLD AI IOT CONGRESS (AIIOT), 2021, : 34 - 40
  • [6] Impossibility of Black-Box Simulation Against Leakage Attacks
    Ostrovsky, Rafail
    Persiano, Giuseppe
    Visconti, Ivan
    [J]. ADVANCES IN CRYPTOLOGY, PT II, 2015, 9216 : 130 - 149
  • [7] Practical Black-Box Attacks against Machine Learning
    Papernot, Nicolas
    McDaniel, Patrick
    Goodfellow, Ian
    Jha, Somesh
    Celik, Z. Berkay
    Swami, Ananthram
    [J]. PROCEEDINGS OF THE 2017 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (ASIA CCS'17), 2017, : 506 - 519
  • [8] Boundary Defense Against Black-box Adversarial Attacks
    Aithal, Manjushree B.
    Li, Xiaohua
    [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2349 - 2356
  • [9] Black-Box Complexity of the Binary Value Function
    Bulanova, Nina
    Buzdalov, Maxim
    [J]. PROCEEDINGS OF THE 2019 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION (GECCO'19 COMPANION), 2019, : 423 - 424
  • [10] Topic-oriented Adversarial Attacks against Black-box Neural Ranking Models
    Liu, Yu-An
    Zhang, Ruqing
    Guo, Jiafeng
    de Rijke, Maarten
    Chen, Wei
    Fan, Yixing
    Cheng, Xueqi
    [J]. PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 1700 - 1709