Black-box Attacks Against Neural Binary Function Detection

Cited by: 0
Authors
Bundt, Joshua [1 ,2 ]
Davinroy, Michael [1 ]
Agadakos, Ioannis [1 ,3 ]
Oprea, Alina [1 ]
Robertson, William [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Army Cyber Inst, West Point, NY 10996 USA
[3] Amazon, Seattle, WA USA
Funding
U.S. National Science Foundation
Keywords
binary analysis; disassembly; deep neural network; function boundary detection; CODE;
DOI
10.1145/3607199.3607200
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Binary analyses based on deep neural networks (DNNs), or neural binary analyses (NBAs), have become a hotly researched topic in recent years. DNNs have been wildly successful at pushing the performance and accuracy envelopes in the natural language and image processing domains. Thus, DNNs are highly promising for solving binary analysis problems that are hard due to a lack of complete information resulting from the lossy compilation process. Despite this promise, it is unclear that the prevailing strategy of repurposing embeddings and model architectures originally developed for other problem domains is sound given the adversarial contexts under which binary analysis often operates. In this paper, we empirically demonstrate that the current state of the art in neural function boundary detection is vulnerable to both inadvertent and deliberate adversarial attacks. We proceed from the insight that current generation NBAs are built upon embeddings and model architectures intended to solve syntactic problems. We devise a simple, reproducible, and scalable black-box methodology for exploring the space of inadvertent attacks - instruction sequences that could be emitted by common compiler toolchains and configurations - that exploits this syntactic design focus. We then show that these inadvertent misclassifications can be exploited by an attacker, serving as the basis for a highly effective black-box adversarial example generation process. We evaluate this methodology against two state-of-the-art neural function boundary detectors: XDA and DeepDi. We conclude with an analysis of the evaluation data and recommendations for how future research might avoid succumbing to similar attacks.
Pages: 1-16 (16 pages)