Black-box Attacks Against Neural Binary Function Detection

Cited by: 0
Authors
Bundt, Joshua [1 ,2 ]
Davinroy, Michael [1 ]
Agadakos, Ioannis [1 ,3 ]
Oprea, Alina [1 ]
Robertson, William [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Army Cyber Inst, West Point, NY 10996 USA
[3] Amazon, Seattle, WA USA
Funding
US National Science Foundation
Keywords
binary analysis; disassembly; deep neural network; function boundary detection; CODE
DOI
10.1145/3607199.3607200
Chinese Library Classification
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
Binary analyses based on deep neural networks (DNNs), or neural binary analyses (NBAs), have become a hotly researched topic in recent years. DNNs have been wildly successful at pushing the performance and accuracy envelopes in the natural language and image processing domains. Thus, DNNs are highly promising for solving binary analysis problems that are hard due to a lack of complete information resulting from the lossy compilation process. Despite this promise, it is unclear that the prevailing strategy of repurposing embeddings and model architectures originally developed for other problem domains is sound given the adversarial contexts under which binary analysis often operates. In this paper, we empirically demonstrate that the current state of the art in neural function boundary detection is vulnerable to both inadvertent and deliberate adversarial attacks. We proceed from the insight that current generation NBAs are built upon embeddings and model architectures intended to solve syntactic problems. We devise a simple, reproducible, and scalable black-box methodology for exploring the space of inadvertent attacks - instruction sequences that could be emitted by common compiler toolchains and configurations - that exploits this syntactic design focus. We then show that these inadvertent misclassifications can be exploited by an attacker, serving as the basis for a highly effective black-box adversarial example generation process. We evaluate this methodology against two state-of-the-art neural function boundary detectors: XDA and DeepDi. We conclude with an analysis of the evaluation data and recommendations for how future research might avoid succumbing to similar attacks.
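To make the black-box setting concrete, the sketch below shows one plausible shape such an attack loop could take: enumerate short, compiler-plausible instruction sequences, splice each in front of a known function start, and query the detector until the (shifted) boundary is no longer reported. This is an illustrative reconstruction under stated assumptions, not the paper's implementation; `query_detector` (a wrapper around a model such as XDA or DeepDi returning predicted function-start offsets), the candidate byte encodings, and the search budget are all hypothetical.

```python
import itertools

# Illustrative x86-64 byte sequences a compiler toolchain could plausibly
# emit as padding or filler; NOT the paper's actual search space.
CANDIDATES = [
    b"\x90",          # nop
    b"\x66\x90",      # 2-byte nop (xchg ax, ax)
    b"\xcc",          # int3, a common inter-function padding byte
    b"\x48\x89\xc0",  # mov rax, rax (semantic no-op)
]

def perturb(code: bytes, func_off: int, prefix: bytes):
    """Splice `prefix` immediately before the function start, returning the
    mutated byte string and the function's shifted offset. A real mutation
    must also avoid breaking relative references elsewhere in the code."""
    return code[:func_off] + prefix + code[func_off:], func_off + len(prefix)

def find_evading_prefix(code: bytes, func_off: int, query_detector,
                        max_insns: int = 3):
    """Black-box, query-only search: return the first candidate sequence for
    which the detector no longer reports the (shifted) function start."""
    for n in range(1, max_insns + 1):
        for combo in itertools.product(CANDIDATES, repeat=n):
            prefix = b"".join(combo)
            mutated, new_off = perturb(code, func_off, prefix)
            if new_off not in query_detector(mutated):
                return prefix  # boundary suppressed: adversarial example
    return None  # no evasion found within the search budget
```

Because the loop only observes the detector's predictions, it requires no gradients or model internals, which is what makes the methodology black-box and scalable across different detectors.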
Pages: 1-16 (16 pages)
Related Papers
50 records in total
  • [41] Bhagoji, Arjun Nitin; He, Warren; Li, Bo; Song, Dawn. Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms. COMPUTER VISION - ECCV 2018, PT XII, 2018, 11216: 158-174.
  • [42] Wu, Xinghui; Ma, Shiqing; Shen, Chao; Lin, Chenhao; Wang, Qian; Li, Qi; Rao, Yuan. KENKU: Towards Efficient and Stealthy Black-box Adversarial Attacks against ASR Systems. PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023: 247-264.
  • [43] Cai, Kanting; Zhu, Xiangbin; Hu, Zhao-Long. Black-Box Reward Attacks Against Deep Reinforcement Learning Based on Successor Representation. IEEE ACCESS, 2022, 10: 51548-51560.
  • [44] Midtlid, Kim Andre; Asheim, Johannes; Li, Jingyue. Understanding Black-Box Attacks Against Object Detectors from a User's Perspective. QUALITY OF INFORMATION AND COMMUNICATIONS TECHNOLOGY, QUATIC 2022, 2022, 1621: 266-280.
  • [45] Ye, Jianbin; Lin, Fuqiang; Liu, Xiaoyuan; Liu, Bo. Your Voice is Not Yours? Black-Box Adversarial Attacks Against Speaker Recognition Systems. 2022 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM), 2022: 692-699.
  • [46] Jia, Jinyuan; Salem, Ahmed; Backes, Michael; Zhang, Yang; Gong, Neil Zhenqiang. MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples. PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019: 259-274.
  • [47] Gong, Xueluan; Chen, Yanjiao; Yang, Wenbin; Huang, Huayang; Wang, Qian. B3: Backdoor Attacks against Black-box Machine Learning Models. ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2023, 26 (04).
  • [48] Wei, Xingxing; Guo, Ying; Li, Bo. Black-box adversarial attacks by manipulating image attributes. INFORMATION SCIENCES, 2021, 550: 285-296.
  • [49] Jiang, Linxi; Ma, Xingjun; Chen, Shaoxiang; Bailey, James; Jiang, Yu-Gang. Black-box Adversarial Attacks on Video Recognition Models. PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019: 864-872.
  • [50] Kumar, K. Naveen; Vishnu, C.; Mitra, Reshmi; Mohan, C. Krishna. Black-box Adversarial Attacks in Autonomous Vehicle Technology. 2020 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR): TRUSTED COMPUTING, PRIVACY, AND SECURING MULTIMEDIA, 2020.