Evasion Attacks on Deep Learning-Based Helicopter Recognition Systems

Cited by: 0
Authors
Lee, Jun [1]
Kim, Taewan [2]
Bang, Seungho [3]
Oh, Sehong [4]
Kwon, Hyun [4]
Affiliations
[1] Hoseo Univ, Dept Game Software, Asan 31066, South Korea
[2] ROK Joint Chiefs of Staff, Seoul 04383, South Korea
[3] Hanwha Aerosp, Seoul 07345, South Korea
[4] Korea Mil Acad, Dept Artificial Intelligence & Data Sci, Seoul 01805, South Korea
Funding
National Research Foundation of Singapore
Keywords
NEURAL-NETWORKS
DOI
10.1155/2024/1124598
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Identifying objects in surveillance and reconnaissance systems with the human eye can be challenging, underscoring the growing importance of deep learning models for recognizing enemy weapon systems. Such systems, which leverage deep neural networks known for their strong performance in image recognition and classification, are under extensive research. However, surveillance and reconnaissance systems built on deep neural networks are vulnerable to adversarial examples. While prior adversarial example research has mainly used publicly available internet data, studies of adversarial attacks on data and models specific to real military scenarios have been largely absent. In this paper, we introduce an adversarial example designed for a binary classifier that recognizes helicopters. Our approach generates an adversarial example that the model misclassifies even though it appears unproblematic to the human eye. For our experiments, we gathered images of real attack and transport helicopters and used TensorFlow as the machine learning library. Our results show that the proposed method achieves an average attack success rate of 81.9%, reaching 90.1% when epsilon is 0.4. The attack success rate rises rapidly as epsilon increases toward 0.4 and grows only gradually thereafter.
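The abstract does not state which method generates the perturbation, but a single epsilon parameter bounding its magnitude is characteristic of the fast gradient sign method (FGSM). The following is a minimal sketch of such an epsilon-bounded attack in TensorFlow (the library the authors report using); the function name, input shapes, and two-class one-hot labels are illustrative assumptions, not the paper's actual implementation.

    import tensorflow as tf

    def fgsm_example(model, image, label, epsilon=0.4):
        # Hypothetical setup: model is a Keras binary classifier
        # (e.g., attack vs. transport helicopter) with softmax output.
        # image: float32 tensor in [0, 1], shape (1, H, W, 3)
        # label: one-hot tensor, shape (1, 2)
        image = tf.convert_to_tensor(image)
        with tf.GradientTape() as tape:
            tape.watch(image)
            prediction = model(image)
            loss = tf.keras.losses.categorical_crossentropy(label, prediction)
        # Move every pixel by epsilon in the direction that increases the loss.
        gradient = tape.gradient(loss, image)
        adversarial = image + epsilon * tf.sign(gradient)
        # Clip so the perturbed input remains a valid image.
        return tf.clip_by_value(adversarial, 0.0, 1.0)

Consistent with the reported results, raising epsilon toward 0.4 makes misclassification more likely, at the cost of a perturbation that becomes easier for the human eye to notice.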
Pages: 9
Related Papers (showing 10 of 50)
  • [1] Invisible Adversarial Attacks on Deep Learning-Based Face Recognition Models
    Lin, Chih-Yang
    Chen, Feng-Jie
    Ng, Hui-Fuang
    Lin, Wei-Yang
    IEEE ACCESS, 2023, 11: 51567-51577
  • [2] Adversarial Attacks on Deep Learning-Based UAV Navigation Systems
    Mynuddin, Mohammed
    Khan, Sultan Uddin
    Mahmoud, Nabil Mahmoud
    Alsharif, Ahmad
    2023 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY (CNS), 2023
  • [3] Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses
    Deng, Yao
    Zhang, Tiehua
    Lou, Guannan
    Zheng, Xi
    Jin, Jiong
    Han, Qing-Long
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17(12): 7897-7912
  • [4] Countering Evasion Attacks for Smart Grid Reinforcement Learning-Based Detectors
    El-Toukhy, Ahmed T.
    Mahmoud, Mohamed M. E. A.
    Bondok, Atef H.
    Fouda, Mostafa M.
    Alsabaan, Maazen
    IEEE ACCESS, 2023, 11: 97373-97390
  • [5] Exploring Data and Model Poisoning Attacks to Deep Learning-Based NLP Systems
    Marulli, Fiammetta
    Verde, Laura
    Campanile, Lelio
    KNOWLEDGE-BASED AND INTELLIGENT INFORMATION & ENGINEERING SYSTEMS (KES 2021), 2021, 192: 3570-3579
  • [6] Deep Learning-Based Automatic Modulation Recognition in OTFS and OFDM systems
    Zhou, Jinggan
    Liao, Xuewen
    Gao, Zhenzhen
    2023 IEEE 97TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2023-SPRING), 2023
  • [7] African foods for deep learning-based food recognition systems dataset
    Ataguba, Grace
    Ezekiel, Rock
    Daniel, James
    Ogbuju, Emeka
    Orji, Rita
    DATA IN BRIEF, 2024, 53
  • [8] Evasion and Causative Attacks with Adversarial Deep Learning
    Shi, Yi
    Sagduyu, Yalin E.
    MILCOM 2017 - 2017 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM), 2017: 243-248
  • [9] Deep Learning-based Attacks on Masked AES Implementation
    Bae, Daehyeon
    Hwang, Jongbae
    Ha, Jaecheol
    JOURNAL OF INTERNET TECHNOLOGY, 2022, 23(4): 897-902
  • [10] How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers?
    Wang, Su
    Sahay, Rajeev
    Brinton, Christopher G.
    ICC 2023 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023: 2376-2381