Deep Learning Models as Moving Targets to Counter Modulation Classification Attacks

Cited by: 0
Authors:
Hoque, Naureen [1]
Rahbari, Hanif [1]
Affiliation:
[1] Rochester Inst Technol, Rochester, NY 14623 USA
Keywords:
Moving target defense; modulation classification
DOI:
10.1109/INFOCOM52122.2024.10621413
CLC number:
TP3 [Computing technology, computer technology]
Discipline code:
0812
Abstract
Malicious entities abuse advanced modulation classification (MC) techniques to launch traffic-analysis, selective-jamming, evasion, and poisoning attacks. Recent studies show that current defenses against such attacks are static in nature and thus vulnerable to persistent adversaries who invest time and resources in learning the defenses, enabling them to design and execute more sophisticated attacks that circumvent them. In this paper, we present a moving-target defense framework supporting a novel modulation-masking mechanism we develop against advanced and persistent MC attacks. The modulated symbols are first masked with small perturbations so that, to an adversary uncertain about the defender's model, they appear to come from a different modulation scheme. By deploying a pool of deep learning models and perturbation-generating techniques, our defense strategy keeps changing (moving) them as needed, making it difficult (cubic time complexity) for adversaries to keep up with the evolving defense system over time. We show that overall system performance remains unaffected under our technique. We further demonstrate that, over time, a persistent adversary can learn and eventually circumvent our masking technique, along with other existing defenses, unless a moving-target defense approach is adopted.
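To make the masking idea from the abstract concrete, the following is a minimal, hypothetical sketch and not the paper's actual algorithm: QPSK symbols are nudged by a bounded perturbation toward the nearest point of an 8-PSK constellation, so that a classifier observing the masked symbols is biased toward the wrong scheme while the perturbation budget `eps` (an illustrative assumption, as are all names here) stays small enough to preserve the legitimate receiver's demodulation margin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference constellations (unit-energy PSK points).
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
psk8 = np.exp(1j * (np.pi / 8 + np.pi / 4 * np.arange(8)))

def mask_symbols(symbols, target_constellation, eps=0.15):
    """Nudge each symbol toward its nearest point in a *different*
    constellation by a perturbation of magnitude at most eps.

    This is an illustrative masking rule, not the paper's method."""
    sym = np.asarray(symbols)
    # Distance from every symbol to every target constellation point.
    d = np.abs(sym[:, None] - target_constellation[None, :])
    nearest = target_constellation[np.argmin(d, axis=1)]
    direction = nearest - sym
    norm = np.abs(direction)
    norm[norm == 0] = 1.0  # avoid division by zero if already on a point
    # Unit-direction step of size eps toward the masking constellation.
    return sym + eps * direction / norm

tx = qpsk[rng.integers(0, 4, size=1000)]      # random QPSK frame
masked = mask_symbols(tx, psk8)               # symbols now lean toward 8-PSK
print(np.max(np.abs(masked - tx)))            # perturbation bounded by eps
```

In a moving-target deployment, the defender would periodically swap both the target constellation and the perturbation generator from a pool, so that an adversary who profiles one masking configuration sees a different one by the time an attack is mounted.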
Pages: 1601-1610 (10 pages)
Related Papers (50 in total)
  • [1] Ke, Da; Wang, Xiang; Huang, Kaizhu; Wang, Haoyuan; Huang, Zhitao. Minimum Power Adversarial Attacks in Communication Signal Modulation Classification with Deep Learning. COGNITIVE COMPUTATION, 2023, 15 (02): 580-589
  • [2] Li, Xiong; Rao, Wengui; Chen, Shaoping. Generative UAP attacks against deep-learning based modulation classification. IET COMMUNICATIONS, 2023, 17 (09): 1091-1102
  • [3] Pestana, Camilo; Akhtar, Naveed; Liu, Wei; Glance, David; Mian, Ajmal. Adversarial Attacks and Defense on Deep Learning Classification Models using YCbCr Color Images. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [4] Alnajjar, Khawla A.; Ghunaim, Sara; Ansari, Sam. Automatic Modulation Classification in Deep Learning. 2022 5TH INTERNATIONAL CONFERENCE ON COMMUNICATIONS, SIGNAL PROCESSING, AND THEIR APPLICATIONS (ICCSPA), 2022
  • [5] Li, M.; Jiang, P.; Wang, Q.; Shen, C.; Li, Q. Adversarial Attacks and Defenses for Deep Learning Models. Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (05): 909-926
  • [6] Boonchai, Jirasin; Kitchat, Kotcharat; Nonsiri, Sarayut. The Classification of DDoS Attacks Using Deep Learning Techniques. 2022 7TH INTERNATIONAL CONFERENCE ON BUSINESS AND INDUSTRIAL RESEARCH (ICBIR2022), 2022: 544-550
  • [7] Rahman, Mafizur; Roy, Prosenjit; Frizell, Sherri S.; Qian, Lijun. Evaluating Pretrained Deep Learning Models for Image Classification Against Individual and Ensemble Adversarial Attacks. IEEE ACCESS, 2025, 13: 35230-35242
  • [8] Lu, Keyu; Qian, Zhisheng; Wang, Manxi; Wang, Dewang; Ma, Pengfei. Shift-invariant universal adversarial attacks to avoid deep-learning-based modulation classification. INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2023, 36 (10)
  • [9] Eyiokur, Fevziye Irem; Yaman, Dogucan; Ekenel, Hazim Kemal. Sketch Classification with Deep Learning Models. 2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2018