SafeAMC: Adversarial training for robust modulation classification models

Cited by: 0
Authors
Maroto, Javier [1 ]
Bovet, Gerome [2 ]
Frossard, Pascal [1 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, Signal Proc Lab LTS4, Lausanne, Switzerland
[2] Cyber Def Campus, Armasuisse Sci & Technol, Zurich, Switzerland
Keywords
Modulation classification; robustness; adversarial training; deep learning; security
DOI: not available
Chinese Library Classification: O42 [Acoustics]
Subject Classification Codes: 070206; 082403
Abstract
In communication systems there are many tasks, such as modulation classification, for which Deep Neural Networks (DNNs) have obtained promising performance. However, these models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification. This raises questions not only about security but also about the general trust in model predictions. We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation classification (AMC) models. We show that current state-of-the-art models can effectively benefit from adversarial training, which mitigates the robustness issues for some families of modulations. We use adversarial perturbations to visualize the learned features, and we find that, when adversarial training is enabled, the signal symbols are shifted towards the nearest classes in constellation space, much as maximum likelihood methods would shift them. This confirms that robust models are not only more secure but also more interpretable, basing their decisions on signal statistics that are actually relevant to modulation classification.
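The fine-tuning procedure described in the abstract can be sketched as follows. This is a minimal illustration, assuming a PyTorch classifier over batches of I/Q samples and a PGD-style attack; the function names and all hyperparameters (eps, alpha, steps, lr) are placeholders chosen for illustration, not settings taken from the paper.

```python
# Minimal sketch of adversarial fine-tuning for an AMC model, assuming a
# PyTorch classifier over batches of I/Q samples. The PGD attack and all
# hyperparameters below are illustrative assumptions, not the paper's values.
import torch
import torch.nn.functional as F

def pgd_perturbation(model, x, y, eps=0.01, alpha=0.0025, steps=7):
    """Craft an L-infinity bounded additive perturbation that increases the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        # Gradient-ascent step on delta, projected back into the eps-ball.
        delta.data = (delta.data + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

def adversarial_finetune(model, loader, epochs=5, lr=1e-4):
    """Fine-tune a pretrained AMC model on adversarially perturbed inputs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:  # x: I/Q signal batch, y: modulation labels
            delta = pgd_perturbation(model, x, y)
            opt.zero_grad()  # discard gradients accumulated while crafting delta
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            opt.step()
```

Under these assumptions, the same pgd_perturbation routine also yields the kind of perturbations the authors use to visualize learned features: plotting x + delta in constellation space shows how symbols are displaced towards neighboring classes.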
Pages: 1636-1640
Page count: 5
Related Papers (50 in total)
  • [1] Robust Automatic Modulation Classification in the Presence of Adversarial Attacks
    Sahay, Rajeev
    Love, David J.
    Brinton, Christopher G.
    [J]. 2021 55TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2021.
  • [2] Effects of Adversarial Training on the Safety of Classification Models
    Kim, Handong
    Han, Jongdae
    [J]. SYMMETRY-BASEL, 2022, 14 (07).
  • [3] A Pruning Method Combined with Resilient Training to Improve the Adversarial Robustness of Automatic Modulation Classification Models
    Han, Chao
    Wang, Linyuan
    Li, Dongyang
    Cui, Weijia
    Yan, Bin
    [J]. MOBILE NETWORKS & APPLICATIONS, 2024.
  • [4] Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
    Wang, Jianyu
    Zhang, Haichao
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6628 - 6637
  • [5] Blind Adversarial Training: Towards Comprehensively Robust Models Against Blind Adversarial Attacks
    Xie, Haidong
    Xiang, Xueshuang
    Dong, Bin
    Liu, Naijin
    [J]. ARTIFICIAL INTELLIGENCE, CICAI 2023, PT II, 2024, 14474 : 15 - 26
  • [6] Accelerate adversarial training with loss guided propagation for robust image classification
    Xu, Changkai
    Zhang, Chunjie
    Yang, Yanwu
    Yang, Huaizhi
    Bo, Yijun
    Li, Danyong
    Zhang, Riquan
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (01)
  • [7] On the limitations of adversarial training for robust image classification with convolutional neural networks
    Carletti, Mattia
    Sinigaglia, Erto
    Terzi, Matteo
    Susto, Gian Antonio
    [J]. INFORMATION SCIENCES, 2024, 675
  • [8] Semisupervised Radar Intrapulse Signal Modulation Classification With Virtual Adversarial Training
    Cai, Jingjing
    He, Minghao
    Cao, Xianghai
    Gan, Fengming
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (06) : 9929 - 9940
  • [9] Adversarial training for signal modulation classification based on Ulam stability theory
    Yan, Kun
    Ren, Wenjuan
    Yang, Zhanpeng
    [J]. DIGITAL SIGNAL PROCESSING: A REVIEW JOURNAL, 2024, 153
  • [10] Toward Robust Networks against Adversarial Attacks for Radio Signal Modulation Classification
    Manoj, B. R.
    Santos, Pablo Millan
    Sadeghi, Meysam
    Larsson, Erik G.
    [J]. 2022 IEEE 23RD INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATION (SPAWC), 2022.