A knowledge distillation strategy for enhancing the adversarial robustness of lightweight automatic modulation classification models

Cited: 0
Authors
Xu, Fanghao [1 ]
Wang, Chao [1 ]
Liang, Jiakai [1 ]
Zuo, Chenyang [1 ]
Yue, Keqiang [1 ,2 ]
Li, Wenjun [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Zhejiang Integrated Circuits & Intelligent Hardware, Hangzhou, Peoples R China
[2] Hangzhou Dianzi Univ, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
cognitive radio; wireless channels; SIGNAL
DOI
10.1049/cmu2.12793
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Automatic modulation classification models based on deep learning are at risk of being compromised by adversarial attacks, in which an attacker causes the classification model to misclassify the received signal by adding a carefully crafted adversarial perturbation to the transmitted signal. Motivated by the requirements of efficient computation and edge deployment, a lightweight automatic modulation classification model is proposed. Because the lightweight model is more susceptible to adversarial attacks, and because adversarial training of the lightweight model alone fails to achieve the desired results, an adversarial attack defense system for the lightweight automatic modulation classification model is further proposed, which enhances its robustness when subjected to adversarial attacks. The defense method transfers adversarial robustness from a trained large automatic modulation classification model to the lightweight model through adversarial robust distillation. In white-box attack scenarios, the proposed method exhibits better adversarial robustness than current defense techniques for feature-fusion-based automatic modulation classification models.
Pages: 827-845
Number of pages: 19