HFAD: Homomorphic Filtering Adversarial Defense Against Adversarial Attacks in Automatic Modulation Classification

Cited by: 0
Authors
Zhang, Sicheng [1 ]
Lin, Yun [1 ]
Yu, Jiarun [1 ]
Zhang, Jianting [2 ]
Xuan, Qi [3 ]
Xu, Dongwei [3 ]
Wang, Juzhen [4 ]
Wang, Meiyu [5 ]
Affiliations
[1] Harbin Engn Univ, Coll Informat & Commun Engn, Harbin 150000, Peoples R China
[2] China Peoples Liberat Army Gen Equipment Dept, Unit Peoples Liberat Army China 91977, Beijing 100036, Peoples R China
[3] Zhejiang Univ Technol, Inst Cyberspace Secur, Hangzhou 310023, Peoples R China
[4] Wuhan Univ, Sch Elect Informat, Wuhan 430072, Peoples R China
[5] Hangzhou Dianzi Univ, Coll Commun Engn, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Automatic modulation classification; adversarial attacks; adversarial defense; frequency domain; homomorphic filtering;
DOI
10.1109/TCCN.2024.3360514
CLC Number
TN [Electronic technology, communication technology];
Discipline Code
0809;
Abstract
Deep neural networks provide intelligent solutions for Automatic Modulation Classification (AMC) tasks in communication systems. However, their limited interpretability makes them susceptible to adversarial examples, which induce anomalous decisions. Emerging studies suggest that the high-frequency components of signals are a fundamental source of this adversarial vulnerability. To address this issue, this paper introduces a Homomorphic Filtering Adversarial Defense (HFAD) algorithm that defends against adversarial examples by filtering the signal in the frequency domain. Homomorphic filtering attenuates the signal's high-frequency components, reducing the errors that adversarial perturbations induce in model outputs and thereby improving the security and reliability of the AMC model. Robustness is further enhanced by integrating HFAD with data-augmentation strategies. Experimental results demonstrate that the proposed defense not only maintains high signal-recognition accuracy but also preserves communication signal transmission quality. Moreover, HFAD withstands a wide range of white-box adversarial attacks, is resilient against black-box attacks, and exhibits strong transfer performance, enhancing the robustness of the AMC model against adversarial examples.
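The classical homomorphic-filtering pipeline the abstract alludes to (log transform, frequency-domain low-pass, exponentiation) can be sketched as follows. This is a minimal illustrative sketch of the general technique, not the paper's actual implementation: the function name, the ideal low-pass mask, and the `cutoff` parameter are hypothetical choices, and the log is applied to the signal's magnitude envelope (with the original phase reattached) since the logarithm requires positive input.

```python
import numpy as np

def homomorphic_lowpass(x, cutoff=0.1, eps=1e-8):
    """Illustrative homomorphic low-pass filter for a complex baseband signal.

    Steps: log of the magnitude envelope -> FFT -> attenuate components
    above `cutoff` (fraction of the sampling rate) -> inverse FFT -> exp.
    The original phase is reattached, so only the envelope is smoothed.
    """
    log_mag = np.log(np.abs(x) + eps)          # log transform (eps avoids log(0))
    spectrum = np.fft.fft(log_mag)
    freqs = np.fft.fftfreq(x.shape[-1])        # normalized frequency bins
    mask = (np.abs(freqs) <= cutoff).astype(float)  # ideal low-pass mask
    smoothed = np.fft.ifft(spectrum * mask).real    # filtered log-envelope
    return np.exp(smoothed) * np.exp(1j * np.angle(x))

# Example: smooth the envelope of a noisy complex exponential.
rng = np.random.default_rng(0)
n = 256
clean = np.exp(2j * np.pi * 0.02 * np.arange(n))
perturbed = clean + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
filtered = homomorphic_lowpass(perturbed, cutoff=0.1)
```

Because the multiplicative envelope becomes additive under the logarithm, a linear low-pass filter in that domain suppresses rapid (high-frequency) envelope fluctuations, which is where adversarial perturbations are argued to concentrate.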
Pages: 880-892 (13 pages)