Adversarial Attacks on Deep Neural Networks Based Modulation Recognition

Cited by: 1
Authors
Liu, Mingqian [1 ]
Zhang, Zhenju [1 ]
Zhao, Nan [2 ]
Chen, Yunfei [3 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[2] Dalian Univ Technol, Sch Informat & Commun Engn, Dalian 116024, Peoples R China
[3] Univ Warwick, Sch Engn, Coventry CV4 7AL, W Midlands, England
Funding
National Natural Science Foundation of China;
Keywords
Adversarial attacks; adversarial examples; deep neural network; modulation recognition;
DOI
10.1109/INFOCOMWKSHPS54753.2022.9798389
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Modulation recognition models based on deep neural networks (DNNs) offer automatic feature extraction, fast recognition, and high accuracy. However, owing to their limited interpretability, DNN models are vulnerable to adversarial examples crafted by attackers. Most existing research focuses on the recognition accuracy of these models while ignoring the serious threat that adversarial examples pose to their safety and reliability. In the field of modulation recognition, many existing attack methods perform well against simple neural networks but poorly against more complex DNNs. Therefore, this paper proposes an adversarial attack method based on dynamic iteration. The proposed method uses an iteration step size that changes over the iterations instead of remaining fixed. Simulation results show that, for a specified perturbation level, the proposed attack outperforms traditional attack methods.
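The abstract describes an iterative gradient-based attack whose step size varies across iterations rather than staying fixed. A minimal sketch of that idea, assuming a PGD-style L-infinity attack in PyTorch with a hypothetical decaying step schedule (the paper's exact schedule, model, and signal representation are not given in this record):

```python
import torch
import torch.nn.functional as F

def dynamic_step_attack(model, x, y, eps=0.05, num_iters=10):
    """
    Illustrative L-infinity iterative attack whose per-iteration step size
    shrinks as the iterations progress instead of remaining fixed.
    The decay schedule and default parameters are assumptions for this sketch.
    """
    x_adv = x.clone().detach()
    for t in range(num_iters):
        # Hypothetical dynamic step: larger early steps, smaller later ones.
        alpha = 2.0 * eps / num_iters * (1.0 - t / num_iters)
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # gradient-sign ascent on the loss
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # project back into the eps-ball around x
        x_adv = x_adv.detach()
    return x_adv
```

Here `model` would be a trained modulation recognition network, `x` a batch of received I/Q samples, and `y` the true modulation labels; the fixed-step baseline (e.g., I-FGSM) is recovered by setting `alpha` to a constant.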
Pages: 6