Adaptive Normalized Attacks for Learning Adversarial Attacks and Defenses in Power Systems

Cited by: 5
Authors
Tian, Jiwei [1 ]
Li, Tengyao [1 ]
Shang, Fute [1 ]
Cao, Kunrui [2 ]
Li, Jing [3 ]
Ozay, Mete
Affiliations
[1] Air Force Engn Univ, Informat & Nav Coll, Xian, Peoples R China
[2] Natl Univ Def Technol, Sch Informat & Commun, Xian, Peoples R China
[3] Henan Univ Technol, Sch Design & Art, Zhengzhou, Peoples R China
DOI
10.1109/smartgridcomm.2019.8909713
CLC Classification
TP3 [Computing Technology; Computer Technology]
Subject Classification
0812
Abstract
The vulnerability of various machine learning methods to adversarial examples has recently been explored in the literature. Power systems that rely on these vulnerable methods face a serious threat from adversarial examples. To this end, we first propose a more accurate and computationally efficient method, called Adaptive Normalized Attack (ANA), to attack power systems by generating adversarial examples. We then adopt adversarial training to defend against such attacks. Experimental analyses demonstrate that our attack method requires smaller perturbations than the state-of-the-art FGSM (Fast Gradient Sign Method) and DeepFool, while achieving a higher misclassification rate when attacking learning methods in power systems. In addition, the results show that the proposed adversarial training improves the robustness of power systems to adversarial examples compared with state-of-the-art methods.
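To illustrate the kind of gradient-based attack the abstract compares against, the following is a minimal sketch of FGSM applied to a toy logistic-regression classifier. The model, weights, input, and epsilon are all hypothetical illustrations; the paper's ANA method and its power-system measurement data are not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the direction of the sign of the loss gradient.

    Uses the cross-entropy loss of a logistic model p = sigmoid(w.x + b),
    whose gradient with respect to x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(loss)/dx for sigmoid + cross-entropy
    return x + eps * np.sign(grad_x)

# Toy example: a point initially classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.2])          # w @ x + b = 1.0 > 0 -> class 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.6)
print(sigmoid(w @ x + b) > 0.5)       # True: original prediction is class 1
print(sigmoid(w @ x_adv + b) > 0.5)   # False: perturbation flips the label
```

FGSM uses the gradient's sign, so every input dimension is perturbed by the full budget eps; the abstract's claim is that ANA flips the label with smaller perturbations than this uniform sign step.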
Pages: 6
Related Papers (50 total)
  • [1] Tramer, Florian; Carlini, Nicholas; Brendel, Wieland; Madry, Aleksander. On Adaptive Attacks to Adversarial Example Defenses. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020, 33.
  • [2] Ren, Kui; Zheng, Tianhang; Qin, Zhan; Liu, Xue. Adversarial Attacks and Defenses in Deep Learning. Engineering, 2020, 6(3): 346-360.
  • [3] Yao, Chengyuan; Bielik, Pavol; Tsankov, Petar; Vechev, Martin. Automated Discovery of Adaptive Attacks on Adversarial Defenses. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
  • [4] Li, Minghui; Jiang, Peipei; Wang, Qian; Shen, Chao; Li, Qi. Adversarial Attacks and Defenses for Deep Learning Models. Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58(5): 909-926.
  • [5] Yuan, Xiaoyong; He, Pan; Zhu, Qile; Li, Xiaolin. Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(9): 2805-2824.
  • [6] Liu, Ai-Shan; Guo, Jun; Li, Si-Min; Xiao, Yi-Song; Liu, Xiang-Long; Tao, Da-Cheng. A Survey on Adversarial Attacks and Defenses for Deep Reinforcement Learning. Jisuanji Xuebao/Chinese Journal of Computers, 2023, 46(8): 1553-1576.
  • [7] Lan, Jiahe; Zhang, Rui; Yan, Zheng; Wang, Jie; Chen, Yu; Hou, Ronghui. Adversarial Attacks and Defenses in Speaker Recognition Systems: A Survey. Journal of Systems Architecture, 2022, 127.
  • [8] Chen, Zesheng; Li, Jack; Chen, Chao. Ensemble Adversarial Defenses and Attacks in Speaker Verification Systems. IEEE Internet of Things Journal, 2024, 11(20): 32645-32655.
  • [9] Zhou, Shuai; Liu, Chi; Ye, Dayong; Zhu, Tianqing; Zhou, Wanlei; Yu, Philip S. Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity. ACM Computing Surveys, 2023, 55(8).
  • [10] Wang, Jia; Wang, Chengyu; Lin, Qiuzhen; Luo, Chengwen; Wu, Chao; Li, Jianqiang. Adversarial Attacks and Defenses in Deep Learning for Image Recognition: A Survey. Neurocomputing, 2022, 514: 162-181.