DeepEC: Adversarial attacks against graph structure prediction models

Cited by: 18
Authors
Xian, Xingping [1 ]
Wu, Tao [1 ]
Qiao, Shaojie [2 ]
Wang, Wei [3 ]
Wang, Chao [4 ]
Liu, Yanbing [5 ]
Xu, Guangxia [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Cybersecur & Informat Law, Chongqing, Peoples R China
[2] Chengdu Univ Informat Technol, Sch Software Engn, Chengdu, Peoples R China
[3] Sichuan Univ, Inst Cybersecur, Chengdu, Peoples R China
[4] Chongqing Univ, Inst Elect Engn, Chongqing, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Chongqing Engn Lab Internet & Informat Secur, Chongqing, Peoples R China
[6] Chongqing Univ Posts & Telecommun, Dept Software Engn, Chongqing, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Graph data; Adversarial attacks; Link prediction; Structural perturbation; Deep ensemble coding; LINK-PREDICTION; COMPLEX NETWORKS;
DOI
10.1016/j.neucom.2020.07.126
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Inspired by the practical importance of graph-structured data, link prediction, one of the most frequently applied tasks on graph data, has garnered considerable attention in recent years, and it has been widely applied in item recommendation, privacy inference attack, knowledge graph completion, fraud detection, and other fields. However, recent studies show that machine learning-based intelligent systems are vulnerable to adversarial attacks, which has inspired much research on the security problems of machine learning in the context of computer vision, natural language processing, the physical world, etc. Nonetheless, there is a lack of understanding of the vulnerability of link prediction methods in the face of adversarial attacks. To unveil the weaknesses and aid in the development of robust link prediction methods, we propose a deep architecture-based adversarial attack method, called Deep Ensemble Coding, against link prediction. In particular, based on the assumption that links play different structural roles in structure organization, we propose a deep linear coding-based structure enhancement mechanism to generate adversarial examples. We also empirically investigate other adversarial attack methods for graph data, including heuristic and evolutionary perturbation methods. Based on comprehensive experiments conducted on various real-world networks, we conclude that the proposed adversarial attack method has satisfactory performance for link prediction. Moreover, we observe that state-of-the-art link prediction algorithms are vulnerable to adversarial attacks and, for adversarial defense, the attack can be viewed as a robustness evaluation for the construction of robust link prediction methods.
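To make the attack setting concrete: a minimal sketch, not the paper's Deep Ensemble Coding method, of a heuristic structural perturbation against a common-neighbors link predictor. The toy graph, the target pair, and the edge-selection rule are all illustrative assumptions; the idea shown is only that deleting a few well-chosen edges lowers the predicted score of a target link.

```python
def common_neighbors_score(adj, u, v):
    """Number of shared neighbors of u and v (a classic link-prediction score)."""
    return len(adj[u] & adj[v])

def remove_edge(adj, a, b):
    """Delete an undirected edge from the adjacency structure."""
    adj[a].discard(b)
    adj[b].discard(a)

# Toy undirected graph as an adjacency dict of neighbor sets (illustrative).
adj = {
    0: {2, 3},
    1: {2, 3, 4},
    2: {0, 1},
    3: {0, 1},
    4: {1},
}

target = (0, 1)
before = common_neighbors_score(adj, *target)   # shared neighbors {2, 3} -> 2

# Heuristic perturbation: remove one edge incident to a shared neighbor,
# hiding the target link from the predictor with a minimal structural change.
shared = adj[target[0]] & adj[target[1]]
victim = min(shared)                            # pick shared neighbor 2
remove_edge(adj, target[0], victim)

after = common_neighbors_score(adj, *target)    # shared neighbors {3} -> 1
print(before, after)                            # prints "2 1"
```

Evolutionary variants of such attacks search over many candidate edge flips instead of using a single greedy rule, but the objective, degrading the target score under a perturbation budget, is the same.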
Pages: 168-185
Page count: 18
Related Papers
50 records
  • [31] Adversarial Attacks on Deep Graph Matching
    Zhang, Zijie
    Zhang, Zeru
    Zhou, Yang
    Shen, Yelong
    Jin, Ruoming
    Dou, Dejing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [32] NetFense: Adversarial Defenses Against Privacy Attacks on Neural Networks for Graph Data
    Hsieh, I-Chung
    Li, Cheng-Te
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 796 - 809
  • [33] Defending against adversarial attacks on graph neural networks via similarity property
    Yao, Minghong
    Yu, Haizheng
    Bian, Hong
    AI COMMUNICATIONS, 2023, 36 (01) : 27 - 39
  • [34] Causal Robust Trajectory Prediction Against Adversarial Attacks for Autonomous Vehicles
    Duan, Ang
    Wang, Ruyan
    Cui, Yaping
    He, Peng
    Chen, Luo
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (22): : 35762 - 35776
  • [35] Black-Box Adversarial Attacks against Audio Forensics Models
    Jiang, Yi
    Ye, Dengpan
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [36] A Systematic Evaluation of Adversarial Attacks against Speech Emotion Recognition Models
    Facchinetti, Nicolas
    Simonetta, Federico
    Ntalampiras, Stavros
    INTELLIGENT COMPUTING, 2024, 3
  • [37] Defending malware detection models against evasion based adversarial attacks
    Rathore, Hemant
    Sasan, Animesh
    Sahay, Sanjay K.
    Sewak, Mohit
    PATTERN RECOGNITION LETTERS, 2022, 164 : 119 - 125
  • [38] Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
    Wang, Jianyu
    Zhang, Haichao
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6628 - 6637
  • [39] A Multi-View Graph Contrastive Learning Framework for Defending Against Adversarial Attacks
    Cao, Feilong
    Ye, Xing
    Ye, Hailiang
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (06): : 1 - 11
  • [40] Value at Adversarial Risk: A Graph Defense Strategy against Cost-Aware Attacks
    Liao, Junlong
    Fu, Wenda
    Wang, Cong
    Wei, Zhongyu
    Xu, Jiarong
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024, : 13763 - 13771