DeepEC: Adversarial attacks against graph structure prediction models

Cited by: 18
Authors
Xian, Xingping [1 ]
Wu, Tao [1 ]
Qiao, Shaojie [2 ]
Wang, Wei [3 ]
Wang, Chao [4 ]
Liu, Yanbing [5 ]
Xu, Guangxia [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Cybersecur & Informat Law, Chongqing, Peoples R China
[2] Chengdu Univ Informat Technol, Sch Software Engn, Chengdu, Peoples R China
[3] Sichuan Univ, Inst Cybersecur, Chengdu, Peoples R China
[4] Chongqing Univ, Inst Elect Engn, Chongqing, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Chongqing Engn Lab Internet & Informat Secur, Chongqing, Peoples R China
[6] Chongqing Univ Posts & Telecommun, Dept Software Engn, Chongqing, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Graph data; Adversarial attacks; Link prediction; Structural perturbation; Deep ensemble coding; LINK-PREDICTION; COMPLEX NETWORKS;
DOI
10.1016/j.neucom.2020.07.126
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Inspired by the practical importance of graph-structured data, link prediction, one of the most frequently applied tasks on graph data, has garnered considerable attention in recent years and has been widely applied in item recommendation, privacy inference attacks, knowledge graph completion, fraud detection, and other fields. However, recent studies show that machine learning-based intelligent systems are vulnerable to adversarial attacks, which has inspired much research on the security of machine learning in the contexts of computer vision, natural language processing, the physical world, and beyond. Nonetheless, the vulnerability of link prediction methods in the face of adversarial attacks remains poorly understood. To unveil their weaknesses and aid the development of robust link prediction methods, we propose a deep architecture-based adversarial attack method against link prediction, called Deep Ensemble Coding. In particular, based on the assumption that links play different structural roles in structural organization, we propose a deep linear coding-based structure enhancement mechanism to generate adversarial examples. We also empirically investigate other adversarial attack methods for graph data, including heuristic and evolutionary perturbation methods. Comprehensive experiments on various real-world networks show that the proposed adversarial attack method performs well against link prediction. Moreover, we observe that state-of-the-art link prediction algorithms are vulnerable to adversarial attacks and that, for adversarial defense, the attack can serve as a robustness evaluation in the construction of robust link prediction methods. (c) 2020 Elsevier B.V. All rights reserved.
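The abstract mentions heuristic structural perturbation as one of the baseline attack families it investigates. The following is a minimal, hypothetical sketch of that general setting, not the paper's Deep Ensemble Coding method: a greedy attacker deletes edges incident to the shared neighbors of a target node pair to lower the pair's Common Neighbors (CN) similarity score, which many unsupervised link predictors rely on. The graph representation, function names, and budget parameter are illustrative assumptions.

```python
# Hypothetical sketch of a heuristic perturbation attack on a
# Common Neighbors (CN) link predictor. Illustrative only; this is
# NOT the Deep Ensemble Coding method proposed in the paper.

def common_neighbors_score(adj, u, v):
    """CN similarity: number of neighbors shared by u and v."""
    return len(adj[u] & adj[v])

def heuristic_attack(adj, u, v, budget):
    """Greedily delete up to `budget` edges between u and its shared
    neighbors with v, lowering the CN score of the target pair."""
    removed = []
    for w in sorted(adj[u] & adj[v]):
        if len(removed) >= budget:
            break
        adj[u].discard(w)       # drop edge (u, w) from both endpoints
        adj[w].discard(u)
        removed.append((u, w))
    return removed

# Toy undirected graph as an adjacency-set dict.
adj = {
    0: {2, 3}, 1: {2, 3},       # target pair (0, 1) shares nodes 2 and 3
    2: {0, 1}, 3: {0, 1},
}
before = common_neighbors_score(adj, 0, 1)   # 2 shared neighbors
heuristic_attack(adj, 0, 1, budget=1)        # remove one edge
after = common_neighbors_score(adj, 0, 1)    # score drops to 1
print(before, after)
```

A real attacker would rank candidate perturbations by their effect on the predictor's output under a global budget; evolutionary variants instead search over sets of edge flips with a fitness function, which is the other baseline family the abstract names.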
Pages: 168-185
Page count: 18
Related papers
50 records in total
  • [1] Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks
    Wang, Haibo
    Zhou, Chuan
    Chen, Xin
    Wu, Jia
    Pan, Shirui
    Li, Zhao
    Wang, Jilong
    Yu, Philip S.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 6344 - 6357
  • [2] Adversarial Diffusion Attacks on Graph-Based Traffic Prediction Models
    Zhu, Lyuyi
    Feng, Kairui
    Pu, Ziyuan
    Ma, Wei
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (01) : 1481 - 1495
  • [3] DEFENDING GRAPH CONVOLUTIONAL NETWORKS AGAINST ADVERSARIAL ATTACKS
    Ioannidis, Vassilis N.
    Giannakis, Georgios B.
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8469 - 8473
  • [4] Robust Graph Convolutional Networks Against Adversarial Attacks
    Zhu, Dingyuan
    Zhang, Ziwei
    Cui, Peng
    Zhu, Wenwu
    KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 1399 - 1407
  • [5] A Dual Robust Graph Neural Network Against Graph Adversarial Attacks
    Tao, Qian
    Liao, Jianpeng
    Zhang, Enze
    Li, Lusi
    NEURAL NETWORKS, 2024, 175
  • [6] Robust Trajectory Prediction against Adversarial Attacks
    Cao, Yulong
    Xu, Danfei
    Weng, Xinshuo
    Mao, Z. Morley
    Anandkumar, Anima
    Xiao, Chaowei
    Pavone, Marco
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205 : 128 - 137
  • [7] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks
    Zhang, Xiang
    Zitnik, Marinka
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [8] Robust Heterogeneous Graph Neural Networks against Adversarial Attacks
    Zhang, Mengmei
    Wang, Xiao
    Zhu, Meiqi
    Shi, Chuan
    Zhang, Zhiqiang
    Zhou, Jun
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 4363 - 4370
  • [9] Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
    Tian, Hu
    Ye, Bowei
    Zheng, Xiaolong
    Wu, Desheng Dash
    IFAC PAPERSONLINE, 2020, 53 (05): : 420 - 425
  • [10] SECURITY OF FACIAL FORENSICS MODELS AGAINST ADVERSARIAL ATTACKS
    Huang, Rong
    Fang, Fuming
    Nguyen, Huy H.
    Yamagishi, Junichi
    Echizen, Isao
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 2236 - 2240