DeepEC: Adversarial attacks against graph structure prediction models

Cited: 18
|
Authors
Xian, Xingping [1 ]
Wu, Tao [1 ]
Qiao, Shaojie [2 ]
Wang, Wei [3 ]
Wang, Chao [4 ]
Liu, Yanbing [5 ]
Xu, Guangxia [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Cybersecur & Informat Law, Chongqing, Peoples R China
[2] Chengdu Univ Informat Technol, Sch Software Engn, Chengdu, Peoples R China
[3] Sichuan Univ, Inst Cybersecur, Chengdu, Peoples R China
[4] Chongqing Univ, Inst Elect Engn, Chongqing, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Chongqing Engn Lab Internet & Informat Secur, Chongqing, Peoples R China
[6] Chongqing Univ Posts & Telecommun, Dept Software Engn, Chongqing, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Graph data; Adversarial attacks; Link prediction; Structural perturbation; Deep ensemble coding; LINK-PREDICTION; COMPLEX NETWORKS;
DOI
10.1016/j.neucom.2020.07.126
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Inspired by the practical importance of graph-structured data, link prediction, one of the most frequently applied tasks on graph data, has garnered considerable attention in recent years and has been widely applied in item recommendation, privacy inference attacks, knowledge graph completion, fraud detection, and other fields. However, recent studies show that machine learning-based intelligent systems are vulnerable to adversarial attacks, which has inspired much research on the security problems of machine learning in the context of computer vision, natural language processing, the physical world, etc. Nonetheless, there is a lack of understanding of the vulnerability of link prediction methods in the face of adversarial attacks. To unveil their weaknesses and aid the development of robust link prediction methods, we propose a deep architecture-based adversarial attack method against link prediction, called Deep Ensemble Coding. In particular, based on the assumption that links play different structural roles in structure organization, we propose a deep linear coding-based structure enhancement mechanism to generate adversarial examples. We also empirically investigate other adversarial attack methods for graph data, including heuristic and evolutionary perturbation methods. Based on comprehensive experiments conducted on various real-world networks, we conclude that the proposed adversarial attack method performs well against link prediction. Moreover, we observe that state-of-the-art link prediction algorithms are vulnerable to adversarial attacks and that, for adversarial defense, the attack can be viewed as a robustness evaluation for the construction of robust link prediction methods.
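The structural-perturbation idea the abstract describes can be illustrated on a toy example. The sketch below is not the paper's DeepEC method; it is a minimal, hypothetical demonstration of how deleting a single edge can suppress a common-neighbors link-prediction score for a target node pair, which is the general flavor of adversarial attacks on link prediction.

```python
# Illustrative sketch only (not the DeepEC algorithm): a greedy single-edge
# deletion attack against the common-neighbors link-prediction heuristic.

def build_adj(edges, nodes):
    """Build an undirected adjacency map {node: set_of_neighbors}."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def common_neighbors_score(adj, u, v):
    """Link-prediction score: number of neighbors shared by u and v."""
    return len(adj[u] & adj[v])

def best_edge_deletion(edges, nodes, u, v):
    """Return the single edge whose removal most lowers the score for (u, v)."""
    return min(
        edges,
        key=lambda e: common_neighbors_score(
            build_adj([x for x in edges if x != e], nodes), u, v
        ),
    )

if __name__ == "__main__":
    # Toy graph: the target pair (0, 1) shares neighbors 2 and 3.
    nodes = range(5)
    edges = [(0, 2), (1, 2), (0, 3), (1, 3), (3, 4)]
    adj = build_adj(edges, nodes)
    print(common_neighbors_score(adj, 0, 1))   # 2: predictor favors link 0-1
    print(best_edge_deletion(edges, nodes, 0, 1))  # (0, 2): score drops to 1
```

Real attack methods such as the evolutionary and deep coding approaches discussed in the paper search over many candidate perturbations under a budget rather than greedily removing one edge, but the objective has the same shape: modify the graph structure so the predictor's score for the target links degrades.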
Pages: 168-185
Number of pages: 18
Related papers
50 records in total
  • [41] Graph Adversarial Attacks and Defense: An Empirical Study on Citation Graph
    Pham, Chau
    Pham, Vung
    Dang, Tommy
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020, : 2553 - 2562
  • [42] Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification
    Wang, Xin
    Chang, Heng
    Xie, Beini
    Bian, Tian
    Zhou, Shiji
    Wang, Daixin
    Zhang, Zhiqiang
    Zhu, Wenwu
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (05) : 2166 - 2178
  • [43] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350
  • [44] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 6246 - 6250
  • [45] Exploratory Adversarial Attacks on Graph Neural Networks
    Lin, Xixun
    Zhou, Chuan
    Yang, Hong
    Wu, Jia
    Wang, Haibo
    Cao, Yanan
    Wang, Bin
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 1136 - 1141
  • [46] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 2847 - 2856
  • [47] Denoised Internal Models: A Brain-inspired Autoencoder Against Adversarial Attacks
    Liu, Kai-Yuan
    Li, Xing-Yu
    Lai, Yu-Rui
    Su, Hang
    Wang, Jia-Chen
    Guo, Chun-Xu
    Xie, Hong
    Guan, Ji-Song
    Zhou, Yi
    MACHINE INTELLIGENCE RESEARCH, 2022, 19 (05) : 456 - 471
  • [49] Interdiction models for delaying adversarial attacks against critical information technology infrastructure
    Zheng, Kaiyue
    Albert, Laura A.
    NAVAL RESEARCH LOGISTICS, 2019, 66 (05) : 411 - 429