DeepEC: Adversarial attacks against graph structure prediction models

Cited by: 18
Authors
Xian, Xingping [1 ]
Wu, Tao [1 ]
Qiao, Shaojie [2 ]
Wang, Wei [3 ]
Wang, Chao [4 ]
Liu, Yanbing [5 ]
Xu, Guangxia [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Cybersecur & Informat Law, Chongqing, Peoples R China
[2] Chengdu Univ Informat Technol, Sch Software Engn, Chengdu, Peoples R China
[3] Sichuan Univ, Inst Cybersecur, Chengdu, Peoples R China
[4] Chongqing Univ, Inst Elect Engn, Chongqing, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Chongqing Engn Lab Internet & Informat Secur, Chongqing, Peoples R China
[6] Chongqing Univ Posts & Telecommun, Dept Software Engn, Chongqing, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Graph data; Adversarial attacks; Link prediction; Structural perturbation; Deep ensemble coding; LINK-PREDICTION; COMPLEX NETWORKS;
DOI
10.1016/j.neucom.2020.07.126
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Inspired by the practical importance of graph-structured data, link prediction, one of the most frequently applied tasks on graph data, has garnered considerable attention in recent years, and it has been widely applied in item recommendation, privacy inference attacks, knowledge graph completion, fraud detection, and other fields. However, recent studies show that machine learning-based intelligent systems are vulnerable to adversarial attacks, which has inspired much research on the security problems of machine learning in contexts such as computer vision, natural language processing, and the physical world. Nonetheless, there is a lack of understanding of the vulnerability of link prediction methods in the face of adversarial attacks. To unveil their weaknesses and aid the development of robust link prediction methods, we propose a deep architecture-based adversarial attack method, called Deep Ensemble Coding, against link prediction. In particular, based on the assumption that links play different structural roles in structure organization, we propose a deep linear coding-based structure enhancement mechanism to generate adversarial examples. We also empirically investigate other adversarial attack methods for graph data, including heuristic and evolutionary perturbation methods. Based on comprehensive experiments conducted on various real-world networks, we conclude that the proposed adversarial attack method performs satisfactorily against link prediction. Moreover, we observe that state-of-the-art link prediction algorithms are vulnerable to adversarial attacks and that, for adversarial defense, the attack can be viewed as a robustness evaluation tool for constructing robust link prediction methods. (c) 2020 Elsevier B.V. All rights reserved.
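The abstract describes attacks that perturb graph structure, adding or removing a small number of links so that link prediction degrades on targeted node pairs. As a rough illustration of that general idea only (this is not the paper's Deep Ensemble Coding method), the minimal sketch below shows a heuristic perturbation attack against a simple common-neighbors link predictor; the example graph, the target pair, and the perturbation budget are all hypothetical choices.

    # Minimal sketch of a heuristic structural-perturbation attack on link prediction.
    # NOTE: this is NOT the DeepEC method from the paper; it only illustrates the
    # general idea of editing graph structure to lower a similarity-based link score.
    # The graph, the target pair, and the budget below are hypothetical.
    import networkx as nx

    def common_neighbors_score(graph, u, v):
        # Common-neighbors similarity: number of shared neighbors of u and v.
        return len(set(graph[u]) & set(graph[v]))

    def heuristic_attack(graph, target, budget):
        # Greedily delete edges between the target pair and their common
        # neighbors, so the predicted likelihood of the target link drops.
        g = graph.copy()
        u, v = target
        for _ in range(budget):
            shared = set(g[u]) & set(g[v])
            if not shared:
                break  # no common neighbors left; the score is already 0
            w = shared.pop()
            # Removing either incident edge destroys this common neighbor.
            edge = (u, w) if g.has_edge(u, w) else (v, w)
            g.remove_edge(*edge)
        return g

    if __name__ == "__main__":
        g = nx.karate_club_graph()      # example graph (hypothetical choice)
        target = (0, 33)                # hypothetical non-edge to "hide"
        print("score before:", common_neighbors_score(g, *target))
        attacked = heuristic_attack(g, target, budget=3)
        print("score after: ", common_neighbors_score(attacked, *target))

A learned attack along the lines sketched in the abstract would instead rank candidate perturbations by the structural role of each link, for example via a deep coding of the adjacency structure, rather than greedily by shared neighbors as in this toy baseline.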
Pages: 168-185
Page count: 18
Related papers
50 records in total
  • [21] Adversarial Attacks Against Deep Generative Models on Data: A Survey
    Sun, Hui
    Zhu, Tianqing
    Zhang, Zhiqiu
    Jin, Dawei
    Xiong, Ping
    Zhou, Wanlei
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (04) : 3367 - 3388
  • [22] Survey of Physical Adversarial Attacks Against Object Detection Models
    Cai, Wei
    Di, Xingyu
    Jiang, Xinhao
    Wang, Xin
    Gao, Weijie
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (10) : 61 - 75
  • [23] Blind Adversarial Training: Towards Comprehensively Robust Models Against Blind Adversarial Attacks
    Xie, Haidong
    Xiang, Xueshuang
    Dong, Bin
    Liu, Naijin
    ARTIFICIAL INTELLIGENCE, CICAI 2023, PT II, 2024, 14474 : 15 - 26
  • [24] Adversarial Defense on Harmony: Reverse Attack for Robust AI Models Against Adversarial Attacks
    Kim, Yebon
    Jung, Jinhyo
    Kim, Hyunjun
    So, Hwisoo
    Ko, Yohan
    Shrivastava, Aviral
    Lee, Kyoungwoo
    Hwang, Uiwon
    IEEE ACCESS, 2024, 12 : 176485 - 176497
  • [25] The Robustness of Graph k-Shell Structure Under Adversarial Attacks
    Zhou, Bo
    Lv, Yuqian
    Mao, Yongchao
    Wang, Jinhuan
    Yu, Shanqing
    Xuan, Qi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (03) : 1797 - 1801
  • [26] Structack: Structure-based Adversarial Attacks on Graph Neural Networks
    Hussain, Hussain
    Duricic, Tomislav
    Lex, Elisabeth
    Helic, Denis
    Strohmaier, Markus
    Kern, Roman
    PROCEEDINGS OF THE 32ND ACM CONFERENCE ON HYPERTEXT AND SOCIAL MEDIA (HT '21), 2021, : 111 - 120
  • [27] Adversarial Attacks on Scene Graph Generation
    Zhao, Mengnan
    Zhang, Lihe
    Wang, Wei
    Kong, Yuqiu
    Yin, Baocai
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3210 - 3225
  • [28] Adversarial attacks against dynamic graph neural networks via node injection
    Jiang, Yanan
    Xia, Hui
    HIGH-CONFIDENCE COMPUTING, 2024, 4 (01):
  • [29] SAM: Query-efficient Adversarial Attacks against Graph Neural Networks
    Zhang, Chenhan
    Zhang, Shiyao
    Yu, James J. Q.
    Yu, Shui
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2023, 26 (04)
  • [30] Fortifying graph neural networks against adversarial attacks via ensemble learning
    Zhou, Chenyu
    Huang, Wei
    Miao, Xinyuan
    Peng, Yabin
    Kong, Xianglong
    Cao, Yi
    Chen, Xi
    KNOWLEDGE-BASED SYSTEMS, 2025, 309