Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning

Cited by: 168
Authors
Liu, Yixin [1 ]
Li, Zhao [2 ]
Pan, Shirui [1 ]
Gong, Chen [3 ,4 ]
Zhou, Chuan [5 ]
Karypis, George [6 ]
Affiliations
[1] Monash Univ, Fac Informat Technol, Dept Data Sci & AI, Clayton, Vic 3800, Australia
[2] Alibaba Grp, Hangzhou 310000, Peoples R China
[3] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Key Lab Intelligent Percept & Syst High Dimens In, PCA Lab,Minist Educ, Nanjing 210094, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[5] Chinese Acad Sci, Acad Math & Syst Sci, Beijing 100093, Peoples R China
[6] Univ Minnesota, Dept Comp Sci & Engn, Minneapolis, MN 55455 USA
Keywords
Anomaly detection; Task analysis; Graph neural networks; Unsupervised learning; Predictive models; Pattern matching; Training; Attributed networks; Contrastive self-supervised learning; Graph neural networks (GNNs); Unsupervised learning
DOI
10.1109/TNNLS.2021.3068344
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Anomaly detection on attributed networks has attracted considerable research interest because attributed networks are widely used to model complex systems. Recently, deep learning-based anomaly detection methods have shown promising results over shallow approaches, especially on networks with high-dimensional attributes and complex structures. However, existing approaches, which employ a graph autoencoder as their backbone, do not fully exploit the rich information of the network, resulting in suboptimal performance. Furthermore, these methods do not directly target anomaly detection in their learning objective and fail to scale to large networks due to their full-graph training mechanism. To overcome these limitations, in this article, we present a novel Contrastive self-supervised Learning framework for Anomaly detection on attributed networks (CoLA for abbreviation). Our framework fully exploits the local information in network data by sampling a novel type of contrastive instance pair, which captures the relationship between each node and its neighboring substructure in an unsupervised way. Meanwhile, a well-designed graph neural network (GNN)-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure and to measure the agreement of each instance pair with its output scores. The scores predicted by the contrastive learning model over multiple rounds are then used to evaluate the abnormality of each node via statistical estimation. In this way, the learning model is trained with an anomaly detection-aware objective. Furthermore, since the input of the GNN module consists of batches of instance pairs rather than the full network, our framework adapts flexibly to large networks. Experimental results show that our proposed framework outperforms state-of-the-art baseline methods on all seven benchmark data sets.
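The abstract's core idea can be illustrated with a minimal sketch: for each node, score its agreement with its own local subgraph (positive pair) and with a random other node's subgraph (negative pair), then estimate abnormality from the score gap over multiple rounds. This is a toy illustration only; the one-layer GCN, the dot-product discriminator, the `neighbors` toy graph, and the untrained weight matrix `W` are all simplifying assumptions, not the paper's exact model (CoLA uses random-walk subgraph sampling, a trained bilinear discriminator, and anonymizes the target node inside its positive subgraph).

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_embed(adj, feats, weight):
    """One-layer GCN sketch: row-normalized propagation, linear map, tanh."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    return np.tanh((adj / deg) @ feats @ weight)

def pair_score(target_feat, sub_adj, sub_feats, weight):
    """Agreement between a target node and a subgraph: mean-readout of the
    subgraph embedding vs. the node's projection, squashed by a sigmoid.
    (The paper uses a bilinear discriminator; a dot product suffices here.)"""
    readout = gcn_embed(sub_adj, sub_feats, weight).mean(axis=0)
    z = target_feat @ weight
    return 1.0 / (1.0 + np.exp(-(readout @ z)))

# Toy attributed network: 6 nodes with 4-d attributes; node 5 is injected
# with an attribute anomaly.
feats = rng.normal(0, 0.1, size=(6, 4))
feats[5] += 3.0
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3], 5: [0]}
W = rng.normal(0, 0.5, size=(4, 4))  # untrained weights: illustration only

def local_subgraph(node):
    """Target node plus its neighbors, with the induced adjacency."""
    ns = [node] + neighbors[node]
    A = np.zeros((len(ns), len(ns)))
    for i, u in enumerate(ns):
        for j, v in enumerate(ns):
            if v in neighbors[u]:
                A[i, j] = 1.0
    return A, feats[ns]

def anomaly_score(node, rounds=20):
    """Multi-round estimate: anomaly = mean(negative score - positive score).
    Normal nodes agree with their own neighborhood (high s_pos), so larger
    values indicate higher abnormality."""
    diffs = []
    for _ in range(rounds):
        A, X = local_subgraph(node)
        s_pos = pair_score(feats[node], A, X, W)
        other = int(rng.choice([n for n in range(6) if n != node]))
        An, Xn = local_subgraph(other)
        s_neg = pair_score(feats[node], An, Xn, W)
        diffs.append(s_neg - s_pos)
    return float(np.mean(diffs))

scores = [anomaly_score(n) for n in range(6)]
```

Because each pair is scored independently, the scoring loop consumes batches of (node, subgraph) pairs rather than the whole graph, which is what lets the framework scale to large networks.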
Pages: 2378-2392 (15 pages)
Related Papers
50 records in total
  • [1] Hojjati, Hadi; Armanfard, Narges. Self-Supervised Acoustic Anomaly Detection via Contrastive Learning. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 3253-3257.
  • [2] Kong, Xiangjie; Zhang, Wenyi; Wang, Hui; Hou, Mingliang; Chen, Xin; Yan, Xiaoran; Das, Sajal K. Federated Graph Anomaly Detection via Contrastive Self-Supervised Learning. IEEE Transactions on Neural Networks and Learning Systems, 2024.
  • [3] Li, Jingze; Lian, Zhichao; Li, Min. A Novel Contrastive Learning Framework for Self-Supervised Anomaly Detection. 2022 IEEE International Conference on Image Processing (ICIP), 2022: 3366-3370.
  • [4] Zheng, Yu; Jin, Ming; Liu, Yixin; Chi, Lianhua; Phan, Khoa T.; Chen, Yi-Ping Phoebe. Generative and Contrastive Self-Supervised Learning for Graph Anomaly Detection. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(12): 12220-12233.
  • [5] Guille-Escuret, Charles; Rodriguez, Pau; Vazquez, David; Mitliagkas, Ioannis; Monteiro, Joao. CADet: Fully Self-Supervised Anomaly Detection With Contrastive Learning. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
  • [6] Hu, Yanming; Chen, Chuan; Deng, Bowen; Lai, Yujing; Lin, Hao; Zheng, Zibin; Bian, Jing. Decoupling Anomaly Discrimination and Representation Learning: Self-Supervised Learning for Anomaly Detection on Attributed Graph. Data Science and Engineering, 2024, 9(3): 264-277.
  • [7] Huang, Tianjin; Pei, Yulong; Menkovski, Vlado; Pechenizkiy, Mykola. Hop-Count Based Self-Supervised Anomaly Detection on Attributed Networks. Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2022), Part I, 2023, 13713: 225-241.
  • [8] Zhao, Shuai; Zhu, Linchao; Wang, Xiaohan; Yang, Yi. Slimmable Networks for Contrastive Self-Supervised Learning. International Journal of Computer Vision, 2024.
  • [9] Darban, Zahra Zamanzadeh; Webb, Geoffrey I.; Pan, Shirui; Aggarwal, Charu C.; Salehi, Mahsa. CARLA: Self-Supervised Contrastive Representation Learning for Time Series Anomaly Detection. Pattern Recognition, 2025, 157.
  • [10] Bahariasl, Yeganeh; Kheradpisheh, Saeed Reza. Self-Supervised Contrastive Learning in Spiking Neural Networks. Proceedings of the 13th Iranian/3rd International Machine Vision and Image Processing Conference (MVIP), 2024: 181-185.