EGAD: Evolving Graph Representation Learning with Self-Attention and Knowledge Distillation for Live Video Streaming Events

Cited by: 3
Authors
Antaris, Stefanos [1,2]
Rafailidis, Dimitrios [3]
Girdzijauskas, Sarunas [1]
Affiliations
[1] KTH Royal Inst Technol, Stockholm, Sweden
[2] HiveStreaming AB, Stockholm, Sweden
[3] Maastricht Univ, Maastricht, Netherlands
Keywords
Graph representation learning; live video streaming; evolving graphs; knowledge distillation;
DOI
10.1109/BigData50022.2020.9378219
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this study, we present a dynamic graph representation learning model on weighted graphs to accurately predict the network capacity of connections between viewers in a live video streaming event. We propose EGAD, a neural network architecture that captures the graph evolution by introducing a self-attention mechanism on the weights between consecutive graph convolutional networks. In addition, we account for the fact that neural architectures require a huge number of parameters to train, which increases the online inference latency and negatively influences the user experience in a live video streaming event. To address the high online inference cost incurred by a vast number of parameters, we propose a knowledge distillation strategy. In particular, we design a distillation loss function that first pretrains a teacher model on offline data and then transfers the knowledge from the teacher to a smaller student model with fewer parameters. We evaluate our proposed model on the link prediction task on three real-world datasets generated by live video streaming events. Each event lasted 80 minutes, and every viewer exploited the distribution solution provided by the company Hive Streaming AB. The experiments demonstrate the effectiveness of the proposed model in terms of link prediction accuracy and the number of required parameters when evaluated against state-of-the-art approaches. In addition, we study the distillation performance of the proposed model in terms of compression ratio for different distillation strategies, showing that the proposed model can achieve a compression ratio of up to 15:100 while preserving high link prediction accuracy. For reproducibility, our evaluation datasets and implementation are publicly available at https://stefanosantaris.github.io/EGAD.
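To make the two ideas in the abstract concrete, the sketch below combines (i) self-attention over the GCN weight matrices of consecutive graph snapshots and (ii) a teacher-student distillation loss for a smaller student model. All names here (EvolvingGCNAttention, distillation_loss), the dimensions, and the exact form of the loss are illustrative assumptions; the abstract does not specify the paper's architecture or loss function at this level of detail.

```python
# Minimal PyTorch sketch of the two ingredients described in the abstract:
# (i) self-attention over the GCN weights of consecutive graph snapshots,
# (ii) a teacher-student distillation loss for a smaller student model.
# Module/function names and hyperparameters are assumptions, not the
# paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvolvingGCNAttention(nn.Module):
    """One GCN layer whose effective weights are an attention-weighted
    mixture of the weight matrices learned for preceding snapshots."""

    def __init__(self, in_dim: int, out_dim: int, num_snapshots: int):
        super().__init__()
        # One GCN weight matrix per snapshot of the evolving graph.
        self.weights = nn.Parameter(0.01 * torch.randn(num_snapshots, in_dim, out_dim))
        # Attention vector scoring each snapshot's weight matrix.
        self.attn = nn.Parameter(0.01 * torch.randn(out_dim))

    def forward(self, adj: torch.Tensor, x: torch.Tensor, t: int) -> torch.Tensor:
        past = self.weights[: t + 1]                         # (t+1, in, out)
        # Score each past weight matrix, then softmax over snapshots.
        scores = torch.einsum("sio,o->s", torch.tanh(past), self.attn)
        alpha = F.softmax(scores, dim=0)                     # (t+1,)
        w = torch.einsum("s,sio->io", alpha, past)           # attended weights
        # Standard GCN propagation with the attended weight matrix.
        return torch.relu(adj @ x @ w)


def distillation_loss(student_pred, teacher_pred, target, alpha=0.5):
    """Weighted sum of the task loss (predicting edge weights, i.e. the
    network capacity between viewers) and a soft loss that matches the
    frozen, offline-pretrained teacher's predictions."""
    hard = F.mse_loss(student_pred, target)        # supervised link weights
    soft = F.mse_loss(student_pred, teacher_pred)  # mimic the teacher
    return alpha * hard + (1.0 - alpha) * soft
```

In this sketch, the teacher would be trained offline first; during distillation its parameters are frozen so that only the student, built with smaller hidden dimensions and hence fewer parameters, receives gradients.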
Pages: 1455-1464
Page count: 10
Related Papers
50 items in total (10 shown)
  • [1] VStreamDRLS: Dynamic Graph Representation Learning with Self-Attention for Enterprise Distributed Video Streaming Solutions
    Antaris, Stefanos
    Rafailidis, Dimitrios
    2020 IEEE/ACM INTERNATIONAL CONFERENCE ON ADVANCES IN SOCIAL NETWORKS ANALYSIS AND MINING (ASONAM), 2020, : 486 - 493
  • [2] SSAN: Separable Self-Attention Network for Video Representation Learning
    Guo, Xudong
    Guo, Xun
    Lu, Yan
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 12613 - 12622
  • [3] Structured self-attention architecture for graph-level representation learning
    Fan, Xiaolong
    Gong, Maoguo
    Xie, Yu
    Jiang, Fenlong
    Li, Hao
    PATTERN RECOGNITION, 2020, 100
  • [4] Self-attention with Functional Time Representation Learning
    Xu, Da
    Ruan, Chuanwei
    Kumar, Sushant
    Korpeoglu, Evren
    Achan, Kannan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [5] LEARNING HIERARCHICAL SELF-ATTENTION FOR VIDEO SUMMARIZATION
    Liu, Yen-Ting
    Li, Yu-Jhe
    Yang, Fu-En
    Chen, Shang-Fu
    Wang, Yu-Chiang Frank
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3377 - 3381
  • [6] Script event prediction method based on self-attention mechanism and graph representation learning
    Hu, Meng
    Bai, Lu
    Yang, Mei
    2022 IEEE 6TH ADVANCED INFORMATION TECHNOLOGY, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (IAEAC), 2022, : 722 - 726
  • [7] Compact Cloud Detection with Bidirectional Self-Attention Knowledge Distillation
    Chai, Yajie
    Fu, Kun
    Sun, Xian
    Diao, Wenhui
    Yan, Zhiyuan
    Feng, Yingchao
    Wang, Lei
    REMOTE SENSING, 2020, 12 (17)
  • [8] Evolving Knowledge Graph Representation Learning with Multiple Attention Strategies for Citation Recommendation System
    Liu, Jhih-Chen
    Chen, Chiao-Ting
    Lee, Chi
    Huang, Szu-Hao
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (02) : 1 - 26
  • [9] Meta-Reinforcement Learning via Buffering Graph Signatures for Live Video Streaming Events
    Antaris, Stefanos
    Rafailidis, Dimitrios
    Girdzijauskas, Sarunas
    PROCEEDINGS OF THE 2021 IEEE/ACM INTERNATIONAL CONFERENCE ON ADVANCES IN SOCIAL NETWORKS ANALYSIS AND MINING, ASONAM 2021, 2021, : 385 - 392
  • [10] Cascade Prediction model based on Dynamic Graph Representation and Self-Attention
    Zhang F.
    Wang X.
    Wang R.
    Tang Q.
    Han Y.
    Dianzi Keji Daxue Xuebao/Journal of the University of Electronic Science and Technology of China, 2022, 51 (01): 83 - 90