EGAD: Evolving Graph Representation Learning with Self-Attention and Knowledge Distillation for Live Video Streaming Events

Cited by: 3
Authors
Antaris, Stefanos [1 ,2 ]
Rafailidis, Dimitrios [3 ]
Girdzijauskas, Sarunas [1 ]
Affiliations
[1] KTH Royal Inst Technol, Stockholm, Sweden
[2] HiveStreaming AB, Stockholm, Sweden
[3] Maastricht Univ, Maastricht, Netherlands
Keywords
Graph representation learning; live video streaming; evolving graphs; knowledge distillation;
DOI
10.1109/BigData50022.2020.9378219
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this study, we present a dynamic graph representation learning model on weighted graphs to accurately predict the network capacity of connections between viewers in a live video streaming event. We propose EGAD, a neural network architecture that captures the graph evolution by introducing a self-attention mechanism on the weights between consecutive graph convolutional networks. In addition, we account for the fact that such neural architectures require a huge number of parameters to train, which increases the online inference latency and negatively influences the user experience in a live video streaming event. To address the high online inference cost incurred by this vast number of parameters, we propose a knowledge distillation strategy. In particular, we design a distillation loss function that first pretrains a teacher model on offline data and then transfers the knowledge from the teacher to a smaller student model with fewer parameters. We evaluate our proposed model on the link prediction task on three real-world datasets generated by live video streaming events. The events lasted 80 minutes, and each viewer exploited the distribution solution provided by the company Hive Streaming AB. The experiments demonstrate the effectiveness of the proposed model in terms of link prediction accuracy and number of required parameters when evaluated against state-of-the-art approaches. In addition, we study the distillation performance of the proposed model in terms of compression ratio for different distillation strategies, and show that the proposed model can achieve a compression ratio of up to 15:100 while preserving high link prediction accuracy. For reproduction purposes, our evaluation datasets and implementation are publicly available at https://stefanosantaris.github.io/EGAD.
Pages: 1455-1464
Page count: 10
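
The abstract describes two mechanisms: self-attention over consecutive graph convolutional networks to capture graph evolution, and a teacher-student distillation loss to shrink the model for low-latency online inference. The sketch below is a simplified, illustrative rendering of both ideas in PyTorch; it is not taken from the EGAD paper or its released implementation. All class, function, and parameter names (SnapshotGCNLayer, EvolvingGraphEncoder, distillation_loss, alpha, tau) are hypothetical, and the attention here acts over per-snapshot node embeddings rather than over the GCN weight matrices as EGAD does, while the loss is a generic hard/soft blend rather than the paper's exact distillation objective.

# Minimal, illustrative sketch (assumptions noted above); PyTorch is an assumed framework.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SnapshotGCNLayer(nn.Module):
    """One graph convolution A_hat @ X @ W on a single weighted snapshot."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: (N, N) normalized weighted adjacency, x: (N, in_dim) node features
        return F.relu(self.linear(adj @ x))


class EvolvingGraphEncoder(nn.Module):
    """Encodes a sequence of graph snapshots and mixes the per-snapshot node
    embeddings with self-attention, so the final embedding reflects evolution.
    (Simplification: EGAD attends over GCN weights, not embeddings.)"""

    def __init__(self, in_dim, hid_dim, num_heads=2):
        super().__init__()
        self.gcn = SnapshotGCNLayer(in_dim, hid_dim)
        self.attn = nn.MultiheadAttention(hid_dim, num_heads, batch_first=True)

    def forward(self, adjs, feats):
        # adjs: list of T snapshots, each (N, N); feats: (N, in_dim)
        per_snapshot = torch.stack([self.gcn(a, feats) for a in adjs], dim=1)
        # per_snapshot: (N, T, hid_dim); attend over the time dimension
        out, _ = self.attn(per_snapshot, per_snapshot, per_snapshot)
        return out[:, -1]  # node embeddings at the latest snapshot, (N, hid_dim)


def distillation_loss(student_scores, teacher_scores, labels, alpha=0.5, tau=2.0):
    """Blend a supervised link-prediction loss with a soft loss that pushes the
    student's link scores toward those of a frozen, pretrained teacher."""
    hard = F.binary_cross_entropy_with_logits(student_scores, labels)
    soft = F.mse_loss(student_scores / tau, teacher_scores.detach() / tau)
    return alpha * hard + (1.0 - alpha) * soft


# Example usage with toy dimensions (illustrative only):
# enc = EvolvingGraphEncoder(in_dim=8, hid_dim=16, num_heads=2)
# z = enc([torch.eye(5) for _ in range(3)], torch.randn(5, 8))  # (5, 16)

The alpha term trades off fidelity to the ground-truth links against fidelity to the teacher's predictions; the teacher would be pretrained on offline data and kept frozen while the smaller student is trained, mirroring the teacher-to-student transfer outlined in the abstract.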