EGAD: Evolving Graph Representation Learning with Self-Attention and Knowledge Distillation for Live Video Streaming Events

Cited by: 3
Authors
Antaris, Stefanos [1 ,2 ]
Rafailidis, Dimitrios [3 ]
Girdzijauskas, Sarunas [1 ]
Affiliations
[1] KTH Royal Inst Technol, Stockholm, Sweden
[2] HiveStreaming AB, Stockholm, Sweden
[3] Maastricht Univ, Maastricht, Netherlands
Keywords
Graph representation learning; live video streaming; evolving graphs; knowledge distillation;
DOI
10.1109/BigData50022.2020.9378219
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this study, we present a dynamic graph representation learning model on weighted graphs to accurately predict the network capacity of connections between viewers in a live video streaming event. We propose EGAD, a neural network architecture that captures graph evolution by introducing a self-attention mechanism on the weights between consecutive graph convolutional networks. In addition, we account for the fact that neural architectures require a huge number of parameters to train, which increases online inference latency and negatively affects the user experience in a live video streaming event. To address the high online inference latency caused by the vast number of parameters, we propose a knowledge distillation strategy. In particular, we design a distillation loss function that first pretrains a teacher model on offline data and then transfers the knowledge from the teacher to a smaller student model with fewer parameters. We evaluate the proposed model on the link prediction task on three real-world datasets generated by live video streaming events. Each event lasted 80 minutes, and every viewer used the distribution solution provided by the company Hive Streaming AB. The experiments demonstrate the effectiveness of the proposed model in terms of link prediction accuracy and the number of required parameters, when evaluated against state-of-the-art approaches. In addition, we study the distillation performance of the proposed model in terms of compression ratio for different distillation strategies, and show that the proposed model can achieve a compression ratio of up to 15:100 while preserving high link prediction accuracy. For reproducibility, our evaluation datasets and implementation are publicly available at https://stefanosantaris.github.io/EGAD.
Pages: 1455-1464
Page count: 10
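
The abstract above describes two technical components: a self-attention mechanism applied across consecutive graph convolutional networks to capture graph evolution, and a teacher-student knowledge distillation loss that compresses the model for low-latency online inference. Below is a minimal, illustrative PyTorch sketch of these two ideas, not the authors' implementation (the official code is at https://stefanosantaris.github.io/EGAD). All module and parameter names are hypothetical, and the sketch simplifies the paper's design by attending over per-snapshot node embeddings rather than over the GCN weight matrices themselves.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    # One graph convolution over a dense weighted adjacency matrix (kept dense for brevity).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: (N, N) weighted adjacency of one snapshot, x: (N, in_dim) node features
        return F.relu(self.linear(adj @ x))


class EvolvingGCNWithSelfAttention(nn.Module):
    # Encodes a sequence of graph snapshots and mixes the per-snapshot embeddings
    # with self-attention so the final representation reflects how the graph evolves.
    def __init__(self, in_dim, hid_dim, num_heads=2):
        super().__init__()
        self.gcn = SimpleGCNLayer(in_dim, hid_dim)
        self.attn = nn.MultiheadAttention(hid_dim, num_heads, batch_first=True)

    def forward(self, adjacencies, x):
        # adjacencies: list of T snapshots, each (N, N); x: (N, in_dim) node features
        h = torch.stack([self.gcn(a, x) for a in adjacencies], dim=1)  # (N, T, hid_dim)
        out, _ = self.attn(h, h, h)   # attention over the temporal dimension
        return out[:, -1, :]          # node embeddings at the latest snapshot


def distillation_loss(student_scores, teacher_scores, targets, alpha=0.5, temperature=2.0):
    # Hypothetical distillation objective: a supervised link-prediction term plus a term
    # that pulls the student's softened predictions toward those of a frozen, pretrained teacher.
    hard = F.binary_cross_entropy_with_logits(student_scores, targets)
    soft = F.mse_loss(torch.sigmoid(student_scores / temperature),
                      torch.sigmoid(teacher_scores.detach() / temperature))
    return alpha * hard + (1.0 - alpha) * soft

In this sketch, the teacher would be a larger EvolvingGCNWithSelfAttention trained offline on past events; the student shares the architecture with a smaller hidden dimension and is trained with distillation_loss against the frozen teacher's link scores, which is one plausible reading of the compression strategy the abstract outlines.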