A self-attention sequential model for long-term prediction of video streams

Cited by: 0
Authors
Ge, Yunfeng [1 ]
Li, Hongyan [1 ]
Shi, Keyi [1 ]
Affiliations
[1] School of Telecommunications Engineering, Xidian University, Xi'an 710071, China
DOI
10.19665/j.issn1001-2400.20240202
Abstract
Video traffic prediction is a key technology for allocating transmission bandwidth accurately and improving the quality of Internet service. However, the inherent high rate variability and the long-term and short-term dependence of video traffic make fast, accurate, long-term prediction difficult: existing models of sequence dependencies have high complexity, and their prediction accuracy degrades quickly over long horizons. To address the long-term prediction of video streams, a sequential self-attention model with frame-structure feature embedding is proposed. The sequential self-attention model has a strong ability to capture nonlinear relationships in discrete data. Exploiting the differing correlations between video frames, this paper applies a time-series self-attention model to long-term video traffic prediction for the first time. Existing time-series self-attention models cannot effectively represent the category features of video frames; by introducing an embedding layer based on the frame structure, frame-structure information is effectively embedded into the time series to improve the accuracy of the model. The results show that, compared with existing long short-term memory (LSTM) and convolutional neural network (CNN) models, the proposed sequential self-attention model with frame-structure feature embedding infers quickly and reduces the mean absolute error by at least 32%. © 2024 Journal of Xidian University. All Rights Reserved.
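The core idea in the abstract is to add a frame-structure (frame-type) embedding to a frame-size time series before self-attention, since I, P, and B frames have very different size statistics. The sketch below is a minimal, hypothetical NumPy illustration of that idea, not the paper's actual architecture: the weights are random, untrained placeholders, and the I/P/B vocabulary, dimensions, and function name are all assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax for the attention weights
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict_frame_sizes(sizes, frame_types, d_model=16, seed=0):
    """One self-attention layer over a video frame-size series, with a
    frame-type (I/P/B) embedding added to each time step beforehand.
    Weights are random placeholders standing in for trained parameters."""
    rng = np.random.default_rng(seed)
    # hypothetical frame-type vocabulary: 0 = I, 1 = P, 2 = B
    type_emb = rng.normal(0, 0.1, (3, d_model))
    # project scalar sizes to d_model and add the frame-structure embedding
    w_in = rng.normal(0, 0.1, (1, d_model))
    x = np.asarray(sizes, float)[:, None] @ w_in + type_emb[frame_types]
    # single-head scaled dot-product self-attention over the sequence
    Wq, Wk, Wv = (rng.normal(0, 0.1, (d_model, d_model)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model))
    h = attn @ v
    # linear head: one predicted size per time step
    w_out = rng.normal(0, 0.1, (d_model, 1))
    return (h @ w_out).ravel()

# toy GoP pattern I B B P repeated: sizes in KB, types 0=I, 1=P, 2=B
sizes = [120.0, 30.0, 28.0, 60.0, 118.0, 29.0, 31.0, 62.0]
types = [0, 2, 2, 1, 0, 2, 2, 1]
pred = predict_frame_sizes(sizes, types)
```

In a trained model the type embedding lets attention distinguish, say, a large I frame from a large P frame even when their sizes are similar, which is the "category feature" the abstract says plain time-series self-attention cannot represent.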
Pages: 88 - 102
Related papers
50 items total
  • [41] Core Interests Focused Self-attention for Sequential Recommendation
    Ai, Zhengyang
    Wang, Shupeng
    Jia, Siyu
    Guo, Shu
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2022, PT II, 2022, : 306 - 314
  • [42] Weight Adjustment Framework for Self-Attention Sequential Recommendation
    Su, Zheng-Ang
    Zhang, Juan
    APPLIED SCIENCES-BASEL, 2024, 14 (09):
  • [43] Time Interval Aware Self-Attention for Sequential Recommendation
    Li, Jiacheng
    Wang, Yujie
    McAuley, Julian
    PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM '20), 2020, : 322 - 330
  • [44] Vehicle Interaction Behavior Prediction with Self-Attention
    Li, Linhui
    Sui, Xin
    Lian, Jing
    Yu, Fengning
    Zhou, Yafu
    SENSORS, 2022, 22 (02)
  • [45] Mechanics of Next Token Prediction with Self-Attention
    Li, Yingcong
    Huang, Yixiao
    Ildiz, M. Emrullah
    Rawat, Ankit Singh
    Oymak, Samet
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [46] Relational Self-Attention: What's Missing in Attention for Video Understanding
    Kim, Manjin
    Kwon, Heeseung
    Wang, Chunyu
    Kwak, Suha
    Cho, Minsu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [47] Improving Ship Fuel Consumption and Carbon Intensity Prediction Accuracy Based on a Long Short-Term Memory Model with Self-Attention Mechanism
    Wang, Zhihuan
    Lu, Tianye
    Han, Yi
    Zhang, Chunchang
    Zeng, Xiangming
    Li, Wei
    APPLIED SCIENCES-BASEL, 2024, 14 (18):
  • [48] Self-attention binary neural tree for video summarization
    Fu, Hao
    Wang, Hongxing
    PATTERN RECOGNITION LETTERS, 2021, 143 : 19 - 26
  • [50] An improved sequential recommendation model based on spatial self-attention mechanism and meta learning
    Ni, Jianjun
    Shen, Tong
    Tang, Guangyi
    Shi, Pengfei
    Yang, Simon X.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (21) : 60003 - 60025