Communication Traffic Prediction with Continual Knowledge Distillation

Cited by: 1
Authors
Li, Hang [1 ]
Wang, Ju [1 ]
Hu, Chengming [1 ]
Chen, Xi [1 ]
Liu, Xue [1 ]
Jang, Seowoo [2 ]
Dudek, Gregory [1 ]
Affiliations
[1] Samsung Elect, Mississauga, ON, Canada
[2] Samsung Elect, Suwon, South Korea
Keywords
traffic prediction; knowledge distillation;
DOI
10.1109/ICC45855.2022.9838521
CLC classification
TN [Electronic technology; communication technology]
Discipline code
0809
Abstract
Accurate traffic volume estimation and prediction are essential for advanced communication network functions, such as automatic operations and predictive resource allocation. Although machine learning (ML)-based approaches have achieved great success at this task, existing approaches suffer from two drawbacks that limit their real-world applicability. First, ML-based prediction models developed in the past may be obsolete now, since real-world communication traffic patterns and volumes keep changing, leading to prediction errors. Second, most Base Stations (BSs) can only store a small amount of data due to limited storage capacity and high storage costs, which prevents training an accurate prediction model. In this paper, we propose a novel framework that adapts the prediction model to the constantly changing traffic using only a small amount of current traffic data. Specifically, the framework first extracts as much knowledge as possible from historical traffic data using a proposed two-branch neural network design, which includes a prediction module and a reconstruction module. Then, the framework transfers knowledge from an old (past) prediction model to a new (current) model during the model update using a proposed continual knowledge distillation technique. Evaluations on a real-world dataset show that the proposed framework reduces the Mean Absolute Error (MAE) of traffic prediction by up to 9.62% compared to state-of-the-art prediction methods.
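The continual knowledge distillation described in the abstract can be sketched as a combined training loss: a supervised term on the few current traffic samples plus a distillation term that keeps the updated (student) model's predictions close to those of the old (teacher) model. The sketch below is illustrative only; the function name `continual_kd_loss`, the mean-squared-error terms, and the `alpha` weighting are assumptions, not the paper's exact formulation.

```python
import numpy as np

def continual_kd_loss(student_pred, target, teacher_pred, alpha=0.5):
    """Balance fitting the few current samples against retaining the
    old model's knowledge. `alpha` weights new data vs. distillation
    (hypothetical; the paper's exact loss may differ)."""
    supervised = np.mean((student_pred - target) ** 2)      # error on current data
    distill = np.mean((student_pred - teacher_pred) ** 2)   # drift from old model
    return alpha * supervised + (1.0 - alpha) * distill

# Toy traffic volumes (arbitrary units): the student fits current data
# while staying near the old (teacher) model's predictions.
target = np.array([10.0, 12.0, 11.0])
teacher_pred = np.array([9.0, 13.0, 10.5])
student_pred = np.array([9.8, 12.2, 10.9])

loss = continual_kd_loss(student_pred, target, teacher_pred, alpha=0.5)
```

In this setup only the small batch of current data needs to be stored at the BS; the historical knowledge is carried implicitly by the teacher's predictions, which matches the storage-constrained motivation above.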
Pages: 5481 - 5486
Page count: 6