Online Meta-Learning for Hybrid Model-Based Deep Receivers

Cited by: 11
Authors
Raviv, Tomer [1 ]
Park, Sangwoo [2 ]
Simeone, Osvaldo [2 ]
Eldar, Yonina C. [3 ]
Shlezinger, Nir [1 ]
Affiliations
[1] Ben-Gurion University of the Negev, School of Electrical and Computer Engineering, IL-8410501 Beer Sheva, Israel
[2] King's College London, Department of Engineering, London WC2R 2LS, England
[3] Weizmann Institute of Science, Faculty of Mathematics and Computer Science, IL-7610001 Rehovot, Israel
Funding
Israel Science Foundation
Keywords
Wireless communications; model-based deep learning; deep receivers; meta-learning; soft interference cancellation; communication systems; augmentation; algorithm; codes
DOI
10.1109/TWC.2023.3241841
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Recent years have witnessed growing interest in the application of deep neural networks (DNNs) for receiver design, which can potentially be applied in complex environments without relying on knowledge of the channel model. However, the dynamic nature of communication channels often leads to rapid distribution shifts, which may require periodic retraining. This paper formulates a data-efficient two-stage training method that facilitates rapid online adaptation. Our training mechanism uses a predictive meta-learning scheme to train rapidly from data corresponding to both current and past channel realizations. Our method is applicable to any DNN-based receiver, and does not require transmission of new pilot data for training. To illustrate the proposed approach, we study DNN-aided receivers that utilize an interpretable model-based architecture, and introduce a modular training strategy based on predictive meta-learning. We demonstrate our techniques in simulations on a synthetic linear channel, a synthetic non-linear channel, and a COST 2100 channel. Our results show that the proposed online training scheme allows receivers to outperform previous techniques based on self-supervision and joint learning by a margin of up to 2.5 dB in coded bit error rate in rapidly-varying scenarios.
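To make the training flow concrete, below is a minimal PyTorch sketch of a predictive meta-learning loop in the spirit of the abstract: a shared initialization is meta-trained on pairs of past channel blocks (adapt on block t, evaluate on block t+1) and then fine-tuned online from data of the current block in place of new pilots. The ReceiverDNN module, the first-order MAML update, the hyperparameters, and the synthetic data are all illustrative assumptions, not the authors' architecture or training code.

# Minimal sketch of predictive meta-learning for online receiver adaptation.
# All names and hyperparameters are illustrative, not the paper's implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReceiverDNN(nn.Module):
    """Toy stand-in for a deep receiver mapping received samples to bit logits."""
    def __init__(self, n_rx=4, n_bits=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_rx, 32), nn.ReLU(), nn.Linear(32, n_bits))
    def forward(self, y):
        return self.net(y)

def inner_adapt(model, y, bits, steps=2, lr=1e-2):
    """Clone the model and take a few SGD steps on one block of (received, bits) data."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.binary_cross_entropy_with_logits(adapted(y), bits).backward()
        opt.step()
    return adapted

def meta_train(model, blocks, meta_lr=1e-3, epochs=50):
    """Predictive first-order MAML: adapt on block t (support), evaluate on block t+1
    (query), and update the shared initialization with the query gradient."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    for _ in range(epochs):
        for (y_t, b_t), (y_next, b_next) in zip(blocks[:-1], blocks[1:]):
            adapted = inner_adapt(model, y_t, b_t)
            query_loss = F.binary_cross_entropy_with_logits(adapted(y_next), b_next)
            grads = torch.autograd.grad(query_loss, list(adapted.parameters()))
            meta_opt.zero_grad()
            for p, g in zip(model.parameters(), grads):
                p.grad = g.clone()   # first-order approximation: reuse query gradients
            meta_opt.step()
    return model

def online_adapt(model, y_new, bits_new, steps=5, lr=1e-2):
    """Online stage: fine-tune from the meta-learned initialization using data from the
    current block (e.g., re-encoded decoded bits in place of newly transmitted pilots)."""
    return inner_adapt(model, y_new, bits_new, steps=steps, lr=lr)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Synthetic "past blocks": each block is (received samples, transmitted bits).
    blocks = [(torch.randn(64, 4), torch.randint(0, 2, (64, 2)).float()) for _ in range(6)]
    rx = meta_train(ReceiverDNN(), blocks)
    rx_now = online_adapt(rx, *blocks[-1])   # stand-in for the current channel block
    print("adapted logits shape:", rx_now(blocks[-1][0]).shape)

Note that the paper studies interpretable, model-based receiver architectures with a modular training strategy; the fully connected network above is only a generic placeholder for such a receiver.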
Pages: 6415-6431
Page count: 17