Realistic Channel Models Pre-training

Cited by: 1
Authors
Huangfu, Yourui [1 ]
Wang, Jian [1 ]
Xu, Chen [1 ]
Li, Rong [1 ]
Ge, Yiqun [2 ]
Wang, Xianbin [1 ]
Zhang, Huazi [1 ]
Wang, Jun [1 ]
Affiliations
[1] Huawei Technol, Hangzhou Res Ctr, Hangzhou, Peoples R China
[2] Huawei Technol, Ottawa Res Ctr, Ottawa, ON, Canada
Keywords
DOI
10.1109/gcwkshps45667.2019.9024572
Chinese Library Classification
TP39 [Applications of Computers];
Subject Classification Codes
081203; 0835
Abstract
In this paper, we propose a neural-network-based realistic channel model that combines the accuracy of deterministic channel models with the uniformity of stochastic channel models. To facilitate this realistic channel modeling, a multi-domain channel embedding method combined with a self-attention mechanism is proposed to extract channel features from multiple domains simultaneously. This "one model to fit them all" solution uses available wireless channel data as the sole data set for self-supervised pre-training. With user permission, network operators or other organizations can use available user-specific data to fine-tune the pre-trained realistic channel model for channel-related downstream tasks. Moreover, we show that even without fine-tuning, the pre-trained realistic channel model is itself a useful tool thanks to its understanding of the wireless channel.
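
The abstract describes a multi-domain channel embedding fed into a self-attention encoder that is pre-trained on wireless channel data with a self-supervised objective and later fine-tuned for downstream tasks. Below is a minimal, hypothetical PyTorch sketch of that idea; it is not the authors' implementation, and all module names, dimensions, the summation-based fusion of domain embeddings, and the masked-reconstruction pre-training objective are assumptions made purely for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiDomainChannelEmbedding(nn.Module):
    """Project per-domain channel features (e.g. time, frequency, space) into a shared space.
    The per-domain projections are summed so every token carries multi-domain context
    (an assumed fusion scheme, not necessarily the authors' choice)."""
    def __init__(self, domain_dims, d_model=128):
        super().__init__()
        self.projections = nn.ModuleList(nn.Linear(d, d_model) for d in domain_dims)

    def forward(self, domain_inputs):
        # domain_inputs: list of tensors, each of shape (batch, seq_len, domain_dim)
        return sum(proj(x) for proj, x in zip(self.projections, domain_inputs))

class ChannelEncoder(nn.Module):
    """Self-attention encoder pre-trained to reconstruct masked channel tokens."""
    def __init__(self, domain_dims, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = MultiDomainChannelEmbedding(domain_dims, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Predict the concatenated raw per-domain features back from the hidden states.
        self.reconstruct = nn.Linear(d_model, sum(domain_dims))

    def forward(self, domain_inputs, mask):
        tokens = self.embed(domain_inputs)
        tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0)  # zero out masked positions
        hidden = self.encoder(tokens)
        return self.reconstruct(hidden)

def pretrain_step(model, domain_inputs, optimizer, mask_ratio=0.15):
    """One self-supervised step: mask random positions and reconstruct only them."""
    target = torch.cat(domain_inputs, dim=-1)            # (batch, seq_len, sum(domain_dims))
    mask = torch.rand(target.shape[:2]) < mask_ratio     # boolean mask over token positions
    pred = model(domain_inputs, mask)
    loss = F.mse_loss(pred[mask], target[mask])           # loss only on masked tokens
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage with three assumed domains of widths 32, 32, and 16:
#   model = ChannelEncoder([32, 32, 16])
#   opt = torch.optim.Adam(model.parameters(), lr=1e-3)
#   loss = pretrain_step(model, [torch.randn(8, 64, d) for d in (32, 32, 16)], opt)
# Fine-tuning for a downstream task would replace the reconstruction head with a
# task-specific head while reusing the pre-trained embedding and encoder weights.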
Pages: 6