Integrating Pre-Trained Language Model With Physical Layer Communications

Cited: 0
Authors
Lee, Ju-Hyung [1 ,2 ]
Lee, Dong-Ho [3 ]
Lee, Joohan [1 ]
Pujara, Jay [3 ]
Affiliations
[1] Univ Southern Calif USC, Ming Hsieh Dept Elect & Comp Engn, Los Angeles, CA 90007 USA
[2] Nokia, Sunnyvale, CA 94085 USA
[3] Univ Southern Calif USC, Informat Sci Inst, Los Angeles, CA 90007 USA
Funding
U.S. National Science Foundation
Keywords
Artificial intelligence; Semantics; Vectors; Wireless communication; Noise; Data models; Decoding; Physical layer communications; language model; VQ-VAE; natural language processing (NLP); link-level simulation; SEMANTIC COMMUNICATIONS
DOI
10.1109/TWC.2024.3452481
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
The burgeoning field of on-device AI communication, where devices exchange information directly through embedded foundation models such as language models (LMs), requires robust, efficient, and generalizable communication frameworks. However, integrating these frameworks with existing wireless systems and effectively managing noise and bit errors pose significant challenges. In this work, we introduce a practical on-device AI communication framework integrated with physical layer (PHY) communication functions and demonstrated on a link-level simulator. Our framework incorporates end-to-end training with channel noise to enhance resilience, employs vector quantized variational autoencoders (VQ-VAE) for efficient and robust communication, and utilizes pre-trained encoder-decoder transformers for improved generalization. Simulations across various communication scenarios reveal that our framework achieves a 50% reduction in transmission size while demonstrating substantial generalization ability and noise robustness under standardized 3GPP channel models.
Pages: 17266-17278
Number of pages: 13
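
To make the pipeline described in the abstract concrete, the following is a minimal sketch of that kind of architecture: a transmitter-side encoder produces semantic embeddings, a VQ-VAE bottleneck maps them to discrete codebook indices, the quantized representation passes through a noisy channel during end-to-end training, and a receiver-side decoder recovers tokens. All module sizes, the simple AWGN channel, and the randomly initialized transformer layers standing in for the paper's pre-trained encoder-decoder LM and 3GPP link-level channel models are illustrative assumptions, not the authors' implementation; the VQ-VAE codebook and commitment losses are also omitted for brevity.

```python
# Hedged sketch of an on-device AI communication link with a VQ-VAE bottleneck
# and channel-noise-aware end-to-end training. Illustrative only.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with straight-through gradients."""

    def __init__(self, num_codes: int = 256, dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, seq_len, dim)
        flat = z.reshape(-1, z.shape[-1])                 # (B*L, D)
        dists = torch.cdist(flat, self.codebook.weight)   # (B*L, K)
        indices = dists.argmin(dim=-1)                    # discrete symbols to transmit
        quantized = self.codebook(indices).view_as(z)
        # Straight-through estimator so gradients flow back to the encoder
        # (codebook/commitment losses omitted in this sketch).
        quantized = z + (quantized - z).detach()
        return quantized, indices.view(z.shape[:-1])


def awgn(x: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Add white Gaussian noise at a given SNR (stand-in for 3GPP channel models)."""
    signal_power = x.pow(2).mean()
    noise_power = signal_power / (10 ** (snr_db / 10))
    return x + torch.sqrt(noise_power) * torch.randn_like(x)


class OnDeviceAICommLink(nn.Module):
    """TX encoder -> VQ bottleneck -> noisy channel -> RX decoder -> token logits."""

    def __init__(self, vocab: int = 32000, dim: int = 64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.embed = nn.Embedding(vocab, dim)
        self.tx_encoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for pre-trained LM encoder
        self.vq = VectorQuantizer(dim=dim)
        self.rx_decoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for pre-trained LM decoder
        self.lm_head = nn.Linear(dim, vocab)

    def forward(self, tokens: torch.Tensor, snr_db: float = 10.0) -> torch.Tensor:
        z = self.tx_encoder(self.embed(tokens))   # semantic encoding at the transmitter
        zq, _ = self.vq(z)                        # discrete, compressible representation
        zq_noisy = awgn(zq, snr_db)               # end-to-end training through channel noise
        return self.lm_head(self.rx_decoder(zq_noisy))  # recovered token logits at the receiver


if __name__ == "__main__":
    model = OnDeviceAICommLink()
    tokens = torch.randint(0, 32000, (2, 16))
    logits = model(tokens, snr_db=5.0)
    print(logits.shape)  # torch.Size([2, 16, 32000])
```

In this kind of design, only the VQ codebook indices need to be mapped to bits for transmission, which is where a reduction in transmission size relative to sending raw token or embedding payloads would come from; training with noise injected between the quantizer and the receiver is what gives the link its robustness to channel impairments.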