CSI Acquisition in Internet of Vehicle Network: Federated Edge Learning With Model Pruning and Vector Quantization

Cited by: 0
Authors
Wang, Yi [1 ,2 ]
Zhi, Junlei [1 ,2 ]
Mei, Linsheng [3 ]
Huang, Wei [3 ]
Affiliations
[1] Zhengzhou Univ Aeronaut, Sch Elect & Informat, Zhengzhou 450046, Henan, Peoples R China
[2] Zhengzhou Univ Aeronaut, Henan Key Lab Gen Aviat Technol, Zhengzhou 450046, Henan, Peoples R China
[3] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230601, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
CSI acquisition; federated learning; Internet of vehicle; model pruning; vector quantization; FDD MASSIVE MIMO; CHANNEL ESTIMATION; COMMUNICATION; DESIGN;
DOI
10.1155/int/5813659
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conventional machine learning (ML)-based channel state information (CSI) acquisition overlooks the potential privacy disclosure and estimation overhead problems caused by transmitting pilot datasets during the estimation stage. In this paper, we propose federated edge learning for CSI acquisition to protect data privacy in an Internet of Vehicles network with a massive antenna array. To reduce the channel estimation overhead, a joint model pruning and vector quantization algorithm for the network gradient parameters is presented, which reduces the amount of information exchanged between the centralized server and the devices. This scheme allows local fine-tuning to adapt the global model to the channel characteristics of each device. In addition, we provide theoretical guarantees of convergence and a closed-form bound on the quantization error. Simulation results demonstrate that the proposed federated learning (FL)-based CSI acquisition scheme with model pruning and vector quantization efficiently improves channel estimation performance while reducing the communication overhead.
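The abstract does not spell out the paper's exact pruning and quantization rules, but the two compression steps it names are commonly realized as top-k magnitude pruning of the gradient followed by nearest-codeword vector quantization of the survivors. The following minimal sketch (hypothetical function names; `keep_ratio`, the codebook, and the sub-vector length are assumptions, not taken from the paper) illustrates how a client could compress a gradient before uploading it to the edge server:

```python
import numpy as np

def prune_topk(grad, keep_ratio):
    """Top-k magnitude pruning: zero out all but the largest-|g| entries."""
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest |g|
    pruned = np.zeros_like(flat)
    pruned[idx] = flat[idx]
    return pruned.reshape(grad.shape)

def vector_quantize(grad, codebook):
    """Map each length-d sub-vector of the gradient to its nearest codeword.

    Only the integer codes need to be transmitted; the server dequantizes
    with the shared codebook. Assumes grad.size is divisible by d.
    """
    d = codebook.shape[1]
    vecs = grad.ravel().reshape(-1, d)
    # squared Euclidean distance from every sub-vector to every codeword
    dists = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dists.argmin(axis=1)                  # what the device uploads
    dequantized = codebook[codes].reshape(grad.shape)
    return codes, dequantized

# Toy example: prune half the entries, then quantize with a 2-word codebook.
g = np.array([0.1, -2.0, 0.05, 3.0])
sparse = prune_topk(g, keep_ratio=0.5)            # keeps -2.0 and 3.0
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
codes, approx = vector_quantize(np.array([0.1, -0.1, 0.9, 1.2]), codebook)
```

The communication saving comes from sending only the nonzero positions of the pruned gradient plus one small integer code per sub-vector, instead of full-precision floats for every parameter.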
Pages: 13
Related Papers
50 records in total
  • [31] A lightweight deep neural network model and its applications based on channel pruning and group vector quantization
    Huang, Mingzhong
    Liu, Yan
    Zhao, Lijie
    Wang, Guogang
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 : 5333 - 5346
  • [32] Accelerating federated learning for IoT in big data analytics with pruning, quantization and selective updating
    Xu, Wenyuan
    Fang, Weiwei
    Ding, Yi
    Zou, Meixia
    Xiong, Naixue
    IEEE Access, 2021, 9 : 38457 - 38466
  • [33] Federated Learning With Heterogeneous Quantization Bit Allocation and Aggregation for Internet of Things
    Chen, Shengbo
    Li, Le
    Wang, Guanghui
    Pang, Meng
    Shen, Cong
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (02) : 3132 - 3143
  • [35] Enabling Intelligence at Network Edge: An Overview of Federated Learning
    Yang, Howard H.
    Zhao, Zhongyuan
    Quek, Tony Q. S.
    ZTE COMMUNICATIONS, 2020, 18 (02) : 2 - 10
  • [36] Decentralized Federated Learning on the Edge: From the Perspective of Quantization and Graphical Topology
    Yan, Zhigang
    Li, Dong
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (21): : 34172 - 34186
  • [37] Joint Optimization of Bandwidth Allocation and Gradient Quantization for Federated Edge Learning
    Yan, Hao
    Tang, Bin
    Ye, Baoliu
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, PT III, 2022, 13473 : 444 - 455
  • [38] Joint client selection and resource allocation for federated edge learning with imperfect CSI
    Zhou, Sheng
    Wang, Liangmin
    Wu, Weihua
    Feng, Li
    COMPUTER NETWORKS, 2025, 257
  • [39] LEARNING VECTOR QUANTIZATION FOR THE PROBABILISTIC NEURAL NETWORK
    BURRASCANO, P
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1991, 2 (04): : 458 - 461
  • [40] PrVFL: Pruning-Aware Verifiable Federated Learning for Heterogeneous Edge Computing
    Wang, Xigui
    Yu, Haiyang
    Chen, Yuwen
    Sinnott, Richard O.
    Yang, Zhen
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 15062 - 15079