Communication-Efficient Satellite-Ground Federated Learning Through Progressive Weight Quantization

Cited by: 2
Authors
Yang, Chen [1]
Yuan, Jinliang [1]
Wu, Yaozong [1]
Sun, Qibo [1]
Zhou, Ao [1]
Wang, Shangguang [1]
Xu, Mengwei [1]
Affiliation
[1] Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing 100876, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Satellites; Training; Low earth orbit satellites; Data models; Bandwidth; Quantization (signal); Space vehicles; In-orbit computing; satellite network; federated learning; SYSTEMS;
DOI
10.1109/TMC.2024.3358804
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Large constellations of Low Earth Orbit (LEO) satellites have been launched for Earth observation and satellite-ground communication, collecting massive imagery and sensor data. These data can enhance the AI capabilities of satellites to address global challenges such as real-time disaster navigation and mitigation. Prior studies proposed leveraging federated learning (FL) across satellites and ground stations to collaboratively train a shared machine learning (ML) model in a privacy-preserving manner. However, they mostly focus on individual challenges such as limited ground-to-satellite bandwidth, short connection windows, and long connection cycles, while ignoring how these challenges compound when deploying an efficient FL framework in space. In this paper, we propose an efficient satellite-ground FL framework, SatelliteFL, to address these three challenges collectively. Its key idea is to ensure that each satellite completes its per-round training within each connection window. Moreover, we design a progressive block-wise quantization algorithm that determines a unique bitwidth for each block of the ML model to maximize model utility without exceeding the connection window. We evaluate SatelliteFL by plugging an implemented FL platform into real-world satellite networks and satellite images. The results show that SatelliteFL accelerates convergence by up to 2.8x and improves the bandwidth utilization ratio by up to 9.3x compared to state-of-the-art methods.
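The abstract's core idea, assigning each model block its own bitwidth so the total upload fits within a connection-window budget, can be illustrated with a small sketch. The paper's actual algorithm is not reproduced here; this is a minimal greedy illustration under assumed uniform symmetric quantization, and all names (`quantize_block`, `assign_bitwidths`, the candidate bitwidths) are hypothetical:

```python
import numpy as np

def quantize_block(w, bits):
    """Uniform symmetric quantization of one weight block to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = max(float(np.max(np.abs(w))) / levels, 1e-12)
    q = np.round(w / scale)
    return q * scale  # dequantized values, as the ground station would recover them

def assign_bitwidths(blocks, budget_bits, candidates=(2, 4, 8, 16)):
    """Greedy per-block bitwidth assignment under a total bit budget.

    Start every block at the lowest candidate bitwidth, then repeatedly
    upgrade the block whose quantization error shrinks most per extra bit,
    as long as the total payload stays within the budget (a stand-in for
    what the connection window allows to be transmitted).
    """
    bits = [candidates[0]] * len(blocks)
    used = sum(b.size * bits[i] for i, b in enumerate(blocks))
    while True:
        best_gain, best_i = 0.0, None
        for i, w in enumerate(blocks):
            idx = candidates.index(bits[i])
            if idx + 1 >= len(candidates):
                continue  # already at the highest candidate bitwidth
            nxt = candidates[idx + 1]
            extra = w.size * (nxt - bits[i])
            if used + extra > budget_bits:
                continue  # upgrading this block would blow the budget
            err_now = np.mean((w - quantize_block(w, bits[i])) ** 2)
            err_nxt = np.mean((w - quantize_block(w, nxt)) ** 2)
            gain = (err_now - err_nxt) / extra  # error reduction per extra bit
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:
            break  # no affordable upgrade improves the model
        idx = candidates.index(bits[best_i])
        used += blocks[best_i].size * (candidates[idx + 1] - bits[best_i])
        bits[best_i] = candidates[idx + 1]
    return bits
```

For example, three blocks of 100 weights with a 2400-bit budget each end up at 8 bits: every upgrade to 8 bits fits and reduces error, while any 16-bit upgrade would exceed the budget.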
Pages: 8999-9011
Page count: 13
Related Papers (50 in total)
  • [1] Mao, Yuzhu; Zhao, Zihao; Yan, Guangfeng; Liu, Yang; Lan, Tian; Song, Linqi; Ding, Wenbo. Communication-Efficient Federated Learning with Adaptive Quantization. ACM Transactions on Intelligent Systems and Technology, 2022, 13(04)
  • [2] Qu, Linping; Song, Shenghui; Tsui, Chi-Ying. FedDQ: Communication-Efficient Federated Learning with Descending Quantization. 2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022: 281-286
  • [3] Jhunjhunwala, Divyansh; Gadhikar, Advait; Joshi, Gauri; Eldar, Yonina C. Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning. 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021: 3110-3114
  • [4] Chen, Mingzhe; Shlezinger, Nir; Poor, H. Vincent; Eldar, Yonina C.; Cui, Shuguang. Communication-Efficient Federated Learning. Proceedings of the National Academy of Sciences of the United States of America, 2021, 118(17)
  • [5] Reisizadeh, Amirhossein; Mokhtari, Aryan; Hassani, Hamed; Jadbabaie, Ali; Pedarsani, Ramtin. FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization. International Conference on Artificial Intelligence and Statistics, Vol 108, 2020, 108: 2021-2030
  • [6] Zheng, Jiali; Tang, Jing. Communication-Efficient Federated Learning Based on Compressed Sensing and Ternary Quantization. Applied Intelligence, 2025, 55(02)
  • [7] Hoenig, Robert; Zhao, Yiren; Mullins, Robert. DAdaQuant: Doubly-Adaptive Quantization for Communication-Efficient Federated Learning. International Conference on Machine Learning, Vol 162, 2022
  • [8] Gong, Yongkang; Yu, Dongxiao; Cheng, Xiuzhen; Yuen, Chau; Bennis, Mehdi; Debbah, Merouane. Computation Offloading and Quantization Schemes for Federated Satellite-Ground Graph Networks. IEEE Transactions on Wireless Communications, 2024, 23(10): 14140-14154
  • [9] Chen, Junjie; Wang, Qianfan; Wan, Hai; Ma, Xiao. A Communication-Efficient Federated Learning by Dynamic Quantization and Free-Ride Coding. 2024 IEEE Wireless Communications and Networking Conference (WCNC 2024), 2024
  • [10] Khan, Afsana; ten Thij, Marijn; Wilbik, Anna. Communication-Efficient Vertical Federated Learning. Algorithms, 2022, 15(08)