Optimized Quantization for Convolutional Deep Neural Networks in Federated Learning

Citations: 0
Authors
Kim, You Jun [1 ]
Hong, Choong Seon [1 ]
Affiliations
[1] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, Gyeonggi Do, South Korea
Keywords
federated learning; OQFL; FPROPS; quantization;
DOI
10.23919/apnoms50412.2020.9236949
CLC Classification
TN [Electronic technology; communication technology];
Subject Classification Code
0809;
Abstract
Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server. It is useful when the central server cannot collect data. However, the absence of data on the central server means that data-driven deep network compression is not possible. Deep network compression is important because it enables inference even on devices with limited capacity. In this paper, we propose a new quantization method that significantly reduces FPROPS (floating-point operations per second) in deep networks without leaking user data in federated learning. Quantization parameters are trained with the ordinary learning loss and are updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL learns deep networks and their quantization while maintaining security in a distributed network environment, including edge computing. We introduce the OQFL method and simulate it on various convolutional deep neural networks. We show that OQFL is feasible for the most representative convolutional deep neural networks. Surprisingly, OQFL (4 bits) can preserve the accuracy of conventional federated learning (32 bits) on the test dataset.
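The abstract states that quantization parameters are trained by the ordinary learning loss and updated simultaneously with the weights. Below is a minimal sketch of that idea, assuming an LSQ-style learnable step size with a straight-through estimator and FedAvg aggregation of both weights and step sizes; the class names, the 4-bit default, and the rule of averaging quantization parameters together with weights are illustrative assumptions, not the paper's exact OQFL procedure.

```python
# Sketch only: learnable fake-quantization in PyTorch, trained end-to-end
# with the task loss, as one plausible reading of the abstract.
import torch
import torch.nn as nn

class LearnedQuant(nn.Module):
    def __init__(self, num_bits: int = 4, init_step: float = 0.05):
        super().__init__()
        self.qmax = 2 ** (num_bits - 1) - 1   # e.g. +7 for 4 bits
        self.qmin = -self.qmax - 1            # e.g. -8 for 4 bits
        # Step size is a learnable parameter, updated by the same loss
        # and optimizer step as the network weights.
        self.step = nn.Parameter(torch.tensor(init_step))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        scaled = w / self.step
        # Straight-through estimator: rounding acts as identity in the
        # backward pass, so gradients reach both the weights and the step.
        rounded = scaled + (torch.round(scaled) - scaled).detach()
        return torch.clamp(rounded, self.qmin, self.qmax) * self.step

class QuantLinear(nn.Module):
    """Linear layer whose weights pass through the fake-quantizer."""
    def __init__(self, in_features: int, out_features: int, num_bits: int = 4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.quant = LearnedQuant(num_bits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.quant(self.weight), self.bias)

def fedavg(client_states):
    """FedAvg over full state dicts, so the learned quantization steps are
    averaged together with the weights (an assumed server-side behavior)."""
    avg = {k: torch.zeros_like(v) for k, v in client_states[0].items()}
    for state in client_states:
        for k, v in state.items():
            avg[k] += v / len(client_states)
    return avg
```

In this sketch each client trains its quantized model locally on private data, sends only the state dict (weights plus step sizes) to the server, and the server returns the FedAvg result, so no raw data leaves the device.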
Pages: 150-154
Page count: 5
Related Papers (50 total)
  • [31] PerHeFed: A general framework of personalized federated learning for heterogeneous convolutional neural networks
    Le Ma
    YuYing Liao
    Bin Zhou
    Wen Xi
    [J]. World Wide Web, 2023, 26 : 2027 - 2049
  • [32] Deep Reinforcement Learning-based Quantization for Federated Learning
    Zheng, Sihui
    Dong, Yuhan
    Chen, Xiang
    [J]. 2023 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC, 2023,
  • [33] Federated learning with deep convolutional neural networks for the detection of multiple chest diseases using chest x-rays
    Malik, Hassaan
    Anees, Tayyaba
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (23) : 63017 - 63045
  • [34] Fixed Point Quantization of Deep Convolutional Networks
    Lin, Darryl D.
    Talathi, Sachin S.
    Annapureddy, V. Sreekanth
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [35] Flexible Quantization for Efficient Convolutional Neural Networks
    Zacchigna, Federico Giordano
    Lew, Sergio
    Lutenberg, Ariel
    [J]. ELECTRONICS, 2024, 13 (10)
  • [36] Federated Repair of Deep Neural Networks
    Li Calsi, Davide
    Laurent, Thomas
    Arcaini, Paolo
    Ishikawa, Fuyuki
    [J]. PROCEEDINGS OF THE 2024 IEEE/ACM INTERNATIONAL WORKSHOP ON DEEP LEARNING FOR TESTING AND TESTING FOR DEEP LEARNING, DEEPTEST 2024, 2024, : 17 - 24
  • [37] Curriculum Learning for Depth Estimation with Deep Convolutional Neural Networks
    Surendranath, Ajay
    Jayagopi, Dinesh Babu
    [J]. PROCEEDINGS OF THE 2ND MEDITERRANEAN CONFERENCE ON PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE (MEDPRAI-2018), 2018, : 95 - 100
  • [38] Deep Learning Convolutional Neural Networks with Dropout - a Parallel Approach
    Shen, Jingyi
    Shafiq, M. Omair
    [J]. 2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2018, : 572 - 577
  • [39] Learning Deep Graph Representations via Convolutional Neural Networks
    Ye, Wei
    Askarisichani, Omid
    Jones, Alex
    Singh, Ambuj
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (05) : 2268 - 2279
  • [40] WELDON: Weakly Supervised Learning of Deep Convolutional Neural Networks
    Durand, Thibaut
    Thome, Nicolas
    Cord, Matthieu
    [J]. 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 4743 - 4752