Optimized Quantization for Convolutional Deep Neural Networks in Federated Learning

Cited by: 0
Authors
Kim, You Jun [1 ]
Hong, Choong Seon [1 ]
Affiliation
[1] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, Gyeonggi Do, South Korea
Keywords
federated learning; OQFL; FPROPS; quantization
DOI
10.23919/apnoms50412.2020.9236949
CLC number
TN [Electronic Technology, Communication Technology]
Discipline code
0809
Abstract
Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server. It is useful when the central server cannot collect data. However, the absence of data on the central server means that data-driven deep network compression is not possible. Deep network compression is very important because it enables inference even on devices with low capacity. In this paper, we propose a new quantization method that significantly reduces FPROPS (floating-point operations per second) in deep networks without leaking user data in federated learning. Quantization parameters are trained with the ordinary learning loss and updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL learns deep networks and quantization jointly while maintaining security in a distributed network environment, including edge computing. We introduce the OQFL method and simulate it on various convolutional deep neural networks. We show that OQFL is feasible for the most representative convolutional deep neural networks. Surprisingly, OQFL (4 bits) can preserve the accuracy of conventional federated learning (32 bits) on the test dataset.
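The abstract describes quantization parameters that are trained by the ordinary task loss and updated together with the weights inside federated training. Below is a minimal sketch of that idea, assuming a PyTorch-style setup: a fake-quantized convolution whose scale is a learnable parameter (via a straight-through estimator) and a plain FedAvg round. The class and function names (LearnedQuantConv2d, local_update, fedavg) and the quantizer form are illustrative assumptions, not the authors' exact OQFL formulation.

    # Sketch only: learnable per-layer quantization scale trained jointly with the
    # weights by the ordinary task loss, inside a FedAvg-style round. The quantizer
    # form and the aggregation details are assumptions, not the paper's exact method.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnedQuantConv2d(nn.Conv2d):
        """Conv layer whose weights are fake-quantized to `bits` levels with a
        trainable scale; gradients pass through the rounding via a
        straight-through estimator (STE)."""

        def __init__(self, in_ch, out_ch, k, bits=4, **kw):
            super().__init__(in_ch, out_ch, k, **kw)
            self.bits = bits
            self.log_scale = nn.Parameter(torch.zeros(1))  # learned by the task loss

        def forward(self, x):
            qmax = 2 ** (self.bits - 1) - 1
            scale = self.log_scale.exp()
            w = torch.clamp(self.weight / scale, -qmax, qmax)
            w_q = w + (w.round() - w).detach()             # STE: round in forward, identity in backward
            return F.conv2d(x, w_q * scale, self.bias, self.stride,
                            self.padding, self.dilation, self.groups)

    def local_update(model, loader, epochs=1, lr=0.01):
        """One client's local training: weights and quantization scales are
        optimized together by the same cross-entropy loss."""
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                F.cross_entropy(model(x), y).backward()
                opt.step()
        return model.state_dict()

    def fedavg(global_model, client_loaders):
        """Plain FedAvg round: average the clients' updated parameters."""
        states = [local_update(copy.deepcopy(global_model), dl) for dl in client_loaders]
        avg = {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
        global_model.load_state_dict(avg)
        return global_model

A small CNN built from LearnedQuantConv2d layers could then be trained by calling fedavg once per communication round over the clients' data loaders; only quantization-aware parameters, not raw data, would leave the devices.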
Pages: 150-154
Number of pages: 5