Filtering-based Layer-wise Parameter Update Method for Training a Neural Network

Cited by: 0
Authors
Ji, Siyu [1 ]
Zhai, Kaikai [1 ]
Wen, Chenglin [1 ]
Affiliation
[1] Hangzhou Dianzi Univ, Inst Syst Sci & Control Engn, Hangzhou, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Neural Network; Parameter Training; Gradient Descent; Kalman Filtering; Extended Kalman Filtering;
DOI
Not available
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
To address the difficulty of modeling time-varying nonlinear systems under noise interference, a network model with strong generalization ability is established to identify the system. Traditional network parameter training methods, such as gradient descent and least squares, are centralized batch procedures, so it is difficult for them to adaptively update the model parameters as the system changes. First, in order to adaptively update the network parameters and quickly track changes in the system's inputs and outputs, the network weights are treated as time-varying parameters and a subset of the network's parameters is updated with the Kalman filtering algorithm. Then, to further improve the generalization ability of the network, the extended Kalman filter (EKF) is used to update all of the network's parameters. Finally, the effectiveness of the algorithm is verified on the standard UCI-CCPP (Combined Cycle Power Plant) data set.
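The two-stage scheme described in the abstract lends itself to a short sketch. Below is a minimal, illustrative EKF parameter update for a one-hidden-layer regression network: the stacked weights are modeled as a random-walk state and corrected sample by sample. The network size, noise covariances, and data stream here are assumptions for illustration, not the paper's actual configuration. Note that if the hidden layer is frozen and only the linear output weights form the state, the measurement Jacobian is exactly the hidden-activation vector and the same update reduces to the plain Kalman filter of the paper's first stage.

```python
# Minimal sketch (not the paper's code): EKF training of a one-hidden-layer
# regression network, treating the stacked weights as a random-walk state.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 4, 8                                  # e.g. the four UCI-CCPP inputs
n_w = n_hidden * n_in + n_hidden + n_hidden + 1        # W1, b1, w2, b2 stacked

def unpack(theta):
    """Split the flat state vector into the network's weights."""
    i = n_hidden * n_in
    W1 = theta[:i].reshape(n_hidden, n_in)
    b1 = theta[i:i + n_hidden]
    w2 = theta[i + n_hidden:i + 2 * n_hidden]
    b2 = theta[-1]
    return W1, b1, w2, b2

def forward(theta, u):
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(W1 @ u + b1)                           # hidden activations
    return w2 @ h + b2, h

def jacobian(theta, u, h):
    """Measurement Jacobian H = dy/dtheta, shape (1, n_w), computed analytically.
    Freezing W1, b1 and keeping only [w2, b2] as the state gives H = [h, 1],
    i.e. a linear measurement and hence a plain Kalman filter."""
    _, _, w2, _ = unpack(theta)
    dh = w2 * (1.0 - h ** 2)                           # back-prop through tanh
    return np.concatenate([np.outer(dh, u).ravel(),    # dy/dW1
                           dh,                         # dy/db1
                           h,                          # dy/dw2
                           [1.0]])[None, :]            # dy/db2

theta = 0.1 * rng.standard_normal(n_w)                 # weight estimate
P = 100.0 * np.eye(n_w)                                # weight covariance
Q = 1e-5 * np.eye(n_w)                                 # process noise: lets weights drift
R = np.array([[0.1]])                                  # measurement noise variance

def ekf_step(theta, P, u, y):
    P = P + Q                                          # predict (F = I, random walk)
    y_hat, h = forward(theta, u)
    H = jacobian(theta, u, h)
    S = H @ P @ H.T + R                                # innovation covariance (1 x 1)
    K = P @ H.T / S[0, 0]                              # Kalman gain, shape (n_w, 1)
    theta = theta + (K * (y - y_hat)).ravel()          # correct with the innovation
    P = (np.eye(n_w) - K @ H) @ P
    return theta, P

# Stream of synthetic samples; replace with the CCPP inputs/targets in practice.
for _ in range(500):
    u = rng.standard_normal(n_in)
    y = np.sin(u.sum())
    theta, P = ekf_step(theta, P, u, y)
```

Because the output is scalar, the innovation covariance is a 1x1 quantity and each update costs on the order of n_w^2 operations, dominated by the covariance update; this is what makes the per-sample, adaptive updating described in the abstract practical for small networks.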
Pages: 389-394 (6 pages)