Forward layer-wise learning of convolutional neural networks through separation index maximizing

Cited by: 0
Authors
Karimi, Ali [1 ]
Kalhor, Ahmad [1 ]
Tabrizi, Melika Sadeghi [1 ]
Affiliations
[1] Univ Tehran, Coll Engn, Sch Elect & Comp Engn, Tehran, Iran
Keywords
DOI
10.1038/s41598-024-59176-3
Chinese Library Classification (CLC) codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline codes
07 ; 0710 ; 09 ;
Abstract
This paper proposes a forward layer-wise learning algorithm for CNNs in classification problems. The algorithm uses the Separation Index (SI) as a supervised complexity measure to evaluate and train each layer in a forward manner. The method shows that gradually increasing the SI across layers reduces the input data's uncertainties and disturbances, yielding a better feature-space representation. Hence, by approximating the SI with a variant of a local triplet loss at each layer, a gradient-based learning algorithm is proposed to maximize it. Inspired by the NGRAD (Neural Gradient Representation by Activity Differences) hypothesis, the proposed algorithm operates in a forward manner without explicit error information from the last layer. The algorithm's performance is evaluated on image classification tasks using the VGG16, VGG19, AlexNet, and LeNet architectures with the CIFAR-10, CIFAR-100, Raabin-WBC, and Fashion-MNIST datasets. The experiments are additionally applied to text classification tasks using the DBPedia and AG's News datasets. The results demonstrate that the proposed layer-wise learning algorithm outperforms state-of-the-art methods in accuracy and time complexity.
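The Separation Index described in the abstract can be illustrated with a minimal sketch. Assuming the common nearest-neighbor formulation of the SI (the fraction of samples whose nearest neighbor in feature space carries the same class label — an assumption; the paper's exact variant and its triplet-loss surrogate may differ), a layer's output features can be scored as follows:

```python
import numpy as np

def separation_index(features, labels):
    """Fraction of samples whose nearest neighbor (in feature space)
    shares their class label. Values near 1 indicate well-separated
    classes; forward layer-wise training would aim to increase this
    score from one layer's output to the next."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    # Pairwise squared Euclidean distances between all feature vectors.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # exclude each sample from its own search
    nn = d2.argmin(axis=1)        # index of each sample's nearest neighbor
    return float((y[nn] == y).mean())
```

For example, two tight, well-separated clusters with consistent labels give an SI of 1.0, while interleaved labels drive it toward 0; a differentiable surrogate of this quantity (such as the local triplet loss mentioned above) is what each layer would be trained to maximize.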
Pages: 12