Learning with rethinking: Recurrently improving convolutional neural networks through feedback

Cited by: 18
Authors
Li, Xin [1 ]
Jie, Zequn [2 ]
Feng, Jiashi [2 ]
Liu, Changsong [1 ]
Yan, Shuicheng [2 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, State Key Lab Intelligent Technol & Syst, Tsinghua Natl Lab Informat Sci & Technol, Beijing 100084, Peoples R China
[2] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117583, Singapore
Funding
National Natural Science Foundation of China
Keywords
Convolutional neural network; Image classification; Deep learning; FEEDFORWARD; V1;
DOI
10.1016/j.patcog.2018.01.015
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recent years have witnessed the great success of convolutional neural network (CNN) based models in the field of computer vision. CNNs learn hierarchically abstracted features from images in an end-to-end training manner. However, most existing CNN models learn features only through a feedforward structure, and no feedback information from top layers to bottom layers is exploited to let the networks refine themselves. In this paper, we propose a Learning with Rethinking algorithm. By adding a feedback layer that produces an emphasis vector, the model is able to recurrently boost its performance based on its previous predictions. In particular, the algorithm can be employed to boost any pre-trained model. It is tested on four object classification benchmark datasets: CIFAR-100, CIFAR-10, MNIST-background-image, and ILSVRC-2012, and the results demonstrate the advantage of training CNN models with the proposed feedback mechanism. (C) 2018 Published by Elsevier Ltd.
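The abstract's prediction-to-feedback loop can be sketched as follows. This is a toy NumPy illustration of the general idea only, not the paper's architecture: the linear map `W` stands in for a feedforward CNN, `F` is a hypothetical feedback layer, and `rethink` reruns the forward pass with a per-feature emphasis vector derived from the previous prediction. All names, shapes, and the exact form of the emphasis update are assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy stand-ins for the networks (shapes chosen arbitrarily for the sketch):
n_features, n_classes = 16, 4
W = rng.normal(scale=0.1, size=(n_classes, n_features))   # "feedforward CNN"
F = rng.normal(scale=0.1, size=(n_features, n_classes))   # "feedback layer"

def rethink(x, n_iters=3):
    """Recurrently refine a prediction: each pass re-weights the input
    features with an emphasis vector computed from the last prediction."""
    emphasis = np.ones(n_features)        # start with uniform emphasis
    probs = None
    for _ in range(n_iters):
        logits = W @ (x * emphasis)       # feedforward pass on emphasized features
        probs = softmax(logits)
        emphasis = 1.0 + F @ probs        # feedback: prediction -> emphasis vector
    return probs

x = rng.normal(size=n_features)
p = rethink(x)                            # class probabilities after 3 "rethinks"
```

In this sketch the emphasis vector modulates the input features multiplicatively before each new forward pass, which is one simple way a top-down signal could be fed back to bottom layers.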
Pages: 183-194
Page count: 12