Mixup of Feature Maps in a Hidden Layer for Training of Convolutional Neural Network

Cited by: 4
Authors
Oki, Hideki [1 ]
Kurita, Takio [1 ]
Institutions
[1] Hiroshima Univ, Grad Sch Engn, Dept Informat Engn, Higashihiroshima, Japan
DOI: 10.1007/978-3-030-04179-3_56
Chinese Library Classification: TP18 [Artificial intelligence theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
The deep Convolutional Neural Network (CNN) has become very popular as a fundamental technique for image classification and object recognition. To improve recognition accuracy on more complex tasks, deeper networks have been introduced. However, the recognition accuracy of a trained deep CNN drastically decreases for samples that fall outside the regions covered by the training samples. To improve the generalization ability for such samples, Krizhevsky et al. proposed generating additional samples by transforming the existing samples, making the training set richer. This method is known as data augmentation. Hongyi Zhang et al. introduced a data augmentation method called mixup, which achieves state-of-the-art performance on various datasets. Mixup generates new samples by mixing two different training samples; the mixing of two images is implemented as simple image morphing. In this paper, we propose to apply mixup to the feature maps in a hidden layer. To implement mixup in a hidden layer, we use a Siamese network or triplet network architecture to mix the feature maps. Experimental comparison shows that mixup of the feature maps obtained from the first convolution layer is more effective than the original image mixup.
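The mixup operation described above can be sketched as follows; this is a minimal NumPy illustration, not the authors' implementation, assuming one-hot labels and a mixing coefficient drawn from a Beta(α, α) distribution as in Zhang et al.'s formulation. The same convex combination applies unchanged whether `x1` and `x2` are input images (original mixup) or hidden-layer feature maps (the variant this paper proposes).

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two samples (input images or hidden-layer feature maps)
    and their one-hot labels with a single coefficient lam.

    lam ~ Beta(alpha, alpha), as in the original mixup formulation.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2  # convex combination of inputs/features
    y = lam * y1 + (1.0 - lam) * y2  # same combination of the labels
    return x, y, lam
```

For the hidden-layer variant, the two branches of a Siamese or triplet network would each compute the first-convolution-layer feature map of one sample, and this combination would be applied to those maps before the rest of the forward pass.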
Pages: 635-644 (10 pages)
Related Papers (50 total)
  • [1] Layered neural network training with model switching and hidden layer feature regularization
    Kameyama, K
    Taga, K
    [J]. PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 2294 - 2299
  • [2] Convolutional Neural Network Simplification based on Feature Maps Selection
    Rui, Ting
    Zou, Junhua
    Zhou, You
    Fei, Jianchao
    Yang, Chengsong
    [J]. 2016 IEEE 22ND INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2016, : 1207 - 1210
  • [3] Convolutional neural network feature maps selection based on LDA
    Rui, Ting
    Zou, Junhua
    Zhou, You
    Fei, Jianchao
    Yang, Chengsong
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (09) : 10635 - 10649
  • [5] Spectral technique for hidden layer neural network training
    Windeatt, T
    Tebbs, R
    [J]. PATTERN RECOGNITION LETTERS, 1997, 18 (08) : 723 - 731
  • [6] Visualization of Feature Evolution During Convolutional Neural Network Training
    Punjabi, Arjun
    Katsaggelos, Aggelos K.
    [J]. 2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2017, : 311 - 315
  • [7] Structured feature sparsity training for convolutional neural network compression
    Wang, Wei
    Zhu, Liqiang
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2020, 71
  • [8] Feature Mining: A Novel Training Strategy for Convolutional Neural Network
    Xie, Tianshu
    Deng, Jiali
    Cheng, Xuan
    Liu, Minghui
    Wang, Xiaomin
    Liu, Ming
    [J]. APPLIED SCIENCES-BASEL, 2022, 12 (07):
  • [9] Deep Convolutional Neural Network with Mixup for Environmental Sound Classification
    Zhang, Zhichao
    Xu, Shugong
    Cao, Shan
    Zhang, Shunqing
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PT II, 2018, 11257 : 356 - 367
  • [10] Visually Interpretable Fuzzy Neural Classification Network With Deep Convolutional Feature Maps
    Juang, Chia-Feng
    Cheng, Yun-Wei
    Lin, Yeh-Ming
    [J]. IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2024, 32 (03) : 1063 - 1077