IMPROVING CONVOLUTIONAL NEURAL NETWORKS VIA COMPACTING FEATURES

Cited: 0
Authors
Zhou, Liguo [1 ,2 ]
Zhu, Rong [1 ,2 ]
Luo, Yimin [2 ,3 ]
Liu, Siwen [1 ,2 ]
Wang, Zhongyuan [1 ,2 ]
Affiliations
[1] Wuhan Univ, Comp Sch, Natl Engn Res Ctr Multimedia Software, Wuhan, Hubei, Peoples R China
[2] Collaborat Innovat Ctr Geospatial Informat Techno, Wuhan, Hubei, Peoples R China
[3] Wuhan Univ, Remote Sensing Informat Engn Sch, Wuhan, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convolutional neural networks (CNNs); Softmax loss; joint supervision; visual classification; face verification;
DOI
None available
Chinese Library Classification
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
Convolutional neural networks (CNNs) have shown great advantages in computer vision, and loss functions are central to their gradient-descent training. Softmax loss, the combination of the Softmax function and cross-entropy loss, is the most commonly used loss for CNNs, as it continuously increases the discernibility of sample features in classification tasks. Intuitively, to promote the discriminative power of CNNs, learned features are most desirable when inter-class separability and intra-class compactness are maximized simultaneously. Since Softmax loss hardly encourages both properties simultaneously and explicitly, we propose a new method that achieves this simultaneous maximization: it minimizes the distance between features of homogeneous (same-class) samples jointly with Softmax loss, thereby improving CNNs' performance on vision-related tasks. Experiments on both visual classification and face verification datasets validate the effectiveness and advantages of our method.
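The paper's exact formulation is not given in this record, but the idea of joint supervision (Softmax loss plus a term that pulls same-class features together) can be sketched as below. This is an illustrative NumPy sketch, not the authors' implementation: the function names, the class-mean (center-loss-style) compactness term, and the balancing weight `lam` are assumptions.

```python
import numpy as np

def softmax_loss(logits, labels):
    """Standard Softmax (cross-entropy) loss, averaged over the batch."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def compactness_loss(features, labels):
    """Mean squared distance of each feature to its class mean,
    penalizing intra-class spread (pulls homogeneous samples together)."""
    loss = 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        loss += ((fc - fc.mean(axis=0)) ** 2).sum()
    return loss / len(features)

def joint_loss(logits, features, labels, lam=0.1):
    """Joint supervision: Softmax loss plus a weighted compactness term.
    `lam` (assumed hyperparameter) trades off the two objectives."""
    return softmax_loss(logits, labels) + lam * compactness_loss(features, labels)
```

With identical logits, a batch whose same-class features coincide scores a lower joint loss than one whose features are spread out, so gradient descent on this objective tightens each class cluster while the Softmax term keeps classes separated.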
Pages: 2946-2950
Page count: 5
Related Papers
50 records
  • [21] DEEP CONVOLUTIONAL NEURAL NETWORKS FEATURES FOR IMAGE RETRIEVAL
    Kanaparthi, Suresh Kumar
    Raju, U. S. N.
    ADVANCES AND APPLICATIONS IN MATHEMATICAL SCIENCES, 2021, 20 (11): : 2613 - 2626
  • [22] Convolutional Neural Networks Features: Principal Pyramidal Convolution
    Guo, Yanming
    Lao, Songyang
    Liu, Yu
    Bai, Liang
    Liu, Shi
    Lew, Michael S.
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2015, PT I, 2015, 9314 : 245 - 253
  • [23] Improving Preterm Infants' Joint Detection in Depth Images Via Dense Convolutional Neural Networks
    Migliorelli, Lucia
    Frontoni, Emanuele
    Appugliese, Simone
    Cannata, Giuseppe Pio
    Carnielli, Virgilio
    Moccia, Sara
    2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021, : 3013 - 3016
  • [24] Improving Rainfall Forecasting via Radial Basis Function and Deep Convolutional Neural Networks Integration
    Hemalatha, J.
    Vivek, V.
    Sekar, M.
    Devi, M. K. Kavitha
    JOURNAL OF CLIMATE CHANGE, 2023, 9 (04) : 37 - 43
  • [25] Convolutional Neural Networks With Discrete Cosine Transform Features
    Ju, Sanghyeon
    Lee, Youngjoo
    Lee, Sunggu
    IEEE TRANSACTIONS ON COMPUTERS, 2022, 71 (12) : 3389 - 3395
  • [26] LEARNING THE FEATURES OF DIABETIC RETINOPATHY WITH CONVOLUTIONAL NEURAL NETWORKS
    Pratt, H.
    Williams, B. M.
    Broadbent, D.
    Harding, S. P.
    Coenen, F.
    Zheng, Y.
    EUROPEAN JOURNAL OF OPHTHALMOLOGY, 2019, 29 (03) : NP15 - NP16
  • [27] Improving code readability classification using convolutional neural networks
    Mi, Qing
    Keung, Jacky
    Xiao, Yan
    Mensah, Solomon
    Gao, Yujin
    INFORMATION AND SOFTWARE TECHNOLOGY, 2018, 104 : 60 - 71
  • [28] Improving Musical Tag Annotation with Stacking and Convolutional Neural Networks
    da Silva, Juliano Donini
    Gomes da Costa, Yandre Maldonado
    Domingues, Marcos Aurelio
    PROCEEDINGS OF THE 2020 INTERNATIONAL CONFERENCE ON SYSTEMS, SIGNALS AND IMAGE PROCESSING (IWSSIP), 27TH EDITION, 2020, : 393 - 398
  • [29] Improving deep convolutional neural networks with mixed maxout units
    Zhao, Hui-zhen
    Liu, Fu-xian
    Li, Long-yue
    PLOS ONE, 2017, 12 (07):
  • [30] IMPROVING CONVOLUTIONAL RECURRENT NEURAL NETWORKS FOR SPEECH EMOTION RECOGNITION
    Meyer, Patrick
    Xu, Ziyi
    Fingscheidt, Tim
    2021 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP (SLT), 2021, : 365 - 372