Surrogate network-based sparseness hyper-parameter optimization for deep expression recognition

Cited by: 16
Authors
Xie, Weicheng [1 ,2 ,3 ]
Chen, Wenting [1 ,2 ,3 ]
Shen, Linlin [1 ,2 ,3 ]
Duan, Jinming [4 ]
Yang, Meng [5 ]
Affiliations
[1] Shenzhen Univ, Sch Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen, Peoples R China
[3] Shenzhen Univ, Guangdong Key Lab Intelligent Informat Proc, Shenzhen, Peoples R China
[4] Univ Birmingham, Sch Comp Sci, Birmingham, W Midlands, England
[5] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou, Guangdong, Peoples R China
Keywords
Expression recognition; Deep sparseness strategies; Hyper-parameter optimization; Surrogate network; Heuristic optimizer;
DOI
10.1016/j.patcog.2020.107701
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
For facial expression recognition, sparseness constraints on the features or weights can improve the generalization ability of a deep network. However, optimizing the hyper-parameters that fuse different sparseness strategies demands substantial computation when traditional gradient-based algorithms are used. In this work, an iterative framework with a surrogate network is proposed for optimizing the hyper-parameters that fuse different sparseness strategies. In each iteration, a network with significantly smaller model complexity is fitted to the original large network via four Euclidean losses, and the hyper-parameters are optimized on it with heuristic optimizers. Since the surrogate network uses the same deep metrics and embeds the same hyper-parameters as the original network, the optimized hyper-parameters are then used for training the original deep network in the next iteration. While the performance of the proposed algorithm is justified with a tiny model, i.e., LeNet on the FER2013 database, the approach achieves competitive performance on six publicly available expression datasets: FER2013, CK+, Oulu-CASIA, MMI, AFEW and AffectNet. (C) 2020 Elsevier Ltd. All rights reserved.
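The iterative scheme the abstract describes (evaluate the expensive network, fit a cheap surrogate that embeds the same hyper-parameters, optimize the hyper-parameters on the surrogate with a heuristic optimizer, feed the result back) can be sketched in miniature. Everything below is a hypothetical stand-in: a 1-D toy objective replaces training the full deep network, a quadratic least-squares fit replaces the surrogate network with its four Euclidean losses, and plain random search replaces the paper's heuristic optimizers.

```python
import random

import numpy as np


def expensive_objective(lam):
    """Stand-in for the validation loss of the large network as a
    function of a single sparseness hyper-parameter `lam` (hypothetical)."""
    return (lam - 0.3) ** 2 + 0.05  # minimum near lam = 0.3


def heuristic_minimize(surrogate, n_trials=200, rng=None):
    """Cheap heuristic optimizer (random search) run on the surrogate only."""
    rng = rng or random.Random(0)
    best_x, best_y = None, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(0.0, 1.0)
        y = surrogate(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x


def surrogate_hpo(n_iters=5, seed=0):
    rng = random.Random(seed)
    # A few initial evaluations of the expensive objective.
    xs = [rng.uniform(0.0, 1.0) for _ in range(5)]
    ys = [expensive_objective(x) for x in xs]
    for _ in range(n_iters):
        # Fit a cheap quadratic surrogate to the observed (lam, loss) pairs.
        coeffs = np.polyfit(xs, ys, 2)
        surrogate = lambda x: float(np.polyval(coeffs, x))
        # Optimize the hyper-parameter on the surrogate, not the real model.
        x_new = heuristic_minimize(surrogate, rng=rng)
        # Spend one expensive evaluation on the suggested hyper-parameter.
        xs.append(x_new)
        ys.append(expensive_objective(x_new))
    best_y, best_x = min(zip(ys, xs))
    return best_x, best_y


lam, loss = surrogate_hpo()
print(f"best lambda ~ {lam:.3f}, loss ~ {loss:.3f}")
```

The point of the structure, as in the paper, is that the heuristic optimizer's many trial evaluations all hit the cheap surrogate; the expensive model is queried only once per outer iteration.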
Pages: 14
Related Papers
50 items in total
  • [1] HYPER-PARAMETER OPTIMIZATION OF DEEP CONVOLUTIONAL NETWORKS FOR OBJECT RECOGNITION
    Talathi, Sachin S.
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 3982 - 3986
  • [2] Hyper-Parameter Optimization for Deep Learning by Surrogate-based Model with Weighted Distance Exploration
    Li, Zhenhua
    Shoemaker, Christine A.
    2021 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC 2021), 2021, : 917 - 925
  • [3] APPLICATION OF A HYPER-PARAMETER OPTIMIZATION ALGORITHM USING MARS SURROGATE FOR DEEP POLSAR IMAGE CLASSIFICATION MODELS
    Liu, Guangyuan
    Li, Yangyang
    Jiao, Licheng
    IGARSS 2020 - 2020 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2020, : 2591 - 2594
  • [4] HYPER-PARAMETER OPTIMIZATION FOR CONVOLUTIONAL NEURAL NETWORK COMMITTEES BASED ON EVOLUTIONARY ALGORITHMS
    Bochinski, Erik
    Senst, Tobias
    Sikora, Thomas
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 3924 - 3928
  • [5] Hyper-Parameter Optimization Using MARS Surrogate for Machine-Learning Algorithms
    Li, Yangyang
    Liu, Guangyuan
    Lu, Gao
    Jiao, Licheng
    Marturi, Naresh
    Shang, Ronghua
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2020, 4 (03): : 287 - 297
  • [6] Deep Learning Hyper-Parameter Optimization for Video Analytics in Clouds
    Yaseen, Muhammad Usman
    Anjum, Ashiq
    Rana, Omer
    Antonopoulos, Nikolaos
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2019, 49 (01): : 253 - 264
  • [7] Research on Hyper-Parameter Optimization of Activity Recognition Algorithm Based on Improved Cuckoo Search
    Tong, Yu
    Yu, Bo
    ENTROPY, 2022, 24 (06)
  • [8] Random Search for Hyper-Parameter Optimization
    Bergstra, James
    Bengio, Yoshua
    JOURNAL OF MACHINE LEARNING RESEARCH, 2012, 13 : 281 - 305
  • [9] Hyper-parameter Optimization for Latent Spaces
    Veloso, Bruno
    Caroprese, Luciano
    Konig, Matthias
    Teixeira, Sonia
    Manco, Giuseppe
    Hoos, Holger H.
    Gama, Joao
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021: RESEARCH TRACK, PT III, 2021, 12977 : 249 - 264