Surrogate network-based sparseness hyper-parameter optimization for deep expression recognition

Cited: 16
Authors
Xie, Weicheng [1 ,2 ,3 ]
Chen, Wenting [1 ,2 ,3 ]
Shen, Linlin [1 ,2 ,3 ]
Duan, Jinming [4 ]
Yang, Meng [5 ]
Affiliations
[1] Shenzhen Univ, Sch Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen, Peoples R China
[3] Shenzhen Univ, Guangdong Key Lab Intelligent Informat Proc, Shenzhen, Peoples R China
[4] Univ Birmingham, Sch Comp Sci, Birmingham, W Midlands, England
[5] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou, Guangdong, Peoples R China
Keywords
Expression recognition; Deep sparseness strategies; Hyper-parameter optimization; Surrogate network; Heuristic optimizer;
DOI
10.1016/j.patcog.2020.107701
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
For facial expression recognition, sparseness constraints on the features or weights can improve the generalization ability of a deep network. However, optimizing the hyper-parameters that fuse different sparseness strategies demands heavy computation when traditional gradient-based algorithms are used. In this work, an iterative framework with a surrogate network is proposed for optimizing the hyper-parameters that fuse different sparseness strategies. In each iteration, a network with significantly smaller model complexity is fitted to the original large network based on four Euclidean losses, and the hyper-parameters are optimized with heuristic optimizers. Since the surrogate network uses the same deep metrics and embeds the same hyper-parameters as the original network, the optimized hyper-parameters are then used to train the original deep network in the next iteration. While the performance of the proposed algorithm is justified with a tiny model, i.e., LeNet on the FER2013 database, our approach achieves competitive performance on six publicly available expression datasets, i.e., FER2013, CK+, Oulu-CASIA, MMI, AFEW and AffectNet. (C) 2020 Elsevier Ltd. All rights reserved.
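The abstract describes an iterative surrogate loop: evaluate the expensive full network, fit a cheap surrogate to those evaluations, run a heuristic optimizer on the surrogate alone to pick the sparseness hyper-parameter, then repeat. A minimal sketch of that loop in plain Python, with toy stand-ins (`expensive_loss` simulating the full network's validation loss, a crude one-point quadratic surrogate, and random search standing in for the paper's heuristic optimizers; all names and values here are hypothetical, not the paper's actual implementation):

```python
import random

def expensive_loss(lam):
    # Toy stand-in for the full network's validation loss as a function
    # of the sparseness hyper-parameter `lam`; minimum near lam = 0.3.
    return (lam - 0.3) ** 2 + 0.05

def fit_surrogate(samples):
    # Crude surrogate: a quadratic centred on the best sampled point.
    # The paper instead fits a small network to the large one via
    # Euclidean losses; this is only a cheap illustrative proxy.
    best_lam, best_val = min(samples, key=lambda s: s[1])
    return lambda lam: (lam - best_lam) ** 2 + best_val

def heuristic_search(surrogate, trials=200, seed=0):
    # Heuristic optimizer run on the cheap surrogate only
    # (random search here; the paper uses stronger heuristics).
    rng = random.Random(seed)
    cands = [rng.uniform(0.0, 1.0) for _ in range(trials)]
    return min(cands, key=surrogate)

def optimize(iterations=3):
    lam = 0.9  # initial sparseness hyper-parameter
    for _ in range(iterations):
        # 1) A few expensive evaluations of the full network around lam.
        samples = [(l, expensive_loss(l)) for l in (lam, lam * 0.5, lam * 1.5)]
        # 2) Refit the surrogate to those evaluations.
        surrogate = fit_surrogate(samples)
        # 3) Optimize the hyper-parameter on the surrogate, then retrain.
        lam = heuristic_search(surrogate)
    return lam
```

The key cost saving is in step 3: each iteration spends only a handful of expensive full-network evaluations, while the many trials of the heuristic optimizer touch only the cheap surrogate.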
Pages: 14
Related Papers
50 records in total
  • [21] A Comparative study of Hyper-Parameter Optimization Tools
    Shekhar, Shashank
    Bansode, Adesh
    Salim, Asif
    2021 IEEE ASIA-PACIFIC CONFERENCE ON COMPUTER SCIENCE AND DATA ENGINEERING (CSDE), 2021,
  • [22] Efficient Hyper-parameter Optimization with Cubic Regularization
    Shen, Zhenqian
    Yang, Hansi
    Li, Yong
    Kwok, James
    Yao, Quanming
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [23] Hyper-parameter optimization of deep learning model for prediction of Parkinson's disease
    Kaur, Sukhpal
    Aggarwal, Himanshu
    Rani, Rinkle
    MACHINE VISION AND APPLICATIONS, 2020, 31 (05)
  • [24] Cultural Events Classification using Hyper-parameter Optimization of Deep Learning Technique
    Feng Zhipeng
    Gani, Hamdan
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2021, 12 (05) : 603 - 609
  • [25] A GPU Scheduling Framework to Accelerate Hyper-Parameter Optimization in Deep Learning Clusters
    Son, Jaewon
    Yoo, Yonghyuk
    Kim, Khu-rai
    Kim, Youngjae
    Lee, Kwonyong
    Park, Sungyong
    ELECTRONICS, 2021, 10 (03) : 1 - 15
  • [27] Hyper-parameter Optimization Using Continuation Algorithms
    Rojas-Delgado, Jairo
    Jimenez, J. A.
    Bello, Rafael
    Lozano, J. A.
    METAHEURISTICS, MIC 2022, 2023, 13838 : 365 - 377
  • [28] Modified Grid Searches for Hyper-Parameter Optimization
    Lopez, David
    Alaiz, Carlos M.
    Dorronsoro, Jose R.
    HYBRID ARTIFICIAL INTELLIGENT SYSTEMS, HAIS 2020, 2020, 12344 : 221 - 232
  • [29] Hybrid Hyper-parameter Optimization for Collaborative Filtering
    Szabo, Peter
    Genge, Bela
    2020 22ND INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING (SYNASC 2020), 2020, : 210 - 217
  • [30] Deep neural network hyper-parameter tuning through twofold genetic approach
    Kumar, Puneet
    Batra, Shalini
    Raman, Balasubramanian
    SOFT COMPUTING, 2021, 25 (13) : 8747 - 8771