Face Attribute Estimation Using Multi-Task Convolutional Neural Network

Cited by: 0
Authors
Kawai, Hiroya [1]
Ito, Koichi [1 ]
Aoki, Takafumi [1 ]
Affiliations
[1] Tohoku Univ, Grad Sch Informat Sci, 6-6-05,Aramaki Aza Aoba, Sendai, Miyagi 9808579, Japan
Keywords
face attribute estimation; CNN; multi-task learning; deep learning; biometrics;
DOI
10.3390/jimaging8040105
Chinese Library Classification (CLC)
TB8 [Photographic technology];
Discipline code
0804 ;
Abstract
Face attribute estimation can be used to improve the accuracy of face recognition, as well as for customer analysis in marketing, image retrieval, video surveillance, and criminal investigation. The major methods for face attribute estimation are based on Convolutional Neural Networks (CNNs) that treat face attribute estimation as a set of binary classification problems, one per attribute. Although, ideally, a separate feature extractor would be used for each attribute to maximize estimation accuracy, in most cases a single feature extractor is shared across all face attributes for parameter efficiency. This paper proposes a face attribute estimation method using a Merged Multi-CNN (MM-CNN), which automatically optimizes CNN structures for solving multiple binary classification problems, improving both parameter efficiency and accuracy in face attribute estimation. We also propose a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNNs. Through a set of experiments using the CelebA and LFW-a datasets, we demonstrate that MM-CNN with CPR achieves higher face attribute estimation accuracy with fewer weight parameters than conventional methods.
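The two ideas in the abstract, a shared feature extractor feeding one binary classifier per attribute, and replacing fully connected layers with convolutions, can be illustrated with a minimal PyTorch sketch. This is not the authors' MM-CNN architecture: the class name `MultiTaskAttributeCNN`, the toy backbone, and all layer sizes are illustrative assumptions; the heads use 1x1 convolutions plus global average pooling in place of fully connected layers, in the spirit of CPR.

```python
import torch
import torch.nn as nn


class MultiTaskAttributeCNN(nn.Module):
    """Illustrative sketch: shared conv backbone with one binary head
    per face attribute. Heads use 1x1 convolutions + global average
    pooling instead of fully connected layers (CPR-style); the exact
    MM-CNN structure in the paper differs."""

    def __init__(self, num_attributes: int = 40):
        super().__init__()
        # Shared feature extractor (stand-in for the merged backbone)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One two-class head per attribute: a 1x1 conv replaces the FC layer
        self.heads = nn.ModuleList(
            nn.Conv2d(128, 2, kernel_size=1) for _ in range(num_attributes)
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, x):
        feat = self.backbone(x)
        # Each head yields (batch, 2) logits for one binary attribute
        return [self.pool(h(feat)).flatten(1) for h in self.heads]


model = MultiTaskAttributeCNN(num_attributes=5)
logits = model(torch.randn(2, 3, 64, 64))
print(len(logits), logits[0].shape)  # → 5 torch.Size([2, 2])
```

Removing the fully connected layers this way makes each head's parameter count depend only on the number of feature channels (128 × 2 + 2 weights per head here), not on the spatial size of the feature map, which is where the parameter savings come from.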
Pages: 20