Implementation of multimodal biometric recognition via multi-feature deep learning networks and feature fusion

Cited by: 20
Authors
Tiong, Leslie Ching Ow [1 ]
Kim, Seong Tae [1 ]
Ro, Yong Man [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Image & Video Syst Lab, 291 Daehak Ro, Daejeon 34141, South Korea
Keywords
Deep multimodal learning; Multimodal biometric recognition; Multi-feature fusion layers; Texture descriptor representations; FACE RECOGNITION;
DOI
10.1007/s11042-019-7618-0
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
Although facial recognition has been studied extensively, it still faces significant challenges arising from variations in aging, pose, occlusion, resolution, and appearance. In this paper, we propose a Multi-feature Deep Learning Network (MDLN) architecture that uses modalities from the facial and periocular regions, augmented with texture descriptors, to improve recognition performance. Specifically, MDLN is designed as a feature-level fusion approach that correlates the multimodal biometric data with the texture descriptors to create a new feature representation. The proposed MDLN model therefore carries more information in its feature representation, achieving better performance while overcoming limitations that persist in existing unimodal deep learning approaches. We evaluated the proposed model on several public datasets, and our experiments show that MDLN improves biometric recognition performance under challenging conditions, including variations in illumination and appearance, and pose misalignment.
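The feature-level fusion the abstract describes can be sketched in a few lines: per-modality feature vectors (face, periocular, texture descriptor) are concatenated and passed through a learned fusion layer that produces the joint representation. The sketch below is illustrative only, not the authors' MDLN implementation; the feature dimensions, random weights, and function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(face_feat, periocular_feat, texture_feat, w, b):
    """Feature-level fusion sketch: concatenate the per-modality
    feature vectors and map the joint vector through one fully
    connected layer with a ReLU activation. In a real network the
    weights w, b would be learned; here they are placeholders."""
    joint = np.concatenate([face_feat, periocular_feat, texture_feat])
    return np.maximum(0.0, w @ joint + b)

# Hypothetical dimensions: 128-d face, 64-d periocular CNN features,
# and a 59-d texture-descriptor histogram (e.g. a uniform-LBP-style vector).
face = rng.standard_normal(128)
peri = rng.standard_normal(64)
tex = rng.standard_normal(59)

# Fusion layer mapping the 251-d joint vector to a 100-d fused representation.
W = rng.standard_normal((100, 251)) * 0.01
b = np.zeros(100)

fused = fuse_features(face, peri, tex, W, b)
print(fused.shape)  # (100,)
```

In a trained model, the fused vector would then feed a classifier head; the key design point of feature-level (as opposed to score-level) fusion is that the modalities interact inside the network rather than only at the decision stage.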
Pages: 22743-22772 (30 pages)
Related Papers (50 total)
  • [41] Deep Learning Convolutional Network for Bimodal Biometric Recognition with Information Fusion at Feature Level
    Atenco Vazquez, Juan Carlos
    Moreno Rodriguez, Juan Carlos
    Ramirez Cortes, Juan Manuel
    IEEE LATIN AMERICA TRANSACTIONS, 2023, 21 (05) : 652 - 661
  • [42] Mango Leaves Recognition Using Deep Belief Network with MFO and Multi-feature Fusion
    Pankaja, K.
    Suma, V.
    SMART INTELLIGENT COMPUTING AND APPLICATIONS, VOL 2, 2020, 160 : 557 - 565
  • [43] A Multi-Feature Fusion and SSAE-Based Deep Network for Image Semantic Recognition
    Li, Haifang
    Wang, Zhe
    Yin, Guimei
    Deng, Hongxia
    Yang, Xiaofeng
    Yao, Rong
    Gao, Peng
    Cao, Rui
    2019 IEEE FIFTH INTERNATIONAL CONFERENCE ON BIG DATA COMPUTING SERVICE AND APPLICATIONS (IEEE BIGDATASERVICE 2019), 2019, : 322 - 327
  • [44] Multi-Feature Fusion for Enhancing Image Similarity Learning
    Lu, Jian
    Ma, Cheng-Xian
    Zhou, Yan-Ran
    Luo, Mao-Xin
    Zhang, Kai-Bing
    IEEE ACCESS, 2019, 7 : 167547 - 167556
  • [45] Automatic cucumber recognition algorithm for harvesting robots in the natural environment using deep learning and multi-feature fusion
    Mao, Shihan
    Li, Yuhua
    Ma, You
    Zhang, Baohua
    Zhou, Jun
    Wang, Kai
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2020, 170
  • [46] STUDY ON MULTI-BIOMETRIC FEATURE FUSION AND RECOGNITION MODEL
    Cui, Jia
    Li, Jian-Ping
    Lu, Xiao-Jun
    2008 INTERNATIONAL CONFERENCE ON APPERCEIVING COMPUTING AND INTELLIGENCE ANALYSIS (ICACIA 2008), 2008, : 66 - 69
  • [47] Multi-feature Joint Dictionary Learning for Face Recognition
    Yang, Meng
    Wang, Qiangchang
    Wen, Wei
    Lai, Zhihui
    PROCEEDINGS 2017 4TH IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR), 2017, : 629 - 633
  • [48] Feature Level Fusion in Multimodal Biometric Identification
    Belhia, S.
    Gafour, A.
    2012 SECOND INTERNATIONAL CONFERENCE ON INNOVATIVE COMPUTING TECHNOLOGY (INTECH), 2012, : 418 - 423
  • [49] Traffic lights detection and recognition based on multi-feature fusion
    Wang, Wenhao
    Sun, Shanlin
    Jiang, Mingxin
    Yan, Yunyang
    Chen, Xiaobing
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (13) : 14829 - 14846