Identification based on feature fusion of multimodal biometrics and deep learning

Cited by: 0
Authors
Medjahed, Chahreddine [1 ]
Mezzoudj, Freha [2 ]
Rahmoun, Abdellatif [3 ]
Charrier, Christophe [4 ]
Affiliations
[1] Univ Djillali Liabes Sidi Bel Abbes, Dept Comp Sci, EEDIS Lab, Sidi Bel Abbes, Algeria
[2] Hassiba Benbouali Univ Chlef, Dept Comp Sci, Chlef, Algeria
[3] ESI SBA, Dept Comp Sci, Higher Sch Comp Sci, Sidi Bel Abbes, Algeria
[4] Univ Caen Normandie, Dept Multimedia & Internet, GREYC Lab, Caen, France
Keywords
biometrics; multi-biometric system; feature level fusion; score level fusion; deep learning; machine learning; TEXTURE CLASSIFICATION; SCALE; FACE;
DOI
10.1504/IJBM.2023.130649
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper proposes a novel methodology for individual identification based on convolutional neural networks (CNNs) and machine learning (ML) algorithms. The technique fuses biometric modalities at the feature level. For this purpose, several hybrid multimodal biometric systems are used as a benchmark to measure identification accuracy. In these systems, a CNN is used for each modality to extract modality-specific features from the dataset samples, and machine learning algorithms are then used to identify (classify) individuals. In this paper, we emphasise fusing the biometric modalities at the feature level. The proposed algorithms are applied to two challenging databases: the FEI face database and the IITD Palm Print V1 dataset. The results show good accuracies for many of the proposed multimodal biometric person identification systems. Experimental runs on several multimodal systems clearly show that the best identification performance is obtained when ResNet18 is used as the deep learning feature extractor together with a linear discriminant machine learning classifier.
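For illustration, below is a minimal sketch of the kind of pipeline the abstract describes: one ResNet18 backbone per modality (face and palm print) extracts a feature vector, the vectors are concatenated (feature-level fusion), and a linear discriminant classifier identifies the subject. It assumes PyTorch/torchvision and scikit-learn; the random image tensors, the subject labels, and the absence of a train/test split are placeholders, not the authors' FEI/IITD experimental protocol.

import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def make_extractor() -> nn.Module:
    # ResNet18 truncated before its classification head: outputs 512-d pooled features.
    backbone = resnet18()  # random weights here; in practice load pretrained/fine-tuned weights
    return nn.Sequential(*list(backbone.children())[:-1]).eval()

@torch.no_grad()
def extract(extractor: nn.Module, images: torch.Tensor) -> np.ndarray:
    # Map a batch of (N, 3, 224, 224) images to an (N, 512) feature matrix.
    return extractor(images).flatten(1).numpy()

# One CNN feature extractor per modality, as in the benchmarked systems.
face_cnn, palm_cnn = make_extractor(), make_extractor()

# Placeholder data: 40 subjects with 2 samples each for both modalities.
labels = np.repeat(np.arange(40), 2)
face_imgs = torch.rand(len(labels), 3, 224, 224)
palm_imgs = torch.rand(len(labels), 3, 224, 224)

# Feature-level fusion: concatenate the modality-specific feature vectors.
fused = np.concatenate([extract(face_cnn, face_imgs),
                        extract(palm_cnn, palm_imgs)], axis=1)  # shape (N, 1024)

# Identification (classification) with a linear discriminant classifier.
clf = LinearDiscriminantAnalysis().fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))

In a real system the two backbones would carry weights trained or fine-tuned on their respective modalities and the classifier would be evaluated on held-out samples; the untrained backbones above only keep the sketch self-contained.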
Pages: 521-538
Number of pages: 19
Related Papers
50 records in total
  • [31] Deep learning and multimodal feature fusion for the aided diagnosis of Alzheimer's disease
    Jia, Hongfei
    Lao, Huan
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (22): 19585 - 19598
  • [32] Accurate Identification of Submitochondrial Protein Location Based on Deep Representation Learning Feature Fusion
    Sui, Jianan
    Chen, Yuehui
    Cao, Yi
    Zhao, Yaou
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT III, 2023, 14088 : 587 - 596
  • [33] Learning Deep Multimodal Feature Representation with Asymmetric Multi-layer Fusion
    Wang, Yikai
    Sun, Fuchun
    Lu, Ming
    Yao, Anbang
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020: 3902 - 3910
  • [34] Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis
    Suk, Heung-Il
    Lee, Seong-Whan
    Shen, Dinggang
    NEUROIMAGE, 2014, 101 : 569 - 582
  • [35] RoLiVit: Feature Fusion Approach for Multimodal Sentiment Analysis Using Deep Learning
    Shroff, Namrata
    Patel, Shreya
    Shah, Hemani
    SN Computer Science, 6 (4)
  • [37] Deep Feature Fusion Network Model for Iris and Periocular Biometrics
    Lei S.
    Li Y.
    Shan A.
    Zhang W.
    Gongcheng Kexue Yu Jishu/Advanced Engineering Sciences, 2024, 56 (03): 240 - 248
  • [38] Deep Feature Fusion for Iris and Periocular Biometrics on Mobile Devices
    Zhang, Qi
    Li, Haiqing
    Sun, Zhenan
    Tan, Tieniu
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2018, 13 (11) : 2897 - 2912
  • [39] A Multimodal Feature Fusion-Based Deep Learning Method for Online Fault Diagnosis of Rotating Machinery
    Zhou, Funa
    Hu, Po
    Yang, Shuai
    Wen, Chenglin
    SENSORS, 2018, 18 (10)
  • [40] Step integration based information fusion for multimodal biometrics
    Sharma, Aayush
    2007 14TH INTERNATIONAL WORKSHOP ON SYSTEMS, SIGNALS, & IMAGE PROCESSING & EURASIP CONFERENCE FOCUSED ON SPEECH & IMAGE PROCESSING, MULTIMEDIA COMMUNICATIONS & SERVICES, 2007: 415+