Cross-Modality Face Recognition via Heterogeneous Joint Bayesian

Cited by: 18
Authors
Shi, Hailin [1 ]
Wang, Xiaobo [1 ]
Yi, Dong [2 ]
Lei, Zhen [3 ]
Zhu, Xiangyu [1 ]
Li, Stan Z. [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
[2] Alibaba Grp, Hangzhou 311121, Peoples R China
[3] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Keywords
Cross modality; heterogeneous face recognition; joint Bayesian (JB);
DOI
10.1109/LSP.2016.2637400
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
In many face recognition applications, the modalities of the face images in the gallery and probe sets differ, a setting known as heterogeneous face recognition. Reducing the feature gap between images from different modalities is a critical issue in developing a highly accurate face recognition algorithm. Recently, joint Bayesian (JB) has demonstrated superior performance on general face recognition compared to traditional discriminant analysis methods such as subspace learning. However, the original JB treats the two input samples equally, without accounting for the modality difference between them, and may therefore be suboptimal for the heterogeneous face recognition problem. In this work, we extend the original JB by modeling the gallery and probe images with two different Gaussian distributions, yielding a heterogeneous joint Bayesian (HJB) formulation for cross-modality face recognition. The proposed HJB explicitly models the modality difference of image pairs and is therefore able to discriminate same-identity and different-identity face pairs more accurately. Extensive experiments on visible versus near-infrared and ID photo versus spot face recognition problems show the superiority of HJB over previous methods.
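For orientation, the following is a minimal LaTeX sketch of the standard joint Bayesian verification score and of a modality-specific extension in the spirit of the abstract. The symbols S_mu, S_eps^g, S_eps^p, A, and G are illustrative notation assumed here; the exact HJB parameterization used in the paper may differ.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Joint Bayesian (JB): a feature is an identity term plus intra-personal noise.
A face feature is modeled as $x = \mu + \varepsilon$ with
$\mu \sim \mathcal{N}(0, S_\mu)$ and $\varepsilon \sim \mathcal{N}(0, S_\varepsilon)$.
The verification score of a pair $(x_1, x_2)$ is the log-likelihood ratio
\begin{equation*}
  r(x_1, x_2)
  = \log\frac{P(x_1, x_2 \mid H_I)}{P(x_1, x_2 \mid H_E)}
  = x_1^{\top} A\, x_1 + x_2^{\top} A\, x_2 - 2\, x_1^{\top} G\, x_2 ,
\end{equation*}
where $A$ and $G$ are fixed matrices computed from $S_\mu$ and $S_\varepsilon$.
% Heterogeneous variant (assumed form): shared identity, per-modality noise.
A heterogeneous extension consistent with the abstract keeps a shared identity
term but gives the gallery and probe modalities their own noise covariances,
\begin{equation*}
  x_g = \mu + \varepsilon_g,\; \varepsilon_g \sim \mathcal{N}(0, S_\varepsilon^{g}),
  \qquad
  x_p = \mu + \varepsilon_p,\; \varepsilon_p \sim \mathcal{N}(0, S_\varepsilon^{p}),
\end{equation*}
so the joint covariances under $H_I$ and $H_E$, and hence $A$ and $G$, become
modality-dependent.
\end{document}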
Pages: 81-85
Number of pages: 5
Related papers
50 records in total
  • [21] Adversarial Disentanglement Spectrum Variations and Cross-Modality Attention Networks for NIR-VIS Face Recognition
    Hu, Weipeng
    Hu, Haifeng
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 145 - 160
  • [22] Cross-Modality Feature Learning via Convolutional Autoencoder
    Liu, Xueliang
    Wang, Meng
    Zha, Zheng-Jun
    Hong, Richang
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2019, 15 (01)
  • [23] CROSS-MODALITY MATCHING
    AUERBACH, C
    [J]. QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 1973, 25 (NOV): 492 - 495
  • [24] RGBT tracking via cross-modality message passing
    Yang, Rui
    Wang, Xiao
    Li, Chenglong
    Hu, Jinmin
    Tang, Jin
    [J]. NEUROCOMPUTING, 2021, 462 : 365 - 375
  • [25] Cross-Modality Domain Adaptation for hand-vein recognition
    Yang, Shuqiang
    Qin, Huafeng
    El-Yacoubi, Mounim A.
    Liu, Chongwen
    [J]. 2021 INTERNATIONAL CONFERENCE ON CYBER-PHYSICAL SOCIAL INTELLIGENCE (ICCSI), 2021,
  • [26] Cross-modality translations improve recognition by reducing false alarms
    Forrin, Noah D.
    MacLeod, Colin M.
    [J]. MEMORY, 2018, 26 (01) : 53 - 58
  • [27] Facial Expression Recognition Through Cross-Modality Attention Fusion
    Ni, Rongrong
    Yang, Biao
    Zhou, Xu
    Cangelosi, Angelo
    Liu, Xiaofeng
    [J]. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15 (01) : 175 - 185
  • [28] Exploring Cross-Modality Affective Reactions for Audiovisual Emotion Recognition
    Mariooryad, Soroosh
    Busso, Carlos
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2013, 4 (02) : 183 - 196
  • [29] Cross-modality based celebrity face naming for news image collections
    Su, Xueping
    Peng, Jinye
    Feng, Xiaoyi
    Wu, Jun
    Fan, Jianping
    Cui, Li
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2014, 73 (03) : 1643 - 1661
  • [30] Ensemble based extreme learning machine for cross-modality face matching
    Jin, Yi
    Cao, Jiuwen
    Wang, Yizhi
    Zhi, Ruicong
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2016, 75 (19) : 11831 - 11846