Boosting Multi-modal Ocular Recognition via Spatial Feature Reconstruction and Unsupervised Image Quality Estimation

Cited: 2
Authors
Yan, Zihui [1 ]
Wang, Yunlong [1 ]
Zhang, Kunbo [1 ]
Sun, Zhenan [1 ]
He, Lingxiao [2 ]
Affiliations
[1] Chinese Acad Sci, Ctr Res Intelligent Percept & Comp, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
[2] JD AI Res, Beijing 100176, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Iris recognition; periocular recognition; spatial feature reconstruction; fully convolutional network; flexible matching; unsupervised iris quality assessment; adaptive weight fusion; IRIS RECOGNITION; NETWORK;
DOI
10.1007/s11633-023-1415-y
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
In the daily operation of an iris-recognition-at-a-distance (IAAD) system, many low-quality ocular images are acquired. Since the iris regions of these images often fail to meet recognition requirements, the more accessible periocular regions are a valuable complement for recognition. To further boost the performance of IAAD systems, a novel end-to-end framework for multi-modal ocular recognition is proposed. The framework consists of iris/periocular feature extraction and matching, unsupervised iris quality assessment, and a score-level adaptive weighted fusion strategy. First, ocular feature reconstruction (OFR) is proposed to sparsely reconstruct each probe image from high-quality gallery images based on proper feature maps. Next, a new unsupervised iris quality assessment method based on random multiscale embedding robustness is proposed. Unlike existing iris quality assessment methods, it measures the quality of an iris image by the robustness of its representation in the embedding space. Finally, the fusion strategy uses the iris quality score as the fusion weight to combine the complementary information from the iris and periocular regions. Extensive experiments on ocular datasets show that the proposed method clearly outperforms unimodal biometrics and that the fusion strategy significantly improves recognition performance.
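The two key ideas in the abstract, quality scoring via embedding robustness and quality-weighted score fusion, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function names, the perturbation scheme, and the drift-to-quality mapping are all hypothetical simplifications, and `embed_fn` stands in for whatever feature extractor produces the embeddings.

```python
import numpy as np

def embedding_robustness_quality(embed_fn, image, scales=(0.8, 0.9, 1.0, 1.1), rng=None):
    """Score image quality by how stable its embedding is under random
    multiscale perturbations: a high-quality image should yield embeddings
    that barely drift when the input scale changes (hypothetical sketch of
    the robustness idea, not the paper's exact formulation)."""
    rng = np.random.default_rng(rng)
    base = embed_fn(image, scale=1.0)
    drifts = []
    for s in scales:
        jitter = s + rng.uniform(-0.02, 0.02)  # randomly perturb each scale
        emb = embed_fn(image, scale=jitter)
        drifts.append(np.linalg.norm(emb - base))
    # Map mean drift to a (0, 1] quality score: smaller drift -> higher quality.
    return 1.0 / (1.0 + float(np.mean(drifts)))

def adaptive_score_fusion(iris_score, periocular_score, iris_quality):
    """Score-level fusion weighted by iris quality: when the iris image is
    unreliable (low quality), lean on the periocular score instead."""
    w = iris_quality
    return w * iris_score + (1.0 - w) * periocular_score
```

With a quality of 1.0 the fused score equals the iris score, and with 0.0 it falls back entirely to the periocular score; the paper's actual weighting function may differ.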
Pages: 197 - 214
Number of pages: 18
Related Papers
50 records in total
  • [11] Gesture recognition based on multi-modal feature weight
    Duan, Haojie
    Sun, Ying
    Cheng, Wentao
    Jiang, Du
    Yun, Juntong
    Liu, Ying
    Liu, Yibo
    Zhou, Dalin
    Concurrency and Computation: Practice and Experience, 2021, 33 (05)
  • [12] Fusional Recognition for Depressive Tendency With Multi-Modal Feature
    Wang, Hong
    Zhou, Ying
    Yu, Fengping
    Zhao, Lili
    Wang, Caiyu
    Ren, Yanju
    IEEE ACCESS, 2019, 7 : 38702 - 38713
  • [14] Unsupervised Multi-Modal Medical Image Registration via Discriminator-Free Image-to-Image Translation
    Chen, Zekang
    Wei, Jia
    Li, Rui
    PROCEEDINGS OF THE THIRTY-FIRST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2022, 2022, : 834 - 840
  • [15] Multi-modal unsupervised domain adaptation for semantic image segmentation
    Hu, Sijie
    Bonardi, Fabien
    Bouchafa, Samia
    Sidibe, Desire
    PATTERN RECOGNITION, 2023, 137
  • [16] Multi-modal feature fusion for geographic image annotation
    Li, Ke
    Zou, Changqing
    Bu, Shuhui
    Liang, Yun
    Zhang, Jian
    Gong, Minglun
    PATTERN RECOGNITION, 2018, 73 : 1 - 14
  • [17] Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation
    Dong, Guan-Nan
    Pun, Chi-Man
    Zhang, Zheng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 4197 - 4210
  • [18] Multi-modal Image Prediction via Spatial Hybrid U-Net
    Zaman, Akib
    Zhang, Lu
    Yan, Jingwen
    Zhu, Dajiang
    MULTISCALE MULTIMODAL MEDICAL IMAGING, MMMI 2019, 2020, 11977 : 1 - 9
  • [19] Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module
    Hu Zhentao
    Hu Chonghao
    Yang Haoran
    Shuai Weiwei
    High Technology Letters, 2024, 30 (01) : 23 - 30