Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics

Citations: 0
Authors
Ng, Tiong-Sik [1]
Chai, Jacky Chen Long [1]
Low, Cheng-Yaw [2]
Teoh, Andrew Beng Jin [1]
Affiliations
[1] Yonsei University, School of Electrical and Electronic Engineering, College of Engineering, Seoul 03722, Republic of Korea
[2] Institute for Basic Science, Data Science Group, Center for Mathematical and Computational Sciences, Daejeon 34126, Republic of Korea
Keywords
Biological system modeling; Biometric (access control); Channel-wise self-attention; Correlation; Face; Feature extraction; Inter-modal matching; Intra-modal matching; Modal matching; Modality alignment loss; Periocular; Self-supervised learning
DOI
Not available
Abstract
Periocular and face are two common biometric modalities for identity management. Recently, the emergence of conditional biometrics has enabled the correlation between face and periocular to be exploited to enhance each modality's performance, a setting we term intra-modal matching in this paper. However, each modality has limitations, particularly when sunglasses or helmets are worn, occluding the periocular region or the face. A biometric system with inter-modal matching capability between periocular and face is therefore essential to mitigate these challenges. This paper presents a novel reciprocal learning model that utilizes periocular and face conditioning to facilitate flexible intra-modal and inter-modal matching. To address the intra-modal matching challenge, we devise a lightweight Gated Convolutional Channel-wise Self-Attention Network that enables selective attention to shared salient periocular and face features. To bridge the modality gap without sacrificing intra-modal matching performance, we propose a modality- and augmentation-aware contrastive loss that incorporates semi-supervised positive sampling and alignment-specific logit rescaling. Extensive identification and verification experiments on five face-periocular datasets under the open-set protocol attest to the efficacy of our proposed methods. Code is publicly available at https://github.com/tiongsikng/gc2sa_net. © 2005-2012 IEEE.
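To make the channel-wise self-attention idea in the abstract concrete, below is a minimal illustrative sketch in PyTorch, assuming a squeeze-and-excitation-style gating over the channels of a shared backbone's feature map; the class name GatedChannelSelfAttention, the reduction ratio, and the tensor shapes are hypothetical assumptions for illustration, not the released GC2SA-Net implementation.

    import torch
    import torch.nn as nn

    class GatedChannelSelfAttention(nn.Module):
        """Illustrative gated channel-wise self-attention block.
        Hypothetical sketch; not the authors' exact GC2SA-Net code."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze spatial dims to 1x1
            self.gate = nn.Sequential(            # per-channel gating weights
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Re-weight channels so shared salient periocular/face features
            # are emphasized before intra-/inter-modal matching.
            return x * self.gate(self.pool(x))

    # Usage: attend over a feature map from a shared backbone
    feats = torch.randn(4, 256, 14, 14)   # (batch, channels, height, width)
    attn = GatedChannelSelfAttention(256)
    out = attn(feats)                      # same shape, channel-reweighted

The gate produces one scalar weight per channel, so the block adds little overhead, which is consistent with the "lightweight" attention module described in the abstract.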
Pages: 3251-3264