Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics

Cited by: 0
Authors
Ng, Tiong-Sik [1 ]
Chai, Jacky Chen Long [1 ]
Low, Cheng-Yaw [2 ]
Beng Jin Teoh, Andrew [1 ]
Affiliations
[1] Yonsei University, School of Electrical and Electronic Engineering, College of Engineering, Seoul 03722, Republic of Korea
[2] Institute for Basic Science, Data Science Group, Center for Mathematical and Computational Sciences, Daejeon 34126, Republic of Korea
Keywords
Biological system modeling - Biometric (access control) - Channel-wise self-attention - Correlation - Face - Feature extraction - Inter-modal matching - Intra-modal matching - Modal matching - Modality alignment loss - Periocular - Self-supervised learning
DOI
Not available
Abstract
Periocular and face are two common biometric modalities for identity management. Recently, the emergence of conditional biometrics has enabled the correlation between face and periocular to be exploited to enhance each modality's performance, a setting we coin intra-modal matching in this paper. However, each modality has limitations, particularly when the subject wears sunglasses (hiding the periocular region) or a helmet (occluding the face). A biometric system empowered with inter-modal matching capability between periocular and face is essential to mitigate these challenges. This paper presents a novel reciprocal learning model that utilizes periocular and face conditioning to facilitate flexible intra-modal and inter-modal matching. To address the intra-modal matching challenge, we devise a lightweight Gated Convolutional Channel-wise Self-Attention Network that selectively attends to salient features shared by the periocular and face modalities. To bridge the modality gap without sacrificing intra-modal matching performance, we propose a modality- and augmentation-aware contrastive loss that incorporates semi-supervised positive sampling and alignment-specific logit rescaling. Extensive identification and verification experiments on five face-periocular datasets under the open-set protocol attest to the efficacy of the proposed methods. Code is publicly available at https://github.com/tiongsikng/gc2sa_net. © 2005-2012 IEEE.
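The channel-wise self-attention idea named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' GC²SA-Net implementation (which also involves gating and convolutional feature extraction); the projection matrices `w_q`, `w_k`, `w_v` and the tensor shapes below are assumptions for illustration only. The key difference from spatial self-attention is that the affinity matrix is computed between channels, so each output channel becomes a weighted mixture of all input channels.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(feats, w_q, w_k, w_v):
    """Toy channel-wise self-attention over a (C, N) feature map,
    where C is the number of channels and N the number of spatial
    positions. Attention weights form a (C, C) channel-affinity
    matrix rather than an (N, N) spatial one."""
    q = w_q @ feats                                  # (C, N)
    k = w_k @ feats                                  # (C, N)
    v = w_v @ feats                                  # (C, N)
    affinity = q @ k.T / np.sqrt(feats.shape[1])     # (C, C)
    attn = softmax(affinity, axis=-1)                # rows sum to 1
    return attn @ v                                  # (C, N)

# Example: 8 channels, 16 spatial positions, identity projections.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))
out = channel_self_attention(feats, np.eye(8), np.eye(8), np.eye(8))
print(out.shape)  # (8, 16): shape preserved, channels re-mixed
```

Each row of the attention matrix is a convex combination over channels, which is what lets the network emphasize channels carrying features shared by both modalities.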
Pages: 3251-3264
Related Papers
50 records
  • [31] A hierarchical self-attentive neural extractive summarizer via reinforcement learning (HSASRL)
    Farida Mohsen
    Jiayang Wang
    Kamal Al-Sabahi
    Applied Intelligence, 2020, 50: 2633-2646
  • [32] Multivariate Sleep Stage Classification using Hybrid Self-Attentive Deep Learning Networks
    Yuan, Ye
    Jia, Kebin
    Ma, Fenglong
    Xun, Guangxu
    Wang, Yaqing
    Su, Lu
    Zhang, Aidong
    PROCEEDINGS 2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2018: 963-968
  • [33] Learning Granger Causality from Instance-wise Self-attentive Hawkes Processes
    Wu, Dongxia
    Ide, Tsuyoshi
    Lozano, Aurelie
    Kollias, Georgios
    Navratil, Jiri
    Abe, Naoki
    Ma, Yi-An
    Yu, Rose
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [34] SAFE: Sequential Attentive Face Embedding with Contrastive Learning for Deepfake Video Detection
    Jung, Juho
    Kang, Chaewon
    Yoon, Jeewoo
    Woo, Simon S.
    Han, Jinyoung
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023: 3993-3997
  • [35] Learning Dynamic Graph Embedding for Traffic Flow Forecasting: A Graph Self-Attentive Method
    Kang, Zifeng
    Xu, Hanwen
    Hu, Jianming
    Pei, Xin
    2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019: 2570-2576
  • [36] SALADNET: SELF-ATTENTIVE MULTISOURCE LOCALIZATION IN THE AMBISONICS DOMAIN
    Grumiaux, Pierre-Amaury
    Kitic, Srdan
    Srivastava, Prerak
    Girin, Laurent
    Guerin, Alexandre
    2021 IEEE WORKSHOP ON APPLICATIONS OF SIGNAL PROCESSING TO AUDIO AND ACOUSTICS (WASPAA), 2021: 336-340
  • [37] Self-Attentive Moving Average for Time Series Prediction
    Su, Yaxi
    Cui, Chaoran
    Qu, Hao
    APPLIED SCIENCES-BASEL, 2022, 12(07)
  • [38] Energy-based Self-attentive Learning of Abstractive Communities for Spoken Language Understanding
    Shang, Guokan
    Tixier, Antoine J-P
    Vazirgiannis, Michalis
    Lorre, Jean-Pierre
    1ST CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 10TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (AACL-IJCNLP 2020), 2020: 313-327
  • [39] Learning Transferable Self-Attentive Representations for Action Recognition in Untrimmed Videos with Weak Supervision
    Zhang, Xiao-Yu
    Shi, Haichao
    Li, Changsheng
    Zheng, Kai
    Zhu, Xiaobin
    Duan, Lixin
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019: 9227-9234
  • [40] Graph convolutional network and self-attentive for sequential recommendation
    Guo, Kaifeng
    Zeng, Guolei
    PEERJ COMPUTER SCIENCE, 2023, 9