Periocular Recognition in the Wild: Implementation of RGB-OCLBCP Dual-Stream CNN

Cited by: 16
Authors
Tiong, Leslie Ching Ow [1 ]
Lee, Yunli [2 ]
Teoh, Andrew Beng Jin [3 ]
Affiliations
[1] Korea Inst Sci & Technol, Computat Sci Res Ctr, Bldg L0243 14 Gil,5 Hwarangro, Seoul 02792, South Korea
[2] Sunway Univ, Sch Sci & Technol, 5 Jalan Univ, Petaling Jaya 47500, Selangor, Malaysia
[3] Yonsei Univ, Sch Elect & Elect Engn, 50 Yonsei Ro, Seoul 03722, South Korea
Source
APPLIED SCIENCES-BASEL, 2019, Vol. 9, Issue 13
Funding
National Research Foundation of Singapore;
Keywords
periocular recognition in the wild; convolutional neural network; colour-based local binary coded pattern; BIOMETRICS; FUSION;
DOI
10.3390/app9132709
Chinese Library Classification: O6 [Chemistry]
Subject Classification Code: 0703
Abstract
Periocular recognition remains challenging for deployment in unconstrained environments. This paper therefore proposes an RGB-OCLBCP dual-stream convolutional neural network, which accepts an RGB ocular image and a colour-based texture descriptor, namely the Orthogonal Combination-Local Binary Coded Pattern (OCLBCP), for periocular recognition in the wild. The proposed network aggregates the RGB image and the OCLBCP descriptor through two distinct late-fusion layers. We demonstrate that the network benefits from both the RGB image and the OCLBCP descriptor, achieving better recognition performance. A new database, namely the Ethnic-ocular database of periocular images in the wild, is introduced and shared for benchmarking. In addition, three publicly accessible databases, namely AR, CASIA-iris distance and UBIPr, were used to evaluate the proposed network. When compared against several competing networks on these databases, the proposed network achieved better performance in both recognition and verification tasks.
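The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch of a dual-stream network with two distinct late-fusion layers, not the authors' published implementation. All layer widths, the choice of concatenation and element-wise addition as the two fusion operations, the name DualStreamNet, and the treatment of the OCLBCP descriptor as a precomputed 3-channel input map are illustrative assumptions.

```python
# Sketch of a dual-stream CNN with two late-fusion layers, loosely following
# the abstract. Layer sizes and fusion choices are assumptions, not the paper's.
import torch
import torch.nn as nn

def conv_stream(in_channels: int) -> nn.Sequential:
    """One convolutional stream; the RGB and OCLBCP streams share this topology."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
        nn.Flatten(),             # -> (B, 64)
    )

class DualStreamNet(nn.Module):
    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.rgb_stream = conv_stream(in_channels=3)     # RGB ocular image
        self.oclbcp_stream = conv_stream(in_channels=3)  # precomputed OCLBCP map
        # Two distinct late-fusion layers, as the abstract describes; which two
        # operations the paper uses is not stated here, so we assume one
        # concatenation-based and one addition-based fusion.
        self.fuse_concat = nn.Linear(64 + 64, 128)
        self.fuse_add = nn.Linear(64, 128)
        self.classifier = nn.Linear(128 + 128, num_classes)

    def forward(self, rgb: torch.Tensor, oclbcp: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_stream(rgb)
        f_oclbcp = self.oclbcp_stream(oclbcp)
        fused_cat = torch.relu(self.fuse_concat(torch.cat([f_rgb, f_oclbcp], dim=1)))
        fused_add = torch.relu(self.fuse_add(f_rgb + f_oclbcp))
        return self.classifier(torch.cat([fused_cat, fused_add], dim=1))

# Example forward pass with random tensors standing in for real inputs.
net = DualStreamNet(num_classes=10)
rgb = torch.randn(4, 3, 64, 64)
oclbcp = torch.randn(4, 3, 64, 64)
logits = net(rgb, oclbcp)
print(logits.shape)  # torch.Size([4, 10])
```

The late-fusion design keeps the two modalities in separate feature extractors until the classifier, which matches the abstract's claim that the network benefits from the RGB image and the OCLBCP descriptor jointly rather than from either alone.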
Pages: 17
Related Papers
50 items in total
  • [1] Periocular Recognition in the Wild with Orthogonal Combination of Local Binary Coded Pattern in Dual-stream Convolutional Neural Network
    Tiong, Leslie Ching Ow
    Teoh, Andrew Beng Jin
    Lee, Yunli
    2019 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB), 2019,
  • [2] Surface electromyography based gesture recognition based on dual-stream CNN
    Wei W.
    Li Y.
Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2022, 28 (01) : 124 - 131
  • [3] Dual-stream Network for Visual Recognition
    Mao, Mingyuan
    Gao, Peng
    Zhang, Renrui
    Zheng, Honghui
    Ma, Teli
    Peng, Yan
    Ding, Errui
    Zhang, Baochang
    Han, Shumin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] A Dual-Stream CNN-BiLSTM for Human Motion Recognition With Raw Radar Data
    Gong, Shufeng
    Yan, Xinyue
    Fang, Yiming
    Paul, Agyemang
    Wu, Zhefu
    Chen, Jie
    IEEE SENSORS JOURNAL, 2024, 24 (15) : 25094 - 25105
  • [5] Dual-stream cross-modality fusion transformer for RGB-D action recognition
    Liu, Zhen
    Cheng, Jun
    Liu, Libo
    Ren, Ziliang
    Zhang, Qieshi
    Song, Chengqun
    KNOWLEDGE-BASED SYSTEMS, 2022, 255
  • [6] DUAL-STREAM CNN FOR STRUCTURED TIME SERIES CLASSIFICATION
    Weng, Shuchen
    Li, Wenbo
    Zhang, Yi
    Lyu, Siwei
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3187 - 3191
  • [7] An efficient speech emotion recognition based on a dual-stream CNN-transformer fusion network
    Tellai M.
    Gao L.
    Mao Q.
    International Journal of Speech Technology, 2023, 26 (02) : 541 - 557
  • [8] A Dual-Stream Fusion CNN-LSTM Gesture Recognition Algorithm for Command and Control Systems
    Han, Shengjie
    Li, Guang
    Ding, Xiaotong
    Lu, Meng
    PROCEEDINGS OF 2023 7TH INTERNATIONAL CONFERENCE ON ELECTRONIC INFORMATION TECHNOLOGY AND COMPUTER ENGINEERING, EITCE 2023, 2023, : 926 - 931
  • [9] Feature Fusion for Dual-Stream Cooperative Action Recognition
    Chen, Dong
    Wu, Mengtao
    Zhang, Tao
    Li, Chuanqi
    IEEE ACCESS, 2023, 11 : 116732 - 116740
  • [10] Dual-stream encoded fusion saliency detection based on RGB and grayscale images
    Xu, Tao
    Zhao, Weishuo
    Chai, Haojie
    Cai, Lei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (30) : 47327 - 47346