Multi-view Based Gabor Features Fusion for Iris Recognition

Cited by: 0
Authors
Jiang, Liang [1 ]
Zeng, Shan [1 ]
Kang, Zhen [1 ]
Zeng, Sen [2 ]
Affiliations
[1] College of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan 430023, China
[2] College of Economics and Management, Wuhan Polytechnic University, Wuhan 430023, China
Keywords
Access authentication - Biometric applications - Feature fusion - Inter-class distance - Iris recognition - Nonlinear optimization problems - Particle swarm optimization algorithm - Spatial information
DOI
10.3966/199115992019083004009
Abstract
The iris is one of the most reliable biometric traits because of its uniqueness and stability, and it therefore plays an important role in many biometric applications such as access authentication. While existing Gabor features have achieved great success in iris recognition methods, they are not specifically designed to characterize iris patterns visually. In this paper, we purposely design six Gabor filters for feature extraction after analyzing the texture and spatial information of the iris. However, these features may contain redundant information and some ineffective components, which interfere with the matching process. To fuse the Gabor features obtained from these filters, we propose a weighted multi-view feature fusion algorithm that minimizes intra-class distance while maximizing inter-class distance. The Particle Swarm Optimization (PSO) algorithm is used to solve this nonlinear optimization problem. Experimental results on the popular benchmark dataset CASIA-IrisV4-Thousand demonstrate that the proposed method, which combines the novel Gabor features with the multi-view fusion algorithm, outperforms other Gabor-feature-based methods. © 2019 Computer Society of the Republic of China. All rights reserved.
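The abstract describes the pipeline only at a high level; the short sketch below (not the authors' code) illustrates one way the pieces could fit together: a six-filter Gabor bank providing one feature "view" per orientation, a fitness function defined as the ratio of mean intra-class to mean inter-class distance of the weighted fused features, and a bare-bones PSO search over the fusion weights. All filter parameters, the exact form of the fitness function, and the PSO constants are assumptions made for illustration.

```python
# Rough sketch of weighted multi-view Gabor fusion with PSO (assumptions noted above).
import numpy as np
from scipy.signal import fftconvolve


def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel with orientation `theta` (parameters are illustrative)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)


def gabor_views(image, n_filters=6):
    """One feature vector per Gabor filter, i.e. one 'view' per orientation."""
    views = []
    for k in range(n_filters):
        kern = gabor_kernel(theta=k * np.pi / n_filters)
        resp = fftconvolve(image, kern, mode="same")
        views.append(np.abs(resp).ravel())  # magnitude response used as the view feature
    return views


def fitness(weights, views_per_sample, labels):
    """Mean intra-class / mean inter-class distance of the weighted fused features (lower is better)."""
    w = np.abs(weights) / (np.abs(weights).sum() + 1e-12)
    fused = np.stack([np.concatenate([wi * v for wi, v in zip(w, views)])
                      for views in views_per_sample])
    intra, inter = [], []
    for i in range(len(fused)):
        for j in range(i + 1, len(fused)):
            d = np.linalg.norm(fused[i] - fused[j])
            (intra if labels[i] == labels[j] else inter).append(d)
    return np.mean(intra) / (np.mean(inter) + 1e-12)


def pso(objective, dim, n_particles=20, iters=50, seed=0):
    """Bare-bones PSO over [0, 1]^dim; inertia/cognitive/social constants are assumptions."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest


if __name__ == "__main__":
    # Toy data standing in for normalized iris images: 2 classes, 4 samples each.
    rng = np.random.default_rng(1)
    templates = [rng.random((32, 32)) for _ in range(2)]
    images, labels = [], []
    for c, t in enumerate(templates):
        for _ in range(4):
            images.append(t + 0.1 * rng.standard_normal(t.shape))
            labels.append(c)
    views_per_sample = [gabor_views(img) for img in images]
    weights = pso(lambda w: fitness(w, views_per_sample, labels), dim=6)
    print("learned fusion weights:", np.round(weights / weights.sum(), 3))
```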
Pages: 106-112
Related Papers
50 records in total
  • [21] Human activity recognition algorithm in video sequences based on the fusion of multiple features for realistic and multi-view environment
    Arati Kushwaha
    Ashish Khare
    Om Prakash
    Multimedia Tools and Applications, 2024, 83 : 22727 - 22748
  • [22] Half Iris Gabor Based Iris Recognition
    Ali, Musab A. M.
    Tahir, Nooritawati Md
    2014 IEEE 10TH INTERNATIONAL COLLOQUIUM ON SIGNAL PROCESSING & ITS APPLICATIONS (CSPA 2014), 2014, : 282 - 287
  • [23] Automatic Multi-view Action Recognition with Robust Features
    Chou, Kuang-Pen
    Prasad, Mukesh
    Li, Dong-Lin
    Bharill, Neha
    Lin, Yu-Feng
    Hussain, Farookh
    Lin, Chin-Teng
    Lin, Wen-Chieh
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT III, 2017, 10636 : 554 - 563
  • [24] Multi-View Learning of Acoustic Features for Speaker Recognition
    Livescu, Karen
    Stoehr, Mark
    2009 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION & UNDERSTANDING (ASRU 2009), 2009, : 82 - +
  • [25] Multi-View gait recognition method based on dynamic and static feature fusion
    Zhang Weihu
    Zhang Meng
    Wei Fan
    2018 INTERNATIONAL CONFERENCE ON SENSOR NETWORKS AND SIGNAL PROCESSING (SNSP 2018), 2018, : 287 - 291
  • [26] Cattle Facial Matching Recognition Algorithm Based on Multi-View Feature Fusion
    Weng, Zhi
    Liu, Shaoqing
    Zheng, Zhiqiang
    Zhang, Yong
    Gong, Caili
    ELECTRONICS, 2023, 12 (01)
  • [27] Optimization of Human Posture Recognition Based on Multi-view Skeleton Data Fusion
    Xu, Yahong
    Wei, Shoulin
    Yin, Jibin
    INFORMATION TECHNOLOGY AND CONTROL, 2024, 53 (02): : 542 - 553
  • [28] DVANet: Disentangling View and Action Features for Multi-View Action Recognition
    Siddiqui, Nyle
    Tirupattur, Praveen
    Shah, Mubarak
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024, : 4873 - 4881
  • [29] Multi-View and Multi-Modal Action Recognition with Learned Fusion
    Ardianto, Sandy
    Hang, Hsueh-Ming
    2018 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2018, : 1601 - 1604
  • [30] Multi-view recognition of fruit packing boxes based on features clustering angle
    李鑫宁
    Wu Hu
    Yang Xianhai
    High Technology Letters, 2021, 27 (02) : 200 - 209