An attentional residual feature fusion mechanism for sheep face recognition

Cited by: 1
Authors
Pang, Yue [1 ]
Yu, Wenbo [1 ]
Zhang, Yongan [2 ]
Xuan, Chuanzhong [1 ]
Wu, Pei [1 ]
Affiliations
[1] Inner Mongolia Agr Univ, Coll Mech & Elect Engn, Hohhot 010018, Peoples R China
[2] Inner Mongolia Agr Univ, Coll Comp & Informat Engn, Hohhot 010018, Peoples R China
Source
SCIENTIFIC REPORTS | 2023 / Vol. 13 / Issue 01
Keywords
IDENTIFICATION; CATTLE;
DOI
10.1038/s41598-023-43580-2
Chinese Library Classification (CLC) codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline classification codes
07; 0710; 09
Abstract
In the era of globalized and digitized livestock markets, sheep are an essential source of food production worldwide. However, monitoring sheep behavior, preventing disease, and managing flocks precisely remain urgent challenges in the development of smart ranches. Individual identification of sheep has therefore become an increasingly viable solution. Although traditional methods of individual sheep identification enable accurate tracking and record-keeping, they are labor-intensive and inefficient. Moreover, popular convolutional neural networks (CNNs) struggle to extract task-specific features, which further complicates the problem. To overcome these limitations, an Attention Residual Module (ARM) is proposed to aggregate the feature maps from different layers of a CNN, making a general-purpose CNN more adaptable to task-specific feature extraction. In addition, a targeted sheep face recognition dataset containing 4490 images of 38 individual sheep was constructed, and the experimental data were expanded using image augmentation techniques such as rotation and translation. The experimental results indicate that adding the ARM improved the accuracy of the VGG16, GoogLeNet, and ResNet50 networks by 10.2%, 6.65%, and 4.38%, respectively, compared with the same networks without the ARM. The proposed method is therefore shown to be effective for specific sheep face recognition tasks.
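The abstract describes the ARM as an attention-weighted, residual fusion of feature maps taken from different CNN layers. The following is a minimal, hypothetical PyTorch sketch of such an attentional residual fusion block; the specific layer choices (a 1x1 projection, SE-style channel attention, bilinear resizing) and all names are illustrative assumptions, not the authors' published module.

# Illustrative sketch only: a residual feature-fusion block with channel
# attention, in the spirit of the ARM described in the abstract. Structure
# and hyperparameters are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionResidualFusion(nn.Module):
    """Fuse a shallow and a deep CNN feature map with channel attention,
    then add the attended result back to the deep features (residual path)."""

    def __init__(self, shallow_channels: int, deep_channels: int, reduction: int = 16):
        super().__init__()
        # 1x1 projection so the shallow map matches the deep map's channel count.
        self.project = nn.Conv2d(shallow_channels, deep_channels, kernel_size=1)
        # SE-style channel attention computed on the fused features.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(deep_channels, deep_channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(deep_channels // reduction, deep_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Match channels, then resize the shallow map to the deep map's spatial size.
        shallow = self.project(shallow)
        shallow = F.interpolate(
            shallow, size=deep.shape[-2:], mode="bilinear", align_corners=False
        )
        fused = shallow + deep
        weights = self.attention(fused)   # per-channel weights in (0, 1)
        return deep + weights * fused     # attended fusion added residually


if __name__ == "__main__":
    # Example: fuse a 56x56x256 shallow map with a 14x14x1024 deep map
    # (sizes chosen only to mimic typical ResNet50 stage outputs).
    block = AttentionResidualFusion(shallow_channels=256, deep_channels=1024)
    shallow = torch.randn(1, 256, 56, 56)
    deep = torch.randn(1, 1024, 14, 14)
    print(block(shallow, deep).shape)  # torch.Size([1, 1024, 14, 14])

Such a block could, in principle, be inserted between stages of VGG16, GoogLeNet, or ResNet50 as the abstract reports, but the placement shown here is only an assumed usage.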
Pages: 11
Related Papers
50 records in total
  • [1] An attentional residual feature fusion mechanism for sheep face recognition
    Yue Pang
    Wenbo Yu
    Yongan Zhang
    Chuanzhong Xuan
    Pei Wu
    [J]. Scientific Reports, 2023, 13 (1)
  • [2] Sheep Face Recognition Model Based on Deep Learning and Bilinear Feature Fusion
    Wan, Zhuang
    Tian, Fang
    Zhang, Cheng
    [J]. ANIMALS, 2023, 13 (12)
  • [3] Face Recognition Based on Feature Fusion
    Qian, Zhi-Ming
    Qin, Haifei
    Liu, Xiaoqing
    Zhao, Yongchao
    [J]. PROCEEDINGS OF THE 2015 2ND INTERNATIONAL CONFERENCE ON ELECTRICAL, COMPUTER ENGINEERING AND ELECTRONICS (ICECEE 2015), 2015, 24 : 863 - 866
  • [4] Multiple Feature Fusion for Face Recognition
    Kong, Shu
    Wang, Xikui
    Wang, Donghui
    Wu, Fei
    [J]. 2013 10TH IEEE INTERNATIONAL CONFERENCE AND WORKSHOPS ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG), 2013
  • [5] Fusion facial semantic feature and incremental learning mechanism for efficient face recognition
    Zhong, Rui
    Wu, Huaiyu
    Chen, Zhihuan
    Zhong, Qi
    [J]. SOFT COMPUTING, 2021, 25 (14) : 9347 - 9363
  • [6] Fusion facial semantic feature and incremental learning mechanism for efficient face recognition
    Rui Zhong
    Huaiyu Wu
    Zhihuan Chen
    Qi Zhong
    [J]. Soft Computing, 2021, 25 : 9347 - 9363
  • [7] A face recognition algorithm based on feature fusion
    Zhang, Jiwei
    Yan, Xiaodan
    Cheng, Zelei
    Shen, Xueqi
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (14)
  • [8] A Face Recognition Method for Sports Video Based on Feature Fusion and Residual Recurrent Neural Network
    Yan, Xu
    [J]. Informatica (Slovenia), 2024, 48 (12): 137 - 152
  • [9] Attentional Feature Fusion
    Dai, Yimian
    Gieseke, Fabian
    Oehmcke, Stefan
    Wu, Yiquan
    Barnard, Kobus
    [J]. 2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WACV 2021, 2021, : 3559 - 3568
  • [10] Feature fusion with covariance matrix regularization in face recognition
    Lu, Ze
    Jiang, Xudong
    Kot, Alex
    [J]. SIGNAL PROCESSING, 2018, 144 : 296 - 305