Human Pose Estimation Based on Lightweight Multi-Scale Coordinate Attention

Cited by: 2
|
Authors
Li, Xin [1 ]
Guo, Yuxin [1 ]
Pan, Weiguo [1 ]
Liu, Hongzhe [1 ]
Xu, Bingxin [1 ]
Affiliations
[1] Beijing Union Univ, Beijing Key Lab Informat Serv Engn, Beijing 100101, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023 / Vol. 13 / Issue 06
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
human pose estimation; attention mechanism; multi-scale feature extraction; NETWORK;
DOI
10.3390/app13063614
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline Code
0703 ;
Abstract
Traditional heatmap-based approaches to human pose estimation usually suffer from drawbacks such as high network complexity or suboptimal accuracy. Focusing on heatmap-free multi-person pose estimation, this paper proposes an end-to-end, lightweight human pose estimation network that adds a multi-scale coordinate attention mechanism to the YOLO-Pose network, improving overall performance while keeping the network lightweight. Specifically, the lightweight GhostNet was first integrated into the backbone to alleviate model redundancy and produce a large number of effective feature maps. Then, the coordinate attention mechanism was incorporated to enhance the network's sensitivity to direction and location. Finally, the BiFPN module was fused in to balance feature information across scales and further improve the expressive ability of the convolutional features. Experiments on the COCO 2017 dataset showed that, compared with the baseline YOLO-Pose, the proposed network improved average precision on the COCO 2017 validation set by 4.8% while reducing the number of network parameters and computations. The experimental results demonstrate that the proposed method improves the detection accuracy of human pose estimation while keeping the model lightweight.
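The coordinate attention mechanism the abstract builds on (Hou et al., 2021) factorizes channel attention into two direction-aware pooling steps, so the resulting attention maps encode position along height and width separately. A minimal NumPy sketch of a single forward pass might look like the following; the weights here are random placeholders standing in for the 1x1 convolutions the network would actually learn, and the function names are illustrative, not the paper's own code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, r=8, seed=0):
    """Sketch of a coordinate attention forward pass on one feature
    map x of shape (C, H, W). 1x1 convolutions reduce to matrix
    multiplies; weights are random stand-ins for learned parameters."""
    rng = np.random.default_rng(seed)
    C, H, W = x.shape
    Cr = max(C // r, 1)  # reduced channel dimension

    # Directional pooling: average over width and over height.
    z_h = x.mean(axis=2)  # (C, H) -- encodes vertical position
    z_w = x.mean(axis=1)  # (C, W) -- encodes horizontal position

    # Shared 1x1 conv + ReLU on the concatenated descriptor.
    W1 = rng.standard_normal((Cr, C)) * 0.1
    f = np.maximum(W1 @ np.concatenate([z_h, z_w], axis=1), 0)  # (Cr, H+W)

    # Split back into two per-direction attention maps.
    W_h = rng.standard_normal((C, Cr)) * 0.1
    W_w = rng.standard_normal((C, Cr)) * 0.1
    a_h = sigmoid(W_h @ f[:, :H])  # (C, H), values in (0, 1)
    a_w = sigmoid(W_w @ f[:, H:])  # (C, W), values in (0, 1)

    # Re-weight: y[c, i, j] = x[c, i, j] * a_h[c, i] * a_w[c, j]
    return x * a_h[:, :, None] * a_w[:, None, :]
```

Because each position is re-weighted by the product of two sigmoid-bounded factors, the output never exceeds the input in magnitude; the mechanism can only attenuate less informative locations.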
Pages: 18
Related Papers
50 records in total
  • [1] Multi-scale Attention Aided Multi-Resolution Network for Human Pose Estimation
    Selvam, Srinika
    Mishra, Deepak
    [J]. PATTERN RECOGNITION AND MACHINE INTELLIGENCE, PREMI 2019, PT I, 2019, 11941 : 461 - 472
  • [2] Lightweight head pose estimation without keypoints based on multi-scale lightweight neural network
    Chen, Xiaolei
    Lu, Yubing
    Cao, Baoning
    Lin, Dongmei
    Ahmad, Ishfaq
    [J]. VISUAL COMPUTER, 2023, 39 (06): : 2455 - 2469
  • [3] A lightweight pose estimation network with multi-scale receptive field
    Li, Shuo
    Dai, Ju
    Chen, Zhangmeng
    Pan, Junjun
    [J]. VISUAL COMPUTER, 2023, 39 (08): : 3429 - 3440
  • [4] Multi-Scale Collaborative Network for Human Pose Estimation
    Guo, Chunsheng
    Zhou, Jialuo
    Du, Wenlong
    Zhang, Xuguang
    [J]. INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS, 2019, 16 (04)
  • [5] Multi-Scale Contrastive Learning for Human Pose Estimation
    Bao, Wenxia
    Lin, An
    Huang, Hua
    Yang, Xianjun
    Chen, Hemu
    [J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2024, E107.D (10) : 1332 - 1341
  • [6] MULTI-SCALE SUPERVISED NETWORK FOR HUMAN POSE ESTIMATION
    Ke, Lipeng
    Chang, Ming-Ching
    Qi, Honggang
    Lyu, Siwei
    [J]. 2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 564 - 568
  • [7] Lightweight 2D Human Pose Estimation Based on Joint Channel Coordinate Attention Mechanism
    Li, Zuhe

    Xue, Mengze
    Cui, Yuhao
    Liu, Boyi
    Fu, Ruochong
    Chen, Haoran
    Ju, Fujiao
    [J]. ELECTRONICS, 2024, 13 (01)
  • [8] Enhancement and optimisation of human pose estimation with multi-scale spatial attention and adversarial data augmentation
    Zhang, Tong
    Li, Qilin
    Wen, Jingtao
    Chen, C. L. Philip
    [J]. INFORMATION FUSION, 2024, 111