Self-Attention Network for Human Pose Estimation

Cited by: 2
Authors
Xia, Hailun [1 ,2 ,3 ]
Zhang, Tianyang [1 ,2 ,3 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing Key Lab Network Syst Architecture & Conve, Beijing 100876, Peoples R China
[2] Beijing Univ Posts & Telecommun, Beijing Lab Adv Informat Networks, Beijing 100876, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Informat & Commun Engn, Beijing 100876, Peoples R China
Source
APPLIED SCIENCES-BASEL, 2021, Vol. 11, No. 4
Funding
National Natural Science Foundation of China
Keywords
human pose estimation; self-attention network; joint learning framework; local and nonlocal consistencies; end-to-end training;
DOI
10.3390/app11041826
CLC Number
O6 [Chemistry]
Discipline Code
0703
Abstract
Estimating the positions of human joints from single monocular RGB images has remained a challenging task in recent years. Despite great progress in human pose estimation with convolutional neural networks (CNNs), a central problem remains: relationships and constraints, such as the symmetry of human body structures, are not well exploited by previous CNN-based methods. Given the effectiveness of combining local and nonlocal consistencies, we propose an end-to-end self-attention network (SAN) to alleviate this issue. In the SAN, attention-driven, long-range dependency modeling between joints compensates for local content and mines details from all feature locations. To make the SAN applicable to both 2D and 3D pose estimation, we also design a compatible, effective, and general joint learning framework that mixes the use of 2D and 3D data. We evaluate the proposed network on challenging benchmark datasets. The experimental results show that our method achieves competitive results on the Human3.6M, MPII, and COCO datasets.
Pages: 1-15
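
To make the mechanism described in the abstract concrete, below is a minimal sketch in PyTorch of the kind of spatial self-attention block the paper points to: every location of a CNN feature map attends to all other locations, adding nonlocal context to local features. This is an illustrative reconstruction in the style of non-local/SAGAN-type attention, not the authors' exact SAN; the module name, hyperparameters, and tensor shapes below are assumptions.

```python
# Illustrative sketch only (assumption: not the authors' exact SAN).
# Self-attention over all spatial positions of a CNN feature map, so that
# joint features can draw on long-range (nonlocal) context.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialSelfAttention(nn.Module):
    """Attention across all H*W positions of a feature map; hypothetical
    module in the style of non-local / SAGAN blocks."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 convolutions project features to query/key/value spaces.
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable gate initialized to zero, so the block starts as identity.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//r)
        k = self.key(x).flatten(2)                    # (b, c//r, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        # Scaled dot-product attention between all pairs of positions.
        attn = F.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)  # (b, h*w, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        # Residual connection: local CNN features plus gated nonlocal context.
        return x + self.gamma * out


# Usage sketch: refine backbone features before heatmap regression
# (shapes are arbitrary for illustration).
feats = torch.randn(2, 256, 32, 24)        # (batch, channels, H, W)
refined = SpatialSelfAttention(256)(feats)
assert refined.shape == feats.shape
```

Since the gate starts at zero, such a block initially passes local CNN features through unchanged and gradually learns how much nonlocal evidence to mix in, which matches the abstract's idea of compensating local content with details mined from all feature locations.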