IMPROVING HUMAN POSE ESTIMATION WITH SELF-ATTENTION GENERATIVE ADVERSARIAL NETWORKS

Cited by: 14

Authors
Cao, Zhongzheng [1 ]
Wang, Rui [1 ]
Wang, Xiangyang [1 ]
Liu, Zhi [1 ]
Zhu, Xiaoqiang [1 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Human Pose Estimation; Convolutional Neural Networks; Stacked Hourglass Networks; Self-Attention GAN;
DOI
10.1109/ICMEW.2019.00103
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology]
Discipline code
0812
Abstract
Human pose estimation in images is challenging and important for many computer vision applications. Large improvements have been achieved with the development of convolutional neural networks. However, in difficult cases even state-of-the-art models may still fail to predict all body joints correctly. Some recent works attempt to refine the pose estimator. GANs (Generative Adversarial Networks) have proved effective at learning local structural constraints among body joints. In this paper, we propose applying a Self-Attention GAN to further improve the performance of human pose estimation. With an attention mechanism in the discriminator, the model learns long-range dependencies among body joints and thereby enforces whole-body structural constraints, making all predicted joints mutually consistent. Experiments on two standard benchmarks demonstrate the effectiveness of our method.
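The attention mechanism the abstract describes follows the SAGAN recipe: project features into queries, keys, and values, compute an all-pairs attention map, and add the attended output back through a learnable residual weight. A minimal NumPy sketch of that computation, with illustrative shapes and parameter names that are not taken from the authors' code:

```python
import numpy as np

def self_attention(x, Wf, Wg, Wh, gamma=0.1):
    """SAGAN-style self-attention over N feature locations.

    x       : (N, C) features, one row per spatial location
    Wf, Wg  : (C, K) query/key projections (K < C in SAGAN)
    Wh      : (C, C) value projection
    gamma   : residual weight; SAGAN initializes it at 0 and learns it

    Every location attends to every other, which is how the
    discriminator can capture long-range joint dependencies.
    """
    f = x @ Wf                                   # queries (N, K)
    g = x @ Wg                                   # keys    (N, K)
    h = x @ Wh                                   # values  (N, C)
    scores = f @ g.T                             # (N, N) pairwise compatibility
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return x + gamma * (attn @ h)                # residual connection

rng = np.random.default_rng(0)
N, C, K = 6, 8, 2
x = rng.standard_normal((N, C))
Wf = rng.standard_normal((C, K))
Wg = rng.standard_normal((C, K))
Wh = rng.standard_normal((C, C))
out = self_attention(x, Wf, Wg, Wh)
```

Because the attended output enters through `gamma`, the layer starts as (near-)identity and the network can gradually learn how much long-range structure to mix in.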
Pages: 567-572 (6 pages)