OaIF: Occlusion-Aware Implicit Function for Clothed Human Re-construction

Cited: 0
Authors
Tan, Yudi [1 ]
Guan, Boliang [2 ]
Zhou, Fan [1 ]
Su, Zhuo [1 ]
Affiliations
[1] Sun Yat Sen Univ, Natl Engn Res Ctr Digital Life, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Foshan Univ, Sch Elect & Informat Engn, Foshan, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
modelling; image-based modelling; implicit surfaces;
DOI
10.1111/cgf.14798
CLC Number
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Clothed human reconstruction from a monocular image is challenging due to occlusion, depth ambiguity and variation in body pose. Recently, shape representations based on implicit functions have proven better suited to the complex topology of clothed humans than explicit representations such as meshes and voxels. This is mainly achieved through pixel-aligned features, which allow the implicit function to capture local detail. However, such methods sample local features for all query points from an identical feature map, making the encoder occlusion-agnostic; the decoder, as an implicit function, only maps features and does not model occlusion explicitly. These methods therefore generalize poorly to poses with severe self-occlusion. To address this, we present OaIF, which encodes local features conditioned on the visibility of SMPL vertices. OaIF projects SMPL vertices onto the image plane to obtain image features masked by visibility. Vertex features, integrated with the geometric information of the mesh, are then fed into a GAT network for joint encoding. We query hybrid features and occlusion factors for sample points through cross-attention and learn occupancy fields for the clothed human. Experiments demonstrate that OaIF achieves more robust and accurate reconstruction than the state of the art on both public datasets and in-the-wild images.
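The key step the abstract describes, projecting SMPL vertices onto the image plane and masking their pixel-aligned features by visibility, can be sketched as follows. This is a minimal illustration under assumed simplifications (a weak-perspective camera, per-pixel z-buffer visibility, and a hypothetical `project_and_mask` helper), not the paper's actual implementation:

```python
import numpy as np

def project_and_mask(vertices, feat_map, scale=1.0, trans=(0.0, 0.0)):
    """Project 3-D vertices onto a pixel-aligned feature map with a
    weak-perspective camera and zero out features of occluded vertices.

    vertices : (N, 3) camera-space coordinates, x/y roughly in [-1, 1],
               z increasing away from the camera.
    feat_map : (H, W, C) pixel-aligned image feature map.
    Returns (N, C) visibility-masked vertex features and the (N,) mask.
    """
    H, W, _ = feat_map.shape
    # Weak-perspective projection to normalized image coordinates.
    xy = scale * vertices[:, :2] + np.asarray(trans)
    # Map [-1, 1] to integer pixel indices.
    px = np.clip(((xy[:, 0] + 1) / 2 * (W - 1)).round().astype(int), 0, W - 1)
    py = np.clip(((xy[:, 1] + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
    # Crude z-buffer: a vertex is visible only if no other vertex
    # projecting to the same pixel lies closer to the camera.
    z = vertices[:, 2]
    zbuf = np.full((H, W), np.inf)
    for i in range(len(vertices)):
        zbuf[py[i], px[i]] = min(zbuf[py[i], px[i]], z[i])
    visible = z <= zbuf[py, px] + 1e-6
    feats = feat_map[py, px] * visible[:, None]
    return feats, visible
```

In the full method these masked vertex features would be concatenated with mesh geometry and passed to the GAT encoder; here the sketch only shows why back-facing or occluded vertices contribute no image feature.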
Pages: 13