Localization of Human 3D Joints Based on Binocular Vision

Cited by: 0
Authors
Xu, Zheng [1 ]
Li, Jinping [2 ,3 ]
Yin, Jianqin [4 ]
Wu, Yanchun [1 ,2 ,3 ,4 ]
Affiliations
[1] Jinan Univ, Shandong Prov Key Lab Network Based Intelligent C, Sch Informat Sci & Engn, Jinan 250022, Shandong, Peoples R China
[2] Jinan Univ, Shandong Coll, Jinan 250022, Shandong, Peoples R China
[3] Jinan Univ, Univ Key Lab Informat Proc & Cognit Comp 13 Five, Jinan 250022, Shandong, Peoples R China
[4] Beijing Univ Posts & Telecommun, Sch Automat, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Pose estimation; 3D joints localization; Binocular vision; SINGLE; POSE;
DOI
10.1007/978-981-13-7986-4_6
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Subject classification code
081202;
Abstract
With the development of image/video-based 3D pose estimation techniques, service robots, human-computer interaction, and 3D somatosensory games have developed rapidly. However, 3D pose estimation remains one of the most challenging tasks in computer vision. On the one hand, the diversity of poses, occlusion and self-occlusion, illumination changes, and complex backgrounds increase the difficulty of human pose estimation. On the other hand, many application scenarios demand high real-time performance from 3D pose estimation. Therefore, we present a 3D pose estimation method based on binocular vision in this paper. For each frame of the binocular videos, the human body is detected first; a Stacked Hourglass network then detects the human joints, yielding the pixel coordinates of the key joints of all human bodies in the binocular images. Finally, using the calibrated intrinsic and extrinsic camera parameters, the 3D coordinates of the major joints in the world coordinate system are estimated. The method does not rely on 3D data sets for training; it requires only binocular cameras to perform 3D pose estimation. Experimental results show that the method locates key joints precisely and achieves real-time performance against complex backgrounds.
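The final step described in the abstract, recovering world-coordinate joint positions from matched left/right pixel coordinates using the calibrated intrinsic and extrinsic parameters, amounts to stereo triangulation. Below is a minimal linear (DLT) triangulation sketch in NumPy; the toy camera matrices and the joint position are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def triangulate_joint(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one joint from a calibrated stereo pair.

    P_left, P_right : 3x4 projection matrices (intrinsics @ extrinsics).
    uv_left, uv_right : pixel coordinates (u, v) of the same joint in each view.
    Returns the joint's 3D position in the world coordinate system.
    """
    u1, v1 = uv_left
    u2, v2 = uv_right
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.stack([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy setup (assumed): two identical cameras with a 0.5 m horizontal baseline.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

joint_world = np.array([0.2, -0.1, 3.0])  # hypothetical ground-truth joint

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

estimate = triangulate_joint(P_l, P_r,
                             project(P_l, joint_world),
                             project(P_r, joint_world))
```

With noise-free correspondences the estimate matches the ground-truth point; in practice, 2D joint detections are noisy and a least-squares or RANSAC-refined triangulation would be used instead.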
Pages: 65-75 (11 pages)
Related Papers
50 records
  • [1] Passive 3D Reconstruction Based on Binocular Vision
    Zhang, Jingjun
    Du, Ruoxia
    Gao, Ruizhen
    [J]. TENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2018), 2019, 11069
  • [2] 3D Surface Reconstruction Based on Binocular Vision
    Li, Xuesheng
    Qin, Kaiyu
    Yao, Ping
    Yu, Jun
    Wu, Wenjie
    Chen, Lu
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (IEEE ICMA 2014), 2014, : 1861 - 1865
  • [3] Research on 3D Measuring Based Binocular Vision
    Yan, Long
    Zhao, Xingfang
    Du, Huiqiu
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON CONTROL SCIENCE AND SYSTEMS ENGINEERING, 2014, : 18 - 26
  • [4] 3D Reconstruction of Surface Based on Binocular Vision
    Hu, Xiaoping
    Peng, Tao
    Xie, Ke
    [J]. SIXTH INTERNATIONAL SYMPOSIUM ON PRECISION MECHANICAL MEASUREMENTS, 2013, 8916
  • [5] 3D Reconstruction Based on Binocular Stereo Vision of Robot
    Niu, Zhigang
    Li, Lijun
    Wang, Tie
    [J]. PRODUCT DESIGN AND MANUFACTURING, 2011, 338 : 645 - 648
  • [6] 3D Reconstruction of Traditional Handicrafts Based on Binocular Vision
    Qin, Yi
    Xu, Zhipeng
    [J]. ADVANCES IN MULTIMEDIA, 2022, 2022
  • [7] 3D reconstruction in diameter measurement based on binocular vision
    Yan-Yu, Liu
    De-Liang, Li
    Fei-Long, Zhang
    Zhi-Yong, Yin
    [J]. ISTM/2007: 7TH INTERNATIONAL SYMPOSIUM ON TEST AND MEASUREMENT, VOLS 1-7, CONFERENCE PROCEEDINGS, 2007, : 1677 - 1680
  • [8] 3D Human behavior recognition based on binocular vision and face-hand feature
    Ye, Qing
    Dong, Junfeng
    Zhang, Yongmei
    [J]. OPTIK, 2015, 126 (23): : 4712 - 4717
  • [9] Tensor Completion based 3D Reconstruction of Binocular Stereo Vision
    Liu, Ze-Hua
    Rong, Hai-Jun
    Yang, Zhao-Xu
    Yang, Zhi-Xin
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL ENGINEERING AND ENGINEERING MANAGEMENT (IEEM), 2019, : 968 - 972