Robust Driver Head Pose Estimation in Naturalistic Conditions from Point-Cloud Data

Cited by: 0
Authors
Hu, Tiancheng [1]
Jha, Sumit [1]
Busso, Carlos [1]
Affiliations
[1] Univ Texas Dallas, Dept Elect Engn, Richardson, TX 75083 USA
Keywords
GAZE; EYE;
DOI
10.1109/iv47402.2020.9304592
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Head pose estimation has been a key task in computer vision, since a broad range of applications requires accurate information about the orientation of the head. Achieving this goal with regular RGB cameras is challenging in automotive applications due to occlusions, extreme head poses, and sudden changes in illumination. Most of these challenges can be attenuated with algorithms relying on depth cameras. This paper proposes a novel point-cloud-based deep learning approach that estimates the driver's head pose from depth camera data, addressing these challenges. The proposed algorithm is inspired by the PointNet++ framework, where points are sampled and grouped before discriminative features are extracted. We demonstrate the effectiveness of our algorithm by evaluating it on a naturalistic driving database consisting of 22 drivers, where the benchmark for the orientation of the driver's head is obtained with the Fi-Cap device. The experimental evaluation demonstrates that our approach relying on point-cloud data achieves predictions that are almost always more reliable than those of state-of-the-art head pose estimation methods based on regular cameras. Furthermore, our approach provides predictions even for extreme rotations, which is not the case for the baseline methods. To the best of our knowledge, this is the first study to propose head pose estimation using deep learning on point-cloud data.
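The abstract mentions that, as in PointNet++, points are sampled and grouped before feature extraction. As a rough illustration only (the function names and parameters below are illustrative assumptions, not taken from the paper), the two classic building blocks of that stage, farthest point sampling and ball-query grouping, can be sketched in NumPy as:

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Greedily pick n_samples indices so each new point is the one
    farthest from the set already chosen (PointNet++-style sampling)."""
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=int)  # start from point 0
    min_dist = np.full(n, np.inf)            # distance to nearest chosen point
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        min_dist = np.minimum(min_dist, d)
        chosen[i] = int(np.argmax(min_dist))
    return chosen

def ball_query(points, centroid_idx, radius, max_neighbors):
    """For each sampled centroid, gather up to max_neighbors point
    indices lying within the given radius (PointNet++-style grouping)."""
    groups = []
    for c in centroid_idx:
        d = np.linalg.norm(points - points[c], axis=1)
        groups.append(np.where(d <= radius)[0][:max_neighbors])
    return groups
```

In a PointNet++-style set-abstraction layer, each grouped neighborhood would then be fed through a small shared network and max-pooled to yield one feature per centroid; the sketch above covers only the sampling and grouping steps named in the abstract.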
Pages: 1176 - 1182
Page count: 7
Related Papers
50 records in total
  • [1] Temporal Head Pose Estimation From Point Cloud in Naturalistic Driving Conditions
    Hu, Tiancheng
    Jha, Sumit
    Busso, Carlos
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (07) : 8063 - 8076
  • [2] Pose Estimation of Mobile Robot Using Image and Point-Cloud Data
    An, Sung Won
    Park, Hong Seong
    [J]. JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY, 2024, : 5367 - 5377
  • [3] Driver Head Pose Detection From Naturalistic Driving Data
    Chai, Weiheng
    Chen, Jiajing
    Wang, Jiyang
    Velipasalar, Senem
    Venkatachalapathy, Archana
    Adu-Gyamfi, Yaw
    Merickel, Jennifer
    Sharma, Anuj
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (09) : 9368 - 9377
  • [4] Point-Cloud Instance Segmentation-Based Robust Multi-Target Pose Estimation
    Liu Yaohua
    Ma Yue
    Xu Min
    [J]. LASER & OPTOELECTRONICS PROGRESS, 2023, 60 (04)
  • [5] Estimation of pedestrian pose and velocity considering arm swing using point-cloud data
    Matsuyama, Masato
    Nonaka, Kenichiro
    Sekiguchi, Kazuma
    [J]. 2021 60TH ANNUAL CONFERENCE OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS OF JAPAN (SICE), 2021, : 99 - 104
  • [6] The construction of geometric models from point-cloud data
    Smith, G
    Claustre, T
    [J]. 16TH INTERNATIONAL CONFERENCE ON COMPUTER-AIDED PRODUCTION ENGINEERING - CAPE 2000, 2000, 2000 (05): 3 - 11
  • [7] Head pose estimation for driver monitoring
    Zhu, YD
    Fujimura, K
    [J]. 2004 IEEE INTELLIGENT VEHICLES SYMPOSIUM, 2004, : 501 - 506
  • [8] NLP based Skeletal Pose Estimation using mmWave Radar Point-Cloud: A Simulation Approach
    Sengupta, Arindam
    Jin, Feng
    Cao, Siyang
    [J]. 2020 IEEE RADAR CONFERENCE (RADARCONF20), 2020,
  • [9] Head pose estimation for driver assistance systems: A robust algorithm and experimental evaluation
    Murphy-Chutorian, Erik
    Doshi, Anup
    Trivedi, Mohan Manubhai
    [J]. 2007 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE, VOLS 1 AND 2, 2007, : 1049 - 1054
  • [10] Estimation of physical activities of people in offices from time-series point-cloud data
    Kizawa, Koki
    Shinkuma, Ryoichi
    Trovato, Gabriele
    [J]. 2023 IEEE 20TH CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2023,