3-D motion estimation using range data

Cited by: 11
Authors
Gharavi, Hamid [1 ]
Gao, Shaoshuai [1 ]
Affiliations
[1] US Dept Commerce, Natl Inst Stand & Technol, Gaithersburg, MD 20899 USA
Keywords
intelligent transport; ladar; laser scanners; object tracking; range image; three-dimensional (3-D) motion estimation; vehicle safety;
DOI
10.1109/TITS.2006.883112
Chinese Library Classification (CLC)
TU [Building Science];
Discipline code
0813;
Abstract
Advanced vehicle-based safety and warning systems use laser scanners to measure road geometry (position and curvature) and range to obstacles in order to warn a driver of an impending crash and/or to activate safety devices (air bags, brakes, and steering). To objectively quantify the performance of such a system, the reference system must be an order of magnitude more accurate than the sensors used by the warning system. This can be achieved by using high-resolution range images that allow accurate object tracking and velocity estimation. Currently, this is very difficult to achieve when the measurements are taken from fast-moving vehicles. Thus, the main objective is to improve motion estimation, which involves both the rotational and translational movements of objects. In this respect, an innovative recursive motion-estimation technique is presented that takes advantage of the in-depth (range) resolution to accurately estimate the motion of objects that have undergone three-dimensional (3-D) translational and rotational movements. This approach aims at iteratively minimizing the error between the object in the current frame and its motion-compensated counterpart, using the motion displacement estimated from the previous range measurements. In addition, in order to use range data lying on a nonrectangular grid in Cartesian coordinates, two approaches have been considered: 1) a membrane fit, which interpolates the nonrectangular grid onto a rectangular grid, and 2) direct processing of the nonrectangular-grid range data by employing derivative filters and the proposed transformation between the Cartesian coordinates and the sensor-centered coordinates. The effectiveness of the proposed scheme is demonstrated on sequences of moving range images.
Pages: 133-143
Page count: 11
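The abstract's recursive estimator is not reproduced here, but its core subproblem — finding the 3-D rotation and translation that best align corresponding object points from two consecutive range frames — has a well-known closed-form least-squares solution (the Kabsch/Horn SVD method), which iterative schemes of this kind typically solve at each step. The sketch below is illustrative only, not the authors' algorithm; all names are hypothetical and noiseless, known correspondences are assumed:

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Estimate rotation R and translation t such that q_i ~= R @ p_i + t.

    P, Q: (N, 3) arrays of corresponding 3-D points, e.g. range samples of
    the same object in the previous and current frame. Closed-form
    least-squares solution via SVD of the cross-covariance (Kabsch/Horn).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids of each point set
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In a recursive scheme such as the one the paper describes, the previous frame's points would be motion-compensated with the current (R, t) estimate and the residual error re-minimized until convergence; the membrane-fit or derivative-filter handling of the nonrectangular sensor grid would supply the point correspondences.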
Related papers
50 records in total
  • [41] 3-D translational motion estimation from 2-D displacements
    Garcia, C
    Tziritas, G
    [J]. 2001 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOL II, PROCEEDINGS, 2001, : 945 - 948
  • [42] 3-D range data interpolation using B-Spline surface fitting
    Li, ST
    Zhao, DM
    [J]. VISUAL COMMUNICATIONS AND IMAGE PROCESSING 2000, PTS 1-3, 2000, 4067 : 1570 - 1578
  • [43] Human Detection in Laser Range Data Using Deep Learning and 3-D Objects
    Nasiriyan, Fariba
    Khotanlou, Hassan
    [J]. 2015 7TH CONFERENCE ON INFORMATION AND KNOWLEDGE TECHNOLOGY (IKT), 2015,
  • [44] Using multiple-hypothesis disparity maps and image velocity for 3-D motion estimation
    Demirdjian, D
    Darrell, T
    [J]. IEEE WORKSHOP ON STEREO AND MULTI-BASELINE VISION, PROCEEDINGS, 2001, : 121 - 128
  • [45] Robust ego-motion estimation and 3-D model refinement using surface parallax
    Agrawal, A
    Chellappa, R
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2006, 15 (05) : 1215 - 1225
  • [47] Using multiple-hypothesis disparity maps and image velocity for 3-D motion estimation
    Demirdjian, D
    Darrell, T
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2002, 47 (1-3) : 219 - 228
  • [48] 3-D Ego-Motion Estimation Using Multi-Channel FMCW Radar
    Yuan, Sen
    Zhu, Simin
    Fioranelli, Francesco
    Yarovoy, Alexander G.
    [J]. IEEE Transactions on Radar Systems, 2023, 1 : 368 - 381
  • [49] ROBUST 3-D 3-D POSE ESTIMATION
    ZHUANG, XH
    HUANG, Y
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1994, 16 (08) : 818 - 824
  • [50] Computing internally constrained motion of 3-D sensor data for motion interpretation
    Kanatani, Kenichi
    Matsunaga, Chikara
    [J]. PATTERN RECOGNITION, 2013, 46 (06) : 1700 - 1709