A Motion Planning Strategy for the Active Vision-Based Mapping of Ground-Level Structures

Cited by: 22
Authors
Ramanagopal, Manikandasriram Srinivasan [1 ]
Nguyen, Andre Phu-Van [2 ,3 ]
Ny, Jerome Le [2 ,3 ]
Affiliations
[1] Univ Michigan, Robot Inst, Ann Arbor, MI 48109 USA
[2] Polytech Montreal, Dept Elect Engn, Montreal, PQ H3T 1J4, Canada
[3] GERAD, Montreal, PQ H3T 1J4, Canada
Funding
Canada Foundation for Innovation; Natural Sciences and Engineering Research Council of Canada
Keywords
Active sensing; active simultaneous localization and mapping (SLAM); autonomous inspection; autonomous mapping; motion planning; 3-D reconstruction; exploration; coverage; terrain
DOI
10.1109/TASE.2017.2762088
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
This paper presents a strategy to guide a mobile ground robot equipped with a camera or depth sensor, in order to autonomously map the visible part of a bounded 3-D structure. We describe motion planning algorithms that determine appropriate successive viewpoints and attempt to automatically fill holes in the point cloud produced by the sensing and perception layer. The emphasis is on accurately reconstructing a 3-D model of a structure of moderate size rather than on mapping large open environments, with applications in, for example, architecture, construction, and inspection. The proposed algorithms do not require any initialization in the form of a mesh model or a bounding box, and the paths generated are well adapted to situations where the vision sensor is used simultaneously for mapping and for localizing the robot, in the absence of an additional absolute positioning system. We analyze the coverage properties of our policy and compare its performance with the classic frontier-based exploration algorithm. We illustrate its efficacy for different structure sizes, levels of localization accuracy, and depth sensor ranges, and validate our design in a real-world experiment.

Note to Practitioners: The objective of this paper is to automate the process of building a 3-D model of a structure of interest that is as complete as possible, using a mobile camera or depth sensor, in the absence of any prior information about this structure. Given that increasingly robust solutions to the visual simultaneous localization and mapping problem are now readily available, the key challenge that we address here is to develop motion planning policies that control the trajectory of the sensor in a way that improves the mapping performance. We target in particular scenarios where no external absolute positioning system is available, such as mapping certain indoor environments where GPS signals are blocked. In this case, it is often important to revisit previously seen locations relatively quickly, in order to avoid excessive drift in the dead-reckoning localization system. Our system works by first determining the boundaries of the structure, before attempting to fill the holes in the constructed model. Its performance is illustrated through simulations and through a real-world experiment performed with a depth sensor carried by a mobile manipulator.
Pages: 356-368
Page count: 13
Related Papers
50 records
  • [1] Ground-Level Mapping and Navigating for Agriculture Based on IoT and Computer Vision. Zhao, Wei; Wang, Xuan; Qi, Bozhao; Runge, Troy. IEEE Access, 2020, 8: 221975-221985.
  • [2] Direct Motion Planning for Vision-Based Control. Pieters, Roel; Ye, Zhenyu; Jonker, Pieter; Nijmeijer, Henk. IEEE Transactions on Automation Science and Engineering, 2014, 11(4): 1282-1288.
  • [3] Dynamic Visibility Checking for Vision-Based Motion Planning. Leonard, Simon; Croft, Elizabeth A.; Little, James J. 2008 IEEE International Conference on Robotics and Automation, Vols 1-9, 2008: 2283+.
  • [4] Vision-Based Ground Test for Active Debris Removal. Lim, Seong-Min; Kim, Hae-Dong; Seong, Jae-Dong. Journal of Astronomy and Space Science, 2013, 30(4): 279-290.
  • [5] Unifying Configuration Space and Sensor for Vision-Based Motion Planning. Sharma, R.; Sutanto, H. 1996 IEEE International Conference on Robotics and Automation, Proceedings, Vols 1-4, 1996: 3572-3577.
  • [6] Vision-Based Motion Planning and Exploration Algorithms for Mobile Robots. Taylor, C. J.; Kriegman, D. J. IEEE Transactions on Robotics and Automation, 1998, 14(3): 417-426.
  • [7] Motion Planning for Vision-Based Stevedoring Tasks on Industrial Robots. Wang, Shijun; Guo, Hao; Cao, Xuewei; Chai, Xiaojie; Wen, Feng; Yuan, Kui. 2015 IEEE International Conference on Mechatronics and Automation, 2015: 1264-1269.
  • [8] Vision-Based Local-Level Frame Mapping and Planning in Spherical Coordinates for Miniature Air Vehicles. Yu, Huili; Beard, Randal W. IEEE Transactions on Control Systems Technology, 2013, 21(3): 695-703.
  • [9] Vision-Based Local-Level Frame Mapping and Planning in Spherical Coordinates for Miniature Air Vehicles. Yu, Huili; Beard, Randal W. 2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), 2011: 558-563.
  • [10] Optimized Vision-Based Robot Motion Planning from Multiple Demonstrations. Shen, Tiantian; Radmard, Sina; Chan, Ambrose; Croft, Elizabeth A.; Chesi, Graziano. Autonomous Robots, 2018, 42(6): 1117-1132.