Planning Beyond The Sensing Horizon Using a Learned Context

Cited: 0
Authors
Everett, Michael [1]
Miller, Justin [2]
How, Jonathan P. [1]
Affiliations
[1] MIT, Aerosp Controls Lab, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[2] Ford Motor Co, Robot & Intelligent Vehicles, Dearborn, MI 48121 USA
Keywords
NAVIGATION; OBJECT;
DOI
10.1109/iros40897.2019.8967550
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Last-mile delivery systems commonly propose the use of autonomous robotic vehicles to increase scalability and efficiency. The economic inefficiency of collecting accurate prior maps for navigation motivates the use of planning algorithms that operate in unmapped environments. However, these algorithms typically waste time exploring regions that are unlikely to contain the delivery destination. Context is key information about structured environments that could guide exploration toward the unknown goal location, but this abstract idea is difficult to quantify for use in a planning algorithm. Some approaches specifically consider contextual relationships between objects, but would perform poorly in object-sparse environments like the outdoors. Recent deep learning-based approaches consider context too generally, making training and transfer difficult. Therefore, this work proposes a novel formulation of using context for planning as an image-to-image translation problem, which is shown to extract terrain context from semantic gridmaps into a metric that an exploration-based planner can use. The proposed framework has the benefit of training on a static dataset instead of requiring a time-consuming simulator. Across 42 test houses with layouts drawn from satellite images, the trained algorithm enables a robot to reach its goal 189% faster than with a context-unaware planner, and within 63% of the optimal path computed with a prior map. The proposed algorithm is also implemented on a vehicle with a forward-facing camera in a high-fidelity Unreal simulation of neighborhood houses.
Pages: 1064-1071
Page count: 8