Bridging the Appearance Gap: Multi-Experience Localization for Long-Term Visual Teach and Repeat

Cited by: 0
Authors
Paton, Michael [1 ]
MacTavish, Kirk [1 ]
Warren, Michael [1 ]
Barfoot, Timothy D. [1 ]
Institution
[1] Univ Toronto, Inst Aerosp Studies UTIAS, 4925 Dufferin St, Toronto, ON M3H 5T6, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords: (none listed)
DOI: not available
CLC Classification: TP18 (Theory of Artificial Intelligence)
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Vision-based, route-following algorithms enable autonomous robots to repeat manually taught paths over long distances using inexpensive vision sensors. However, these methods struggle with long-term, outdoor operation due to the challenges of environmental appearance change caused by lighting, weather, and seasons. While techniques exist to address appearance change by using multiple experiences over different environmental conditions, they either provide topological-only localization, require several manually taught experiences in different conditions, or require extensive offline mapping to produce metric localization. For real-world use, we would like to localize metrically to a single manually taught route and gather additional visual experiences during autonomous operations. Accordingly, we propose a novel multi-experience localization (MEL) algorithm developed specifically for route-following applications; it provides continuous, six-degree-of-freedom (6DOF) localization with relative uncertainty to a privileged (manually taught) path using several experiences simultaneously. We validate our algorithm through two experiments: i) an offline performance analysis on a 9 km subset of a challenging 27 km route-traversal dataset and ii) an online field trial where we demonstrate autonomy on a small 250 m loop over the course of a sunny day. Both exhibit significant appearance change due to lighting variation. Through these experiments we show that safe localization can be achieved by bridging the appearance gap.
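The core idea in the abstract, several experiences contributing simultaneously to one metric localization estimate with uncertainty, can be illustrated with a toy information-weighted fusion. This is a generic sketch and not the paper's MEL pipeline; the function name, the notion of a scalar lateral offset, and all numbers are invented for illustration.

```python
# Toy sketch (not the authors' implementation): fuse localization
# estimates obtained against several stored experiences.
# Each experience i yields an estimate x_i of the vehicle's lateral
# offset from the privileged (manually taught) path, with variance P_i.
# Information-weighted fusion combines them into one estimate whose
# uncertainty is smaller than that of any single experience.

def fuse_experiences(estimates):
    """estimates: list of (offset, variance) pairs, one per experience."""
    info = sum(1.0 / p for _, p in estimates)       # total information
    x = sum(xi / p for xi, p in estimates) / info   # information-weighted mean
    return x, 1.0 / info                            # fused estimate, fused variance

# Three experiences (e.g. the taught route plus two autonomously
# gathered "bridging" experiences), each with its own confidence:
fused, var = fuse_experiences([(0.10, 0.04), (0.14, 0.09), (0.08, 0.16)])
```

The fused variance is always below the smallest input variance, which mirrors the abstract's claim that bridging experiences make localization safer across an appearance gap than relying on the single taught experience alone.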
Pages: 1918-1925 (8 pages)
Related Papers (10 of 50 shown)
  • [1] Gridseth, Mona; Barfoot, Timothy D. DeepMEL: Compiling Visual Multi-Experience Localization into a Deep Neural Network. 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020: 1674-1681
  • [2] Rozsypalek, Zdenek; Roucek, Tomas; Vintr, Tomas; Krajnik, Tomas. Multidimensional Particle Filter for Long-Term Visual Teach and Repeat in Changing Environments. IEEE Robotics and Automation Letters, 2023, 8(4): 1951-1958
  • [3] Buerki, Mathias; Gilitschenski, Igor; Stumm, Elena; Siegwart, Roland; Nieto, Juan. Appearance-Based Landmark Selection for Efficient Long-Term Visual Localization. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), 2016: 4137-4143
  • [4] MacTavish, Kirk; Paton, Michael; Barfoot, Timothy D. Selective Memory: Recalling Relevant Experience for Long-Term Visual Localization. Journal of Field Robotics, 2018, 35(8): 1265-1292
  • [5] Sun, Li; Taher, Marwan; Wild, Christopher; Zhao, Cheng; Zhang, Yu; Majer, Filip; Yan, Zhi; Krajnik, Tomas; Prescott, Tony; Duckett, Tom. Robust and Long-Term Monocular Teach and Repeat Navigation Using a Single-Experience Map. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021: 2635-2642
  • [6] Toft, Carl; Maddern, Will; Torii, Akihiko; Hammarstrand, Lars; Stenborg, Erik; Safari, Daniel; Okutomi, Masatoshi; Pollefeys, Marc; Sivic, Josef; Pajdla, Tomas; Kahl, Fredrik; Sattler, Torsten. Long-Term Visual Localization Revisited. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(4): 2074-2088
  • [7] Fitzmaurice, Stephen. Bridging the Emergency Relief and Long-Term Development Gap. Proceedings of the Institution of Civil Engineers - Municipal Engineer, 2016, 169(3): 138-145
  • [8] Yan, Shen; Liu, Yu; Wang, Long; Shen, Zehong; Peng, Zhen; Liu, Haomin; Zhang, Maojun; Zhang, Guofeng; Zhou, Xiaowei. Long-Term Visual Localization with Mobile Sensors. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 17245-17255
  • [9] Toft, Carl; Stenborg, Erik; Hammarstrand, Lars; Brynte, Lucas; Pollefeys, Marc; Sattler, Torsten; Kahl, Fredrik. Semantic Match Consistency for Long-Term Visual Localization. Computer Vision - ECCV 2018, Part II, 2018, 11206: 391-408
  • [10] Bures, Lukas; Mueller, Ludek. Semantic Segmentation in the Task of Long-Term Visual Localization. Interactive Collaborative Robotics (ICR 2021), 2021, 12998: 27-39