Robust UAV Visual Teach and Repeat Using Only Sparse Semantic Object Features

Cited by: 8
Authors
Toudeshki, Amirmasoud Ghasemi [1 ]
Shamshirdar, Faraz [1 ]
Vaughan, Richard [1 ]
Affiliations
[1] Simon Fraser Univ, Sch Comp Sci, Auton Lab, Burnaby, BC, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
NAVIGATION;
DOI
10.1109/CRV.2018.00034
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
We demonstrate the use of semantic object detections as robust features for Visual Teach and Repeat (VTR). Recent CNN-based object detectors can reliably detect objects from tens or hundreds of categories in video at frame rate. We show that such detections are repeatable enough to use as landmarks for VTR, without any low-level image features. Since object detections are highly invariant to lighting and surface appearance changes, our VTR can cope with global lighting changes and local movements of the landmark objects. In the teaching phase we build extremely compact scene descriptors: a list of detected object labels and their image-plane locations. In the repeating phase, we use SeqSLAM-like relocalization to identify the most similar learned scene, then use a motion control algorithm based on funnel lane theory to navigate the robot along the previously piloted trajectory. We evaluate the method on a commodity UAV, examining the robustness of the algorithm to new viewpoints, lighting conditions, and movements of landmark objects. The results suggest that semantic object features could be useful due to their invariance to superficial appearance changes compared to low-level image features.
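The repeating phase described in the abstract — matching the live frame's compact descriptor (detected object labels plus image-plane locations) against the taught sequence, SeqSLAM-style — could be sketched as below. This is an illustrative assumption, not the authors' code: the function names, the Gaussian position weighting (`sigma`), and the use of the trailing live window as the sequence score are all choices made here for clarity.

```python
import math

def frame_similarity(live, taught, sigma=0.2):
    """Greedily match same-label detections between two frames.

    Each frame is a list of (label, x, y) detections in normalized
    image coordinates; closer image-plane positions score higher.
    """
    score, used = 0.0, set()
    for label, x, y in live:
        best, best_j = 0.0, None
        for j, (tl, tx, ty) in enumerate(taught):
            if j in used or tl != label:
                continue
            d2 = (x - tx) ** 2 + (y - ty) ** 2
            s = math.exp(-d2 / (2 * sigma ** 2))
            if s > best:
                best, best_j = s, j
        if best_j is not None:
            used.add(best_j)  # each taught detection matches at most once
            score += best
    return score

def relocalize(live_seq, taught_frames):
    """Return the taught index whose trailing window best matches live_seq.

    SeqSLAM-like: instead of trusting one frame, sum single-frame
    similarities over a short window of recent live frames.
    """
    w = len(live_seq)
    best_idx, best_score = None, -1.0
    for i in range(w - 1, len(taught_frames)):
        s = sum(frame_similarity(live_seq[k], taught_frames[i - w + 1 + k])
                for k in range(w))
        if s > best_score:
            best_idx, best_score = i, s
    return best_idx
```

For example, with a taught route of three one-object scenes and a live window that matches the last two, `relocalize` returns the final taught index; the label check means a repainted or re-lit object still matches, while the position term penalizes large image-plane displacement.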
Pages: 182 - 189
Page count: 8
Related Papers
50 records
  • [1] Robust Visual Teach and Repeat for UGVs Using 3D Semantic Maps
    Mahdavian, Mohammad
    Yin, KangKang
    Chen, Mo
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04): : 8590 - 8597
  • [2] A UAV Visual Relocalization Method Using Semantic Object Features Based on Internet of Things
    Wang, Maolin
    Wang, Hongyu
    Wang, Zhi
    Li, Yumeng
    [J]. WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [3] Spatial and semantic convolutional features for robust visual object tracking
    Jianming Zhang
    Xiaokang Jin
    Juan Sun
    Jin Wang
    Arun Kumar Sangaiah
    [J]. Multimedia Tools and Applications, 2020, 79 : 15095 - 15115
  • [4] Spatial and semantic convolutional features for robust visual object tracking
    Zhang, Jianming
    Jin, Xiaokang
    Sun, Juan
    Wang, Jin
    Sangaiah, Arun Kumar
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (21-22) : 15095 - 15115
  • [5] Road Mapping and Localization Using Sparse Semantic Visual Features
    Cheng, Wentao
    Yang, Sheng
    Zhou, Maomin
    Liu, Ziyuan
    Chen, Yiming
    Li, Mingyang
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (04) : 8118 - 8125
  • [6] Sparse Temporal Encoding of Visual Features for Robust Object Recognition by Spiking Neurons
    Zheng, Yajing
    Li, Shixin
    Yan, Rui
    Tang, Huajin
    Tan, Kay Chen
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (12) : 5823 - 5833
  • [7] Robust Visual Teach and Repeat Navigation for Unmanned Aerial Vehicles
    Kozak, Viktor
    Pivonka, Tomas
    Avgoustinakis, Pavlos
    Majer, Lukas
    Kulich, Miroslav
    Preucil, Libor
    Camara, Luis G.
    [J]. 10TH EUROPEAN CONFERENCE ON MOBILE ROBOTS (ECMR 2021), 2021,
  • [8] Eyes in the Back of Your Head: Robust Visual Teach & Repeat Using Multiple Stereo Cameras
    Paton, Michael
    Pomerleau, Francois
    Barfoot, Timothy D.
    [J]. 2015 12TH CONFERENCE ON COMPUTER AND ROBOT VISION CRV 2015, 2015, : 46 - 53
  • [9] Semantic and context features integration for robust object tracking
    Yao, Jinzhen
    Zhang, Jianlin
    Wang, Zhixing
    Shao, Linsong
    [J]. IET IMAGE PROCESSING, 2022, 16 (05) : 1268 - 1279
  • [10] Incorporating semantic object features into a visual attention model
    Li N.
    Zhao X.
    [J]. Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology, 2020, 52 (05): : 99 - 105