An Unmanned Aerial Vehicle Indoor Low-Computation Navigation Method Based on Vision and Deep Learning

Cited by: 1
Authors
Hsieh, Tzu-Ling [1]
Jhan, Zih-Syuan [1]
Yeh, Nai-Jui [1]
Chen, Chang-Yu [1]
Chuang, Cheng-Ta [1]
Affiliation
[1] Natl Taipei Univ Technol, Dept Intelligent Automat Engn, Taipei 10608, Taiwan
Keywords
indoor; unmanned aerial vehicles (UAV); obstacle avoidance; path following
DOI
10.3390/s24010190
CLC number
O65 [Analytical Chemistry]
Discipline codes
070302; 081704
Abstract
Recently, unmanned aerial vehicles (UAVs) have found extensive indoor applications. In many indoor UAV scenarios, the navigation paths remain fixed. While many indoor positioning methods offer excellent precision, they often demand significant cost and computational resources, and such high functionality can be superfluous for these applications. To address this issue, we present a cost-effective, computationally efficient solution for path following and obstacle avoidance. The UAV employs a down-looking camera for path following and a front-looking camera for obstacle avoidance. This paper refines the carrot chasing algorithm for line tracking and introduces our novel line-fitting path-following algorithm (LFPF). Both algorithms competently manage indoor path-following tasks within a constrained field of view, but the LFPF adapts better to lighting variations and maintains a consistent flight speed, keeping its error within ±40 cm in real flight scenarios. For obstacle avoidance, we use depth images and YOLOv4-tiny to detect obstacles and then apply suitable avoidance strategies based on the type and proximity of each obstacle. Real-world tests showed minimal computational demands, enabling the Nvidia Jetson Nano, an entry-level computing platform, to operate at 23 FPS.
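The abstract does not detail the LFPF itself, but the core idea of fitting a line to the guide-line pixels seen by the down-looking camera and steering from the resulting lateral offset and heading error can be illustrated with a minimal sketch. The snippet below assumes OpenCV, a dark guide line on a brighter floor, and Python; the function name, threshold values, and control mapping are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np


def line_fitting_path_error(frame_bgr, thresh=60, min_pixels=50):
    """Return (lateral_offset_px, heading_error_rad), or None if no line is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Assumed setup: dark guide line on a brighter floor -> inverse binary threshold.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    ys, xs = np.nonzero(mask)
    if xs.size < min_pixels:              # too few line pixels to fit reliably
        return None
    pts = np.column_stack((xs, ys)).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    if vy > 0:                            # make the direction vector point "forward" (up in the image)
        vx, vy = -vx, -vy
    h, w = gray.shape
    cx, cy = w / 2.0, h / 2.0
    # Signed perpendicular distance from the image centre to the fitted line.
    lateral_offset = (cx - x0) * vy - (cy - y0) * vx
    # Angle between the fitted line and the image's forward (upward) axis.
    heading_error = np.arctan2(vx, -vy)
    return lateral_offset, heading_error


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)             # index of the down-looking camera is assumed
    ok, frame = cap.read()
    cap.release()
    if ok and (err := line_fitting_path_error(frame)) is not None:
        offset, heading = err
        # A simple proportional controller could map offset -> roll setpoint and
        # heading -> yaw-rate setpoint for the flight controller.
        print(f"offset={offset:.1f}px  heading={np.degrees(heading):.1f}deg")
```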
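For the obstacle-avoidance side, the abstract states only that depth images and YOLOv4-tiny detections drive the choice of avoidance strategy by obstacle type and proximity. The sketch below covers just the proximity decision under assumed distance thresholds; the action labels and the stubbed bounding box stand in for real YOLOv4-tiny detector output.

```python
import numpy as np


def avoidance_action(depth_m, box, stop_dist=1.0, avoid_dist=2.5):
    """box = (x1, y1, x2, y2) in pixels; depth_m is an HxW depth map in metres."""
    x1, y1, x2, y2 = box
    roi = depth_m[y1:y2, x1:x2]
    valid = roi[np.isfinite(roi) & (roi > 0)]
    if valid.size == 0:
        return "keep_course"              # no usable depth inside the box
    dist = float(np.median(valid))        # median is robust to depth noise
    if dist < stop_dist:
        return "stop"                     # obstacle too close: hover / halt
    if dist < avoid_dist:
        # Steer toward the side of the frame opposite the obstacle.
        w = depth_m.shape[1]
        return "avoid_left" if (x1 + x2) / 2 > w / 2 else "avoid_right"
    return "keep_course"


if __name__ == "__main__":
    # Synthetic 240x320 depth map with a 1.8 m obstacle in the right half.
    depth = np.full((240, 320), 5.0, dtype=np.float32)
    depth[80:200, 200:300] = 1.8
    print(avoidance_action(depth, (200, 80, 300, 200)))   # -> "avoid_left"
```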
Pages: 15