A Framework for Vision-Based Building Detection and Entering for Autonomous Delivery Drones

Cited by: 2
Authors
Mirtajadini, Seyed Hojat [1]
Fahimi, Hamidreza [2]
Shahbazi, Mohammad [3]
Affiliations
[1] Univ Tehran, Fac New Sci & Technol, 16th Azar St, Tehran 1417935840, Iran
[2] Amirkabir Univ Technol, Dept Aerosp Engn, Hafez Ave, Tehran 1591634311, Iran
[3] Iran Univ Sci & Technol, Sch Mech Engn, Hengam St, Tehran 1684613114, Iran
Keywords
Facade detection; Vision-based guidance; Building entrance; Autonomous delivery drone; IMAGES; EXPANSION; QUADROTOR; FOCUS;
DOI
10.1007/s10846-023-01834-1
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Autonomous delivery by aerial robots in urban environments is approaching operational readiness, but several concerns about reliable navigation sources remain to be addressed. Because GPS signals frequently degrade in urban environments, a fully vision-based navigation framework is developed here. The platform is assumed to be capable of reaching the vicinity of its destination using available "imperfect" navigation methods such as GPS. The autonomous execution of the mission is then divided into four consecutive phases, all relying on a forward-looking camera: I) a novel vision-based method detects and confirms the target facade among neighboring buildings, performing matching with neural networks that exploit the texture features of facade segments; II) the intended window within the facade's window array is detected in real time; III) the robot is guided to autonomously approach the intended window by a suitable visual tracking algorithm; IV) finally, a collision-free passage through the window opening, based on visual Time-To-Contact estimation, is commanded for safe entrance. The network is trained on approximately 34,000 feature samples drawn from real-world building facades and achieves 95.4% accuracy, 81.4% classification precision, and 87.6% recall in correct facade detection and confirmation. In real-world experiments, the overall entrance mission succeeds in 13 out of 15 trials, provided the initial distance is less than 20 meters.
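Phase IV of the pipeline relies on visual Time-To-Contact (TTC): as the drone closes on the window, the window's apparent size in the image expands, and TTC can be estimated from that expansion rate without any metric distance measurement. A minimal sketch of the underlying idea, assuming the tracked target is represented by a bounding-box width in pixels (the function and its inputs are illustrative, not the paper's implementation):

```python
def time_to_contact(width_prev: float, width_curr: float, dt: float) -> float:
    """Estimate time-to-contact (seconds) from the apparent width of a
    tracked target (e.g. a window bounding box) in two consecutive frames.

    TTC ~= w / (dw/dt): current apparent size divided by its rate of
    expansion. No camera calibration or metric distance is required.
    """
    expansion_rate = (width_curr - width_prev) / dt  # pixels per second
    if expansion_rate <= 0:
        return float("inf")  # target is not expanding: not approaching
    return width_curr / expansion_rate


# Example: the window's bounding box grows from 100 px to 110 px over
# 0.1 s, giving an expansion rate of 100 px/s and a TTC of 1.1 s.
ttc = time_to_contact(100.0, 110.0, 0.1)
```

In practice the per-frame width estimate is noisy, so a real controller would filter the TTC signal (or estimate expansion from optical-flow divergence) before commanding the final passage.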
Pages: 17
Published in: Journal of Intelligent & Robotic Systems, 2023, Vol. 107