A Ground-Truth Video Dataset for the Development and Evaluation of Vision-based Sense-and-Avoid Systems

Cited by: 0
Authors
Carrio, Adrian [1 ]
Fu, Changhong [1 ]
Pestana, Jesus [1 ]
Campoy, Pascual [1 ]
Affiliations
[1] UPM CSIC, CVG, CAR, Madrid 28006, Spain
Keywords
DOI
Not available
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Vision-based Sense-and-Avoid systems are becoming increasingly important as remotely piloted and autonomous UAVs enter non-segregated airspace. Developing and evaluating these systems requires flight scenario images, which are expensive and risky to obtain. Augmented Reality techniques now allow real flight scenario images to be composited with 3D aircraft models, producing realistic images useful for system development and benchmarking at a much lower cost and risk. With the techniques presented in this paper, 3D aircraft models are first positioned in a simulated 3D scene with controlled illumination and rendering parameters. Realistic simulated images are then obtained using an image processing algorithm that fuses the images rendered from the 3D scene with images from real UAV flights, taking on-board camera vibrations into account. Since the intruder and camera poses are user-defined, ground-truth data are available. These ground-truth annotations allow aircraft detection and tracking algorithms to be developed and quantitatively evaluated. This paper presents the software developed to create a public dataset of 24 videos, together with their annotations and some tracking application results.
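The abstract describes two steps that lend themselves to a short illustration: alpha-compositing a rendered intruder aircraft over a real flight frame, and deriving a ground-truth bounding box from the user-defined intruder pose. The Python sketch below is not the authors' software; it is a minimal example under assumed pinhole-camera and RGBA-render conventions, and every function name and parameter in it is illustrative.

```python
# Hedged sketch (not the paper's implementation): blend a rendered intruder
# onto a real flight frame and derive a ground-truth box from its 3D pose.
import numpy as np


def composite(real_frame, render_rgb, render_alpha):
    """Alpha-blend a rendered intruder (RGB + alpha mask) onto a real frame."""
    alpha = render_alpha[..., None].astype(np.float32) / 255.0
    out = alpha * render_rgb.astype(np.float32) + (1.0 - alpha) * real_frame.astype(np.float32)
    return out.astype(np.uint8)


def project_bbox(points_cam, fx, fy, cx, cy):
    """Project 3D intruder model points (camera frame, Z forward) to a 2D box."""
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return int(u.min()), int(v.min()), int(u.max()), int(v.max())


if __name__ == "__main__":
    h, w = 480, 640
    real = np.full((h, w, 3), 128, np.uint8)            # stand-in for a real flight frame
    rgb = np.zeros((h, w, 3), np.uint8)
    mask = np.zeros((h, w), np.uint8)
    rgb[200:240, 300:380] = (60, 60, 70)                 # stand-in for the rendered aircraft
    mask[200:240, 300:380] = 255
    frame = composite(real, rgb, mask)

    pts = np.array([[-2.0, -0.5, 50.0], [2.0, 0.5, 50.0]])  # assumed intruder extents, metres
    print(frame.shape, project_bbox(pts, fx=800, fy=800, cx=w / 2, cy=h / 2))
```

The paper's pipeline additionally accounts for on-board camera vibrations when fusing the rendered and real images; that compensation step is omitted from this sketch.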
Pages: 441-446
Page count: 6