Robust Scene-Matching Algorithm Based on Relative Velocity Model for Aerial Images

Cited by: 0
Authors
Choi, Sung Hyuk [1 ]
Park, Chan Gook [1 ,2 ]
Affiliations
[1] Seoul Natl Univ, Dept Mech & Aerosp Engn, Seoul 08826, South Korea
[2] Seoul Natl Univ, Automat & Syst Res Inst, Seoul 08826, South Korea
Keywords
REGISTRATION; LOCALIZATION; UAV;
DOI
10.33012/2019.17053
Chinese Library Classification
TP7 [Remote Sensing Technology];
Subject Classification Codes
081102; 0816; 081602; 083002; 1404;
Abstract
Vision-based navigation can be broadly divided into relative and absolute navigation. Relative navigation based on a vision sensor accumulates errors over time due to various factors. In contrast, absolute navigation based on scene matching can compute the position of the vehicle independently, without external information, so a stable navigation solution can be obtained without cumulative errors. However, conventional image-based navigation built on scene matching also has a drawback: mismatching between the large database and the camera input image. Mismatching is dangerous to the navigation system because the matching result can be estimated far from the true position of the vehicle, like an outlier. Such large position errors are caused by unnecessary feature-point clusters that were not removed during outlier rejection. Extracted feature points may be created, disappear, or be tracked in the next time step; the created or disappearing features are not the points of interest and can be filtered out by matching consecutive images. We propose a robust scene-matching algorithm that uses only time-invariant feature points, selected through a relative velocity model and its uncertainty. Only trackable feature points are needed for matching with the database, and using them improves the matching result. To this end, we use a relative velocity model, which represents a feature's motion as moving in the direction opposite to the vehicle's velocity. The proposed relative velocity model incorporates the acceleration of the vehicle; over a short interval, the velocity obtained from the accelerometer is accurate and precise because its accumulated error is relatively small. We also propose a pixel boundary that expresses the uncertainty of where a feature point is expected to be located. If a feature point extracted at the next time step lies within the proposed boundary, that bounded feature point is used for matching with the database. The algorithm is summarized as follows. First, time-invariant feature points are determined by extracting and matching feature points in consecutive camera input images; through this process, feature points that disappear or are newly generated are removed. Second, the position of each feature point used at the previous time step is propagated, through the proposed relative velocity model, to the position where it is expected to lie at the current time step. Third, the uncertainty of each propagated feature point, that is, each feature point expected to be present, is calculated. Finally, it is determined whether each feature point detected at the current time step lies within the uncertainty bound of the predicted feature point; the extracted points that fall inside the uncertainty boundary are the feature points to be matched. After the proposed filtering process, the remaining feature points are reliable enough to mitigate the mismatching problem, because the final feature candidates are propagated by an accurate velocity model and bounded by its uncertainty. The proposed algorithm is verified by simulation using real flight experimental data; the flight vehicle is a DJI Mavic Pro with an embedded down-looking gimbaled camera. The simulation results include the extracted feature points with their boundaries, as well as the navigation accuracy with its covariance. We combine the advantages of an inertial navigation system and vision-based navigation to mitigate the mismatching problem in scene matching, because the inertial navigation system is very accurate over a short time and scene matching provides a bounded solution. We define trackable and bounded features as time-invariant feature points and use the relative velocity model to estimate their locations. The selected feature points are used in the scene-matching algorithm and are bounded by the model uncertainty. Finally, the proposed algorithm is verified by a simulation using real flight experimental data.
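A minimal Python sketch of the propagation-and-gating steps described in the abstract is given below. It is not the authors' implementation: the function names (propagate_features, uncertainty_radius, gate_features), the pinhole scaling by focal length over altitude, and the 3-sigma circular bound are illustrative assumptions used to show how feature positions propagated by a relative velocity model and an uncertainty boundary could filter the candidates passed to database matching.

```python
"""Illustrative sketch (not the paper's code) of relative-velocity
feature propagation and uncertainty-bound gating for scene matching."""
import numpy as np

def propagate_features(prev_px, v_body, a_body, dt, focal_px, altitude):
    """Predict pixel positions of tracked features at the current frame.

    Ground features appear to move opposite to the vehicle, so the predicted
    image motion is -(v*dt + 0.5*a*dt^2) scaled from metres to pixels by the
    pinhole relation f/h (down-looking camera assumed, attitude neglected).
    """
    disp_m = v_body[:2] * dt + 0.5 * a_body[:2] * dt**2   # vehicle ground displacement (m)
    disp_px = -(focal_px / altitude) * disp_m              # opposite image motion (px)
    return prev_px + disp_px                               # shape (N, 2)

def uncertainty_radius(sigma_v, sigma_a, dt, focal_px, altitude, sigma_px=1.0):
    """Pixel-radius bound derived from velocity/acceleration uncertainty.

    A simple 3-sigma circular bound is assumed here; the paper's exact
    covariance model is not reproduced.
    """
    sigma_m = sigma_v * dt + 0.5 * sigma_a * dt**2
    return 3.0 * np.hypot((focal_px / altitude) * sigma_m, sigma_px)

def gate_features(pred_px, curr_px, radius):
    """Keep only current detections inside the predicted uncertainty bound.

    A detection is accepted if it lies within `radius` pixels of its
    propagated location; only these "time-invariant" features are passed
    on to the database matching stage.
    """
    dist = np.linalg.norm(curr_px - pred_px, axis=1)
    return curr_px[dist <= radius]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.uniform(0, 1000, size=(50, 2))            # tracked features (px)
    v, a = np.array([12.0, 0.0, 0.0]), np.zeros(3)       # m/s, m/s^2 (illustrative)
    dt, f_px, h = 0.1, 800.0, 100.0                      # s, pixels, metres

    pred = propagate_features(prev, v, a, dt, f_px, h)
    curr = pred + rng.normal(0.0, 1.5, size=pred.shape)  # re-detected features
    kept = gate_features(pred, curr, uncertainty_radius(0.2, 0.05, dt, f_px, h))
    print(f"{len(kept)} of {len(curr)} features accepted for database matching")
```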
Pages: 2512-2520
Number of pages: 9