A graph-matching approach for cross-view registration of over-view and street-view based point clouds

Cited by: 10
Authors
Ling, Xiao [1 ,2 ]
Qin, Rongjun [1 ,2 ,3 ,4 ]
Affiliations
[1] Ohio State Univ, Geospatial Data Analyt Lab, 218B Bolz Hall,2036 Neil Ave, Columbus, OH 43210 USA
[2] Ohio State Univ, Dept Civil Environm & Geodet Engn, 218B Bolz Hall,2036 Neil Ave, Columbus, OH 43210 USA
[3] Ohio State Univ, Dept Elect & Comp Engn, 205 Dreese Lab,2036 Neil Ave, Columbus, OH 43210 USA
[4] Ohio State Univ, Translat Data Analyt Inst, Columbus, OH 43210 USA
Keywords
Cross-view registration; Global optimization; Multi-view satellite image; Digital surface model; 3D; Segmentation; Generation; Images
DOI
10.1016/j.isprsjprs.2021.12.013
Chinese Library Classification
P9 [Physical Geography]
Discipline code
0705; 070501
Abstract
Wide-area 3D data generation for complex urban environments often needs to leverage a mix of data collected from both air and ground platforms, such as aerial surveys, satellites, and mobile vehicles. On one hand, data acquired from drastically different viewing directions (ca. 90 degrees apart or more), termed cross-view data, are difficult to register without significant manual effort, because the drastically different lines of sight of the sensors leave only very limited overlapping regions. On the other hand, the registration of such data often suffers from non-rigid distortion of the street-view data (e.g., non-rigid trajectory drift), which cannot be rectified by a simple similarity transformation. In this paper, based on the assumption that object boundaries (e.g., of buildings) in the over-view data should coincide with the footprints of facade 3D points generated from street-view photogrammetric images, we address this problem by proposing a fully automated georegistration method for cross-view data, which uses semantically segmented object boundaries as view-invariant features within a global optimization framework based on graph matching. Taking over-view point clouds generated from stereo/multi-stereo satellite images and street-view point clouds generated from monocular video images as inputs, the proposed method models building segments, detected in both the satellite-based and the street-view-based point clouds, as nodes of graphs, thereby casting the registration as a graph-matching problem that allows non-rigid matches. To obtain a robust solution and fully exploit the topological relations between these segments, we propose to solve the graph-matching problem on its conjugate graph using a belief-propagation algorithm.
The matched nodes are then subject to a further optimization for precise registration, followed by a constrained bundle adjustment of the street-view images to maintain 2D-3D consistency, which yields street-view images and point clouds that are well registered to the satellite point clouds. The proposed method assumes little or no prior pose information for the street-view data (e.g., only very sparse locations from a consumer-grade GPS (Global Positioning System)) and has been applied to a large cross-view dataset with a significant scale difference, comprising 0.5 m GSD (ground sampling distance) satellite data and 0.005 m GSD street-view data over a 1.5 km stretch, totaling 12 GB of data. The experiments show that the proposed method achieves promising results (1.27 m accuracy in 3D), evaluated against collected LiDAR point clouds. Furthermore, additional experiments demonstrate that the method generalizes to other types of over-view and street-view data sources, e.g., open street-view maps and semantic labeling maps.
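The core idea of the abstract — model building segments as graph nodes and match the graphs via their conjugate (line) graphs, where each original edge becomes a node — can be illustrated with a minimal sketch. The centroids below are purely hypothetical, the nearest-neighbour edge construction is a stand-in for the paper's topology extraction, and the greedy edge-length matcher is a simplified substitute for the belief-propagation solver described in the paper:

```python
import math

# Hypothetical 2D centroids of building segments (illustrative only,
# not from the paper's dataset)
over_cents   = {0: (0.0, 0.0), 1: (10.0, 0.0), 2: (10.0, 8.0)}
street_cents = {0: (0.4, 0.3), 1: (10.2, -0.2), 2: (9.7, 8.3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def build_edges(cents, k=2):
    """Connect each segment to its k nearest neighbours (a crude
    proxy for the topological relations between segments)."""
    edges = set()
    for i in cents:
        nbrs = sorted((j for j in cents if j != i),
                      key=lambda j: dist(cents[i], cents[j]))
        for j in nbrs[:k]:
            edges.add(tuple(sorted((i, j))))
    return sorted(edges)

# In the conjugate (line) graph, every edge of the original graph
# becomes a node; matching these nodes matches pairs of segments
# jointly, which exposes relational (length) constraints.
e_over, e_street = build_edges(over_cents), build_edges(street_cents)

# Greedy one-to-one matching of conjugate-graph nodes by edge-length
# similarity -- a simplified stand-in for belief propagation.
matches, used = {}, set()
for a in e_over:
    cand = min((b for b in e_street if b not in used),
               key=lambda b: abs(dist(over_cents[a[0]], over_cents[a[1]])
                                 - dist(street_cents[b[0]], street_cents[b[1]])))
    matches[a] = cand
    used.add(cand)

print(matches)  # each over-view segment pair mapped to a street-view pair
```

Because each conjugate-graph node encodes a *pair* of segments, the matcher compares relative geometry (inter-segment distances) rather than absolute positions, which is what makes the formulation tolerant to the non-rigid drift of the street-view data.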
Pages: 2-15 (14 pages)