HGS-Mapping: Online Dense Mapping Using Hybrid Gaussian Representation in Urban Scenes

Cited by: 0
Authors
Wu, Ke [1 ,2 ]
Zhang, Kaizhao [1 ]
Zhang, Zhiwei [1 ]
Tie, Muer [1 ]
Yuan, Shanshuai [1 ]
Zhao, Jieru [3 ]
Gan, Zhongxue [1 ]
Ding, Wenchao [1 ]
Affiliations
[1] Fudan Univ, Acad Engn & Technol, Shanghai 200000, Peoples R China
[2] State Key Lab Intelligent Vehicle Safety Technol, Chongqing 400000, Peoples R China
[3] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200000, Peoples R China
Source
IEEE ROBOTICS AND AUTOMATION LETTERS
Funding
National Natural Science Foundation of China;
Keywords
Mapping; RGB-D perception; sensor fusion;
DOI
10.1109/LRA.2024.3460410
Chinese Library Classification (CLC) number
TP24 [Robotics];
Discipline classification code
080202 ; 1405 ;
Abstract
Online dense mapping of urban scenes is a fundamental cornerstone of scene understanding and navigation for autonomous vehicles. Recent dense mapping methods are mainly based on NeRF, whose rendering speed is too slow to meet online requirements. 3D Gaussian Splatting (3DGS), which renders hundreds of times faster than NeRF, holds greater potential for online dense mapping. However, integrating 3DGS into a street-view dense mapping framework still faces two challenges: incomplete reconstruction due to the absence of geometric information beyond the LiDAR coverage area, and the extensive computation required to reconstruct large urban scenes. To this end, we propose HGS-Mapping, an online dense mapping framework for unbounded large-scale scenes. To attain complete reconstruction, our framework introduces a Hybrid Gaussian Representation that models different parts of the scene with Gaussians of distinct properties. Furthermore, we employ a hybrid Gaussian initialization mechanism and an adaptive update method to achieve high-fidelity and rapid reconstruction. To the best of our knowledge, we are the first to integrate a Gaussian representation into online dense mapping of urban scenes. Our approach achieves SOTA reconstruction accuracy while employing only 66% of the Gaussians, leading to a 20% faster reconstruction speed.
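The abstract describes a Hybrid Gaussian Representation in which different parts of the scene are modeled by Gaussians with distinct properties. As a rough illustration only, the sketch below shows one way such a hybrid container could be organized in Python; the sky/road/roadside split, the flattened planar Gaussians, and every class and function name are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch: a container holding Gaussians with different
# parameterizations for different parts of an urban scene.  The split into
# "sky" / "road" / "roadside" groups and all field names are illustrative
# assumptions, not the HGS-Mapping implementation.
from dataclasses import dataclass, field
import numpy as np


@dataclass
class GaussianGroup:
    """One homogeneous set of Gaussians sharing a parameterization."""
    means: np.ndarray        # (N, 3) centers
    scales: np.ndarray       # (N, 3) per-axis extents; a near-zero axis
                             # yields a flattened, surface-like Gaussian
    rotations: np.ndarray    # (N, 4) unit quaternions
    colors: np.ndarray       # (N, 3) RGB
    opacities: np.ndarray    # (N, 1)

    def __len__(self) -> int:
        return self.means.shape[0]


@dataclass
class HybridGaussianMap:
    """Scene map assembled from several groups with distinct properties."""
    groups: dict = field(default_factory=dict)

    def add_group(self, name: str, group: GaussianGroup) -> None:
        self.groups[name] = group

    def all_gaussians(self) -> GaussianGroup:
        """Concatenate every group into one flat set for rasterization."""
        cat = lambda attr: np.concatenate(
            [getattr(g, attr) for g in self.groups.values()], axis=0)
        return GaussianGroup(cat("means"), cat("scales"), cat("rotations"),
                             cat("colors"), cat("opacities"))


def make_group(n: int, flatten_z: bool = False) -> GaussianGroup:
    """Randomly initialized group; flatten_z squashes the vertical axis,
    giving thin 'surfel-like' Gaussians suited to planar regions."""
    scales = np.abs(np.random.randn(n, 3)) * 0.1
    if flatten_z:
        scales[:, 2] *= 1e-3
    quats = np.tile(np.array([1.0, 0.0, 0.0, 0.0]), (n, 1))
    return GaussianGroup(
        means=np.random.randn(n, 3),
        scales=scales,
        rotations=quats,
        colors=np.random.rand(n, 3),
        opacities=np.random.rand(n, 1),
    )


if __name__ == "__main__":
    scene = HybridGaussianMap()
    scene.add_group("road", make_group(1000, flatten_z=True))  # planar
    scene.add_group("roadside", make_group(4000))              # full 3D
    scene.add_group("sky", make_group(500))                    # far-field
    print("total Gaussians:", len(scene.all_gaussians()))
```

Keeping each group in its own structure is one plausible way to give different scene regions distinct parameter sets while still flattening them into a single array for splatting-style rendering.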
Pages: 9573-9580
Number of pages: 8