Multi-sensor Fusion for Autonomous Positioning of Indoor Robots

Cited: 0
Authors
Shuai, Zipei [1 ]
Yu, Hongyang [1 ]
Affiliation
[1] University of Electronic Science and Technology of China (UESTC), Chengdu, People's Republic of China
DOI: 10.33012/2021.17870
CLC classification number: TP7 [remote sensing technology]
Discipline codes: 081102; 0816; 081602; 083002; 1404
Abstract
Accurate autonomous positioning of mobile robots in indoor environments is the basis for indoor robot navigation applications. Achieving it requires choosing, from the existing autonomous-localization algorithms, those suited to indoor conditions. Many such algorithms have been proposed, but they often suffer from high computational complexity or from accuracy and stability problems caused by incorrect depth matching; some technologies, such as satellite positioning, cannot be used indoors at all, and others, such as geomagnetic positioning, place very strict requirements on the scene: in an indoor environment with severe magnetic disturbance, geomagnetic positioning fails and accurate localization becomes impossible. To obtain a more robust and efficient autonomous localization method for indoor mobile robots, this paper proposes an algorithm that fuses a monocular camera with LiDAR (Light Detection and Ranging) to localize an indoor mobile robot autonomously in a known scene, making full use of the information provided by a deep learning model and by the laser point cloud. First, the monocular camera is used to build the training data set: the 3D point cloud map of the known scene is projected into a grid map, suitable coordinate axes are defined for the grid map, the precise coordinates of each grid cell are determined, a number of pictures of the scene are taken with the monocular camera, and each picture is assigned to a grid cell, so that every picture has specific coordinates corresponding to it. After the training set is established, a deep learning algorithm is used to train a model that determines, from a two-dimensional RGB image, the coordinates of the camera and hence of the mobile robot. Finally, the estimated coordinates are refined with the robot's LiDAR information to obtain a more accurate position. The proposed localization algorithm, based on the fusion of a vision sensor and LiDAR, improves on existing single-sensor localization algorithms by combining an existing deep learning model with the information provided by LiDAR. It effectively improves the positioning accuracy and working efficiency of indoor mobile robots, provides a stronger guarantee for navigation and other operations in indoor scenes, and promotes the development of the robot industry.
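The dataset-construction step described in the abstract (project the known point cloud onto a grid, give every cell precise coordinates, and label each camera image with the coordinates of the cell where it was taken) can be sketched as follows. This is a minimal illustration under assumed details, not the authors' implementation: the 0.25 m cell size, the function names, and the PoseRegressor architecture are all placeholders.

```python
import numpy as np
import torch
import torch.nn as nn

CELL_SIZE = 0.25  # grid resolution in metres (assumed; not specified in the paper)

def point_cloud_to_grid(points: np.ndarray, cell_size: float = CELL_SIZE):
    """Project a 3D point cloud of shape (N, 3) onto a 2D occupancy grid.

    Returns the grid plus the world coordinates of cell (0, 0), so every
    cell index maps back to precise metric coordinates.
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)                             # world position of cell (0, 0)
    idx = np.floor((xy - origin) / cell_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1]] = 1                      # mark cells that contain map points
    return grid, origin

def label_image(capture_xy: np.ndarray, origin: np.ndarray,
                cell_size: float = CELL_SIZE) -> np.ndarray:
    """Assign a training photo the centre coordinates of the grid cell in
    which it was captured; (image, coordinates) is one training sample."""
    cell = np.floor((capture_xy - origin) / cell_size)
    return origin + (cell + 0.5) * cell_size

class PoseRegressor(nn.Module):
    """Stand-in CNN that regresses (x, y) from an RGB image; the actual
    network used by the authors is not described in the abstract."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # pool to a 32-d feature vector
        )
        self.head = nn.Linear(32, 2)                    # regress (x, y)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(img).flatten(1))
```

Training such a regressor would then minimize, for example, the mean squared error between the predicted coordinates and the grid-cell labels produced by label_image.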
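The abstract does not detail how the LiDAR information corrects the coarse estimate. One plausible reading is a local scan-to-map correlation search around the position predicted by the image model; the sketch below follows that assumption, and refine_with_lidar together with its search window and step size are hypothetical.

```python
import numpy as np

def refine_with_lidar(coarse_xy: np.ndarray, scan_xy: np.ndarray,
                      grid: np.ndarray, origin: np.ndarray,
                      cell_size: float = 0.25,
                      search_radius: float = 0.5, step: float = 0.05) -> np.ndarray:
    """Slide the current LiDAR scan over the known occupancy grid in a small
    window around the CNN estimate and keep the offset whose scan endpoints
    land on the most occupied map cells.

    coarse_xy : (2,)  coarse position from the image model.
    scan_xy   : (N, 2) scan endpoints in the robot frame (heading assumed
                known, e.g. from odometry, so only translation is searched).
    """
    offsets = np.arange(-search_radius, search_radius + 1e-9, step)
    best_xy, best_score = coarse_xy, -1
    for dx in offsets:
        for dy in offsets:
            candidate = coarse_xy + np.array([dx, dy])
            # transform scan endpoints into map cells for this candidate pose
            cells = np.floor((scan_xy + candidate - origin) / cell_size).astype(int)
            inside = ((cells >= 0) & (cells < np.array(grid.shape))).all(axis=1)
            score = grid[cells[inside, 0], cells[inside, 1]].sum()
            if score > best_score:
                best_xy, best_score = candidate, score
    return best_xy
```

An exhaustive translation search over a small window stays cheap because the image model has already narrowed the position to a single grid cell; a full scan matcher such as ICP could replace it if the heading were also uncertain.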
Pages: 105-112
Page count: 8