Outdoor scene understanding of mobile robot via multi-sensor information fusion

Cited by: 9
Authors
Zhang, Fu-sheng [1 ,2 ]
Ge, Dong-yuan [3 ]
Song, Jun [4 ,5 ,7 ]
Xiang, Wen-jiang [6 ]
Affiliations
[1] Changshu Inst Technol, Sch Mech Engn, Suzhou 215000, Jiangsu, Peoples R China
[2] Coll Intelligent Elevator Ind, Key Lab Intelligent Safety Elevator Univ Jiangsu P, Changshu Inst Technol, Changshu 215500, Jiangsu, Peoples R China
[3] Guangxi Univ Sci & Technol, Sch Mech & Transportat Engn, Liuzhou 545006, Peoples R China
[4] Shandong Jiaotong Univ, Sch Civil Engn, Jinan 250357, Shandong, Peoples R China
[5] China Commun Second Highway Survey Design & Res In, Wuhan 430050, Hubei, Peoples R China
[6] Shaoyang Univ, Sch Mech & Energy Engn, Shaoyang 422004, Peoples R China
[7] Huazhong Univ Sci & Technol, Sch Civil Engn, Wuhan 430074, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Mobile robot; Multi-sensor; Information fusion technology; Outdoor scene fusion; Scene recognition
DOI
10.1016/j.jii.2022.100392
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Subject Classification Code
081203; 0835
Abstract
This research on multi-sensor information fusion for mobile robots aims at a better understanding of outdoor scenes and improved perception of the environment. First, a point-cloud-to-image conversion algorithm is proposed that combines two methods, point cloud plane fitting and point cloud projection transformation. An elevation map is then constructed from the three-dimensional laser ranging data to describe the terrain characteristics of the scene, while a conditional random field (CRF) model extracts landform characteristics from the visual information. Projection transformation and information statistics are used to fuse the laser and visual information, with the grid of the elevation map serving as the carrier, and a convolutional neural network finally realizes the three-dimensional scene understanding. The average recognition rate of the outdoor scene understanding model based on multi-sensor information fusion reaches 89.36%, and the image segmentation time of the proposed algorithm does not exceed 180 ms; the latest results rely on the use of SSAE in combination with the CRF algorithm. Overall, the proposed model improves the real-time performance of the mobile robot without sacrificing accuracy and, through the construction of multi-sensor information, achieves the ability to recognize and analyze complex scenes. This study is of practical significance for promoting the development of autonomous mobile robots.
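As a rough illustration of the elevation-map step described in the abstract, the sketch below rasterizes a 3D laser point cloud into a 2D grid and keeps the highest return per cell. This is a minimal assumption-based example in Python/NumPy, not code from the paper; the grid resolution, map extent, and the build_elevation_map name are illustrative choices.

import numpy as np

# Minimal sketch (assumed parameters, not from the paper): rasterize a LiDAR
# point cloud into an elevation map, keeping the maximum z value per grid cell.
def build_elevation_map(points, cell_size=0.2, x_range=(-20.0, 20.0), y_range=(0.0, 40.0)):
    """points: (N, 3) array of (x, y, z) laser returns in the robot frame."""
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)
    elevation = np.full((ny, nx), -np.inf)  # -inf marks cells with no return yet

    # Grid indices of every point; discard points outside the map extent.
    ix = np.floor((points[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / cell_size).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    ix, iy, z = ix[valid], iy[valid], points[valid, 2]

    # Keep the highest return per cell as that cell's elevation.
    np.maximum.at(elevation, (iy, ix), z)

    elevation[np.isinf(elevation)] = np.nan  # cells never hit by the laser
    return elevation

# Example with a synthetic cloud; a real system would use registered laser scans.
cloud = np.random.uniform(low=[-20.0, 0.0, -1.0], high=[20.0, 40.0, 2.0], size=(10000, 3))
grid = build_elevation_map(cloud)

In the paper's pipeline, these grid cells additionally act as the carrier for the fusion step, accumulating statistics of the visually labeled pixels projected into them; that part is omitted from this sketch.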
Pages: 14