Warehouse AGV Navigation Based on Multimodal Information Fusion

Cited: 0
Authors
Zhang Bo [1 ]
Zhang Yinlong [2 ,3 ,4 ]
Liang Wei [2 ,3 ,4 ]
Wang Xin [1 ]
Yang Yutuo [2 ,3 ,4 ]
Affiliations
[1] Shenyang Jianzhu Univ, Sch Elect & Control Engn, Shenyang 110168, Liaoning, Peoples R China
[2] Chinese Acad Sci, Key Lab Networked Control Syst, Shenyang 110169, Liaoning, Peoples R China
[3] Chinese Acad Sci, Inst Robot & Intelligent Mfg Innovat, Shenyang 110169, Liaoning, Peoples R China
[4] Chinese Acad Sci, Shenyang Inst Automat, State Key Lab Robot, Shenyang 110169, Liaoning, Peoples R China
Keywords
LiDAR; simultaneous localization and mapping; inertial measurement unit preintegration; quick response code; factor graph optimization;
DOI
10.3788/AOS231613
CLC number
O43 [Optics];
Discipline classification codes
070207 ; 0803 ;
Abstract
Objective In the rapidly evolving domain of warehouse logistics, the deployment of automated guided vehicles (AGVs) with advanced navigation capabilities is becoming increasingly essential. This research is driven by the need to address significant challenges in existing laser-inertial navigation systems used in warehouse environments. These challenges include susceptibility to inertial bias drift, compromised real-time performance, and reduced pose estimation accuracy, particularly in areas with repetitive structures or dynamic environmental changes. The study aims not only to enhance the operational efficiency of AGVs but also to contribute significantly to the broader field of industrial automation and intelligent robotic systems. By improving the precision and reliability of AGV navigation, the research endeavors to optimize warehouse operations, reduce operational costs, and increase throughput. This objective is critical in addressing the limitations of current navigation systems and ensuring the adaptability and effectiveness of AGVs in complex warehouse settings, thereby contributing to the evolution of automated logistics and enhancing overall supply chain management.

Methods A comprehensive methodology was developed to enhance AGV navigation in warehouse environments, integrating a multimodal fusion of light detection and ranging (LiDAR), inertial measurement unit (IMU), and quick response (QR) code technologies. This fusion approach was engineered to synergistically combine the unique strengths of each sensing modality, thereby overcoming the inherent limitations of traditional laser-inertial navigation systems. In the warehouse setting, QR codes were strategically affixed to the floor at intervals of 1200 mm. When an AGV scanned a QR code, the system received precise positional and angular information, providing an essential absolute reference for recalibrating the AGV's navigational state.
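The QR-code recalibration step can be illustrated with a small sketch. This is an assumed 2D formulation, not the paper's implementation; the function name and frame conventions (`recalibrate_pose`, a camera-frame offset of the code) are hypothetical:

```python
import math

# Hypothetical sketch (not the paper's implementation): when the downward-facing
# camera decodes a floor QR code, the code's surveyed pose (x, y, theta) combined
# with the code's measured offset in the camera frame yields an absolute AGV pose
# that replaces the drifting dead-reckoned estimate.

def recalibrate_pose(qr_world_pose, qr_offset_in_cam):
    """qr_world_pose: (x, y, theta) of the QR code in the warehouse frame.
    qr_offset_in_cam: (dx, dy, dtheta) of the code relative to the camera,
    measured from the decoded image. Returns an absolute AGV pose."""
    qx, qy, qt = qr_world_pose
    dx, dy, dt = qr_offset_in_cam
    # Invert the camera-relative observation and express it in the world frame.
    heading = qt - dt
    x = qx - (math.cos(heading) * dx - math.sin(heading) * dy)
    y = qy - (math.sin(heading) * dx + math.cos(heading) * dy)
    return (x, y, heading)

# Codes laid out on a 1200 mm grid, as in the paper's setup; a centered scan
# with zero offset simply resets the pose to the code's surveyed pose.
pose = recalibrate_pose((2400.0, 1200.0, 0.0), (0.0, 0.0, 0.0))
```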
Furthermore, IMUs were calibrated using QR code data to compensate for inertial bias drift, significantly enhancing inertial measurement accuracy. In addition to the inertial residuals, a reprojection error between the 3D point q at position x_q and frame i was defined, incorporating the error from the downward-facing QR code sensor on top of the laser reprojection error. Building on the bundle adjustment for LiDAR mapping (BALM) algorithm, an innovative layered local bundle adjustment (BA) optimization process integrated with QR code data was introduced. This process streamlined the BA procedure, markedly reducing computational load and optimization time. The optimization was structured from the bottom layer to the top, with each layer consisting of a set number of LiDAR frames. Keyframes within these layers, particularly those identified through QR code scans, were used to construct a more precise and consistent global trajectory for the AGV. During the layered BA optimization, specific keyframes within each window were held fixed and excluded from the optimization. Following this layered optimization, a top-down pose graph optimization was implemented, crucial for minimizing cumulative pose estimation errors that might have propagated through the bottom-up optimization process. This phase considered common features within each window of frames, particularly frames associated with QR code scans. The fixed positions from QR code scans ensured high confidence in pose estimates, significantly enhancing the overall accuracy of the navigation system. This dual optimization process effectively addressed the scale drift and long runtimes commonly encountered in incremental mapping methods, ensuring a more accurate and efficient navigation system for AGVs.
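The layered-BA idea of holding QR-derived keyframes fixed while relaxing the remaining poses against relative constraints can be illustrated with a toy 1D sketch. This is an assumption for illustration only; the paper optimizes full LiDAR poses, not scalar positions:

```python
# Toy 1D illustration (assumed, not the paper's formulation) of the layered BA
# idea: within each window, poses observed via QR codes are held fixed while the
# remaining keyframe poses are relaxed against relative odometry constraints.

def optimize_window(poses, odom, fixed, iters=200):
    """poses: initial 1D keyframe positions; odom[i] is the measured displacement
    from keyframe i to i+1; fixed: indices held constant (QR-derived poses)."""
    p = list(poses)
    for _ in range(iters):
        # Gauss-Seidel-style sweep; the anchors pull drifted middle poses back.
        for i in range(len(p)):
            if i in fixed:
                continue  # QR-derived keyframes do not participate in the update
            preds = []
            if i > 0:
                preds.append(p[i - 1] + odom[i - 1])  # prediction from left neighbor
            if i < len(p) - 1:
                preds.append(p[i + 1] - odom[i])      # prediction from right neighbor
            p[i] = sum(preds) / len(preds)
    return p

# Drifted initial guesses between two QR-anchored poses at 0 mm and 3600 mm,
# with consistent 1200 mm odometry steps between keyframes.
result = optimize_window([0.0, 1250.0, 2450.0, 3600.0],
                         odom=[1200.0, 1200.0, 1200.0],
                         fixed={0, 3})
```

With consistent measurements and two fixed anchors, the free poses converge to the 1200 mm grid; the QR anchors keep the window from drifting as a whole.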
The integration of QR code data not only provided high positional accuracy but also contributed to the robustness and reliability of the AGV navigation system in complex warehouse environments.

Results and Discussions In our research, we address the challenge of inertial bias drift by proposing an IMU preintegration model integrated with QR code data. This model utilizes the rigid constraint information provided by QR codes to update inertial biases. By considering inertial residuals and jointly optimizing the errors from the laser-inertial and downward-facing camera systems, we establish a robust initial state estimation using the absolute pose derived from the QR codes captured by the downward-facing camera. This approach ensures a solid starting point for the joint optimization, accelerating convergence and enhancing the accuracy of the estimates. Experimental validations have been conducted on linear and rectangular trajectories. The performance of our method is compared with open-source algorithms such as LeGO-LOAM, BALM, LIO-SAM, and LIC-Fusion2. Notably, as the trajectory length increases from 24000 mm to 60000 mm, the absolute translational and rotational errors of our method grow by only approximately 2 mm and 0.5 degrees, respectively. This represents a 1.4-fold improvement in overall positioning accuracy (Table 2 and Table 3). To address real-time performance, we propose a globally consistent optimization model, selectively incorporating keyframes and QR codes to execute a layered local BA optimization from the bottom layer to the top. This process significantly enhances the consistency and precision of LiDAR mapping and AGV positioning. During the layered optimization, the poses of specific keyframes (derived from QR code solutions) are held constant and excluded from the optimization, ensuring accuracy while significantly reducing optimization time.
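The bias-update idea, recovering an inertial bias from the discrepancy between preintegrated motion and a QR-measured fix, can be sketched in one dimension. This is an assumed simplification; the paper performs full IMU preintegration, and both function names here are hypothetical:

```python
# Minimal 1D sketch (assumed names, not the paper's formulation): preintegrate
# raw accelerometer samples between two QR fixes, then solve for the constant
# bias that reconciles the preintegrated displacement with the QR-measured one.

def preintegrate(accels, dt, bias=0.0, v0=0.0):
    """Integrate 1D accelerometer samples (bias removed) into a displacement."""
    v, x = v0, 0.0
    for a in accels:
        v += (a - bias) * dt
        x += v * dt
    return x

def estimate_bias(accels, dt, measured_disp, v0=0.0):
    """Recover a constant bias from a QR-measured displacement. Displacement is
    affine in the bias, so one sensitivity evaluation gives a closed form."""
    disp0 = preintegrate(accels, dt, 0.0, v0)
    # Sensitivity of the displacement to a unit bias (zero input, bias = 1).
    sens = preintegrate([0.0] * len(accels), dt, 1.0, 0.0)
    return (measured_disp - disp0) / sens

# True acceleration 1.0 m/s^2 corrupted by a 0.05 m/s^2 bias; the QR fix
# supplies the true displacement, from which the bias is recovered.
true_disp = preintegrate([1.0] * 10, 0.1)
bias = estimate_bias([1.05] * 10, 0.1, true_disp)
```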
In our experimental setup within a warehouse logistics environment, our algorithm demonstrates a substantial improvement in time efficiency, outperforming LeGO-LOAM, BALM, LIO-SAM, and LIC-Fusion2 by 49.40%, 20.03%, 19.95%, and 37.29%, respectively (Table 4). Finally, leveraging factor graph optimization, we propose a globally consistent navigation framework that fuses laser-inertial and QR code data. This framework integrates preintegration factors, tracking factors, loop closure factors, and QR code factors into the factor graph model, realizing multi-level data fusion. This approach effectively reduces cumulative errors and provides a globally consistent AGV navigation outcome (Fig. 4). This navigation system represents a significant advancement in AGV technology, offering enhanced accuracy, efficiency, and consistency in complex warehouse environments.

Conclusions To address the challenges inherent in laser-inertial navigation methods in warehouse logistics environments, such as inertial bias drift, poor real-time performance, and low pose estimation accuracy in degraded scenarios, we present a precise laser-inertial-QR fusion navigation method for autonomous and accurate AGV navigation in warehouse logistics settings. By integrating the IMU preintegration model with QR data and employing a globally consistent optimization approach, we successfully estimate and correct inertial biases while reducing optimization time. The tight coupling of LiDAR, IMU, and QR code data facilitates multi-level data fusion, significantly enhancing positioning accuracy and robustness. The method has been extensively compared with leading laser-inertial navigation methods on a developed navigation platform. Experimental results demonstrate the superior time efficiency and reduced pose errors of the algorithm, which maintains translational and rotational errors below 0.02 m and 2 degrees, respectively, regardless of trajectory length.
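The effect of combining preintegration, tracking, loop closure, and QR code factors can be illustrated with a scalar, information-weighted sketch. All numbers and the variance values are hypothetical, and a real factor graph solves a joint nonlinear problem rather than a single weighted mean:

```python
# Schematic sketch (not the paper's solver): each factor contributes a quadratic
# residual on a scalar pose x, so the fused estimate is the information-weighted
# (inverse-variance-weighted) combination of the factor measurements.

def fuse_factors(factors):
    """factors: list of (measurement, variance) pairs. Returns the minimizer of
    sum((x - m)^2 / var), i.e. the maximum a posteriori estimate."""
    num = sum(m / var for m, var in factors)
    den = sum(1.0 / var for m, var in factors)
    return num / den

# A low-variance QR code factor dominates the looser odometry-style factors,
# pulling the fused pose toward the absolute 1200 mm reference.
x = fuse_factors([(1195.0, 100.0),   # IMU preintegration factor (hypothetical)
                  (1210.0, 50.0),    # LiDAR tracking factor (hypothetical)
                  (1198.0, 80.0),    # loop closure factor (hypothetical)
                  (1200.0, 1.0)])    # QR code factor: high confidence
```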
Future research will explore deeper multi-sensor fusion by integrating visual sensors to further enhance navigational accuracy. This includes capturing feature points with high-precision cameras and jointly optimizing them with laser and IMU data using visual SLAM techniques, thereby strengthening system performance under variable lighting or in feature-deprived scenarios. Additionally, the development of a real-time adaptive calibration method within the multi-sensor fusion algorithm is under consideration. This method aims to utilize real-time sensor data for continuous adjustment of sensor model parameters. The key lies in employing advanced filtering techniques, such as Kalman filters or particle filters, to estimate and correct sensor errors in real time, potentially achieving significant improvements in system accuracy and reliability.
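The proposed filter-based adaptive calibration could look like the following scalar Kalman-filter sketch. All parameter values, the function name, and the choice of a random-walk bias model are assumptions for illustration, not the paper's design:

```python
# Illustrative 1D Kalman filter (assumed parameters) for real-time adaptive
# calibration: model a slowly varying sensor bias as the state and treat each
# QR-referenced measurement residual as a direct, noisy observation of it.

def kalman_bias_tracker(errors, q=1e-4, r=0.25, b0=0.0, p0=1.0):
    """errors: (measured - reference) residuals observed at QR fixes.
    q: process noise (bias random walk), r: observation noise variance.
    Returns the bias estimate after each observation."""
    b, p = b0, p0
    history = []
    for z in errors:
        p += q                    # predict: bias drifts as a random walk
        k = p / (p + r)           # Kalman gain
        b += k * (z - b)          # update the bias estimate toward the residual
        p *= (1.0 - k)            # posterior variance
        history.append(b)
    return history

# Residuals scattered around a true bias of 0.3: the estimate converges to it.
est = kalman_bias_tracker([0.31, 0.29, 0.30, 0.32, 0.28, 0.30])
```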
Pages: 13
References
28 items in total
  • [1] Stereo Visual Inertial Odometry for Robots with Limited Computational Resources
    Bahnam, Stavrow
    Pfeiffer, Sven
    de Croon, Guido C. H. E.
    [J]. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 9154 - 9159
  • [2] Optimized Modulation and Coding for Dual Modulated QR Codes
    Barron, Irving R.
    Sharma, Gaurav
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 2800 - 2810
  • [3] Bhattacharjee A, 2023, Lecture notes in electrical engineering, V927, P805
  • [4] BoW3D: Bag of Words for Real-Time Loop Closing in 3D LiDAR SLAM
    Cui, Yunge
    Chen, Xieyuanli
    Zhang, Yinlong
    Dong, Jiahua
    Wu, Qingxiao
    Zhu, Feng
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (05): : 2828 - 2835
  • [5] On-Manifold Preintegration for Real-Time Visual-Inertial Odometry
    Forster, Christian
    Carlone, Luca
    Dellaert, Frank
    Scaramuzza, Davide
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2017, 33 (01) : 1 - 21
  • [6] Ganesan Prakash, 2022, 2022 Algorithms, Computing and Mathematics Conference (ACM), P42, DOI 10.1109/ACM57404.2022.00015
  • [7] Real-Time Hybrid Mapping of Populated Indoor Scenes using a Low-Cost Monocular UAV
    Golodetz, Stuart
    Vankadari, Madhu
    Everitt, Aluna
    Shin, Sangyun
    Markham, Andrew
    Trigoni, Niki
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 325 - 332
  • [8] Real-Time Dead Reckoning and Mapping Approach Based on Three-Dimensional Point Cloud
    Li, Shuaixin
    Li, Guangyun
    Zhou, Yanglin
    Wang, Li
    Fu, Jingyang
    [J]. CHINA SATELLITE NAVIGATION CONFERENCE (CSNC) 2018 PROCEEDINGS, VOL III, 2018, 499 : 643 - 662
  • [9] Optimization-Based Visual-Inertial SLAM Tightly Coupled with Raw GNSS Measurements
    Liu, Jinxu
    Gao, Wei
    Hu, Zhanyi
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 11612 - 11618
  • [10] BALM: Bundle Adjustment for Lidar Mapping
    Liu, Zheng
    Zhang, Fu
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02): : 3184 - 3191