MVINS: Tightly Coupled Mocap-Visual-Inertial Fusion for Global and Drift-Free Pose Estimation

Cited by: 1
Authors
Liu, Meng [1 ]
Xie, Liang [2 ]
Wang, Wei [1 ]
Shi, Zhongchen [2 ]
Chen, Wei [2 ]
Yan, Ye [2 ]
Yin, Erwei [1 ,2 ]
Affiliations
[1] Harbin Engineering University, College of Intelligent Systems Science and Engineering, Harbin 150001, People's Republic of China
[2] Academy of Military Sciences, Defense Innovation Institute, Beijing 100071, People's Republic of China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 11
Funding
National Natural Science Foundation of China
Keywords
Markerless motion capture (Mocap); pose estimation; sensor fusion; visual-inertial navigation system (VINS); ROBUST
DOI
10.1109/JIOT.2024.3367417
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Augmented reality (AR), a prominent application within the Internet of Things (IoT) domain, demands high-performance pose estimation. Presently, the visual-inertial navigation system (VINS) is acknowledged as an essential method for providing 6-DoF poses. However, VINS establishes its local frame arbitrarily during initialization, making it difficult to relate that frame to the global frame. In addition, VINS is prone to drift. In this article, we propose an innovative method that tightly couples markerless motion capture (Mocap) with vision and an inertial measurement unit (IMU) to achieve global and drift-free pose estimation for AR glasses. To address the issue of pose initialization and establish a connection between the IMU and Mocap, we introduce a coarse-to-fine initialization strategy, enabling fusion of Mocap, visual, and IMU data in a unified global frame. Furthermore, we formulate a Mocap factor alongside the visual and inertial factors and integrate them into a factor graph framework to constrain the system states. With a spatiotemporal calibration method, the IMU-Mocap extrinsic parameter and time offset are calibrated online to improve pose estimation accuracy. Real-world experiments demonstrate that our method accurately estimates drift-free poses in the global frame. Compared to the state-of-the-art VINS-Fusion, ORB-SLAM3, and GVINS, we achieve improvements of 81%, 42%, and 33% in translation accuracy and of 58%, 33%, and 72% in rotation accuracy, respectively. Moreover, we also evaluate our system on the EuRoC data set, further indicating the effectiveness of the proposed work.
Pages: 19776-19789
Number of pages: 14
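To make the fusion scheme described in the abstract concrete, below is a minimal, hypothetical sketch written against GTSAM's Python API as a stand-in for the authors' solver. The paper's dedicated Mocap factor is simplified here to a global 6-DoF pose prior on each state; the IMU-Mocap extrinsic is assumed known (identity), whereas the paper calibrates it, together with a time offset, online. All noise values and measurements are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical factor-graph sketch: Mocap readings as global pose constraints
# alongside IMU preintegration factors (GTSAM stand-in, not the authors' code).
import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, V, X

# Preintegrate 1 s of synthetic stationary IMU data at 100 Hz.
params = gtsam.PreintegrationParams.MakeSharedU(9.81)  # gravity along -z
params.setAccelerometerCovariance(np.eye(3) * 1e-3)
params.setGyroscopeCovariance(np.eye(3) * 1e-4)
params.setIntegrationCovariance(np.eye(3) * 1e-8)
bias0 = gtsam.imuBias.ConstantBias()
pim = gtsam.PreintegratedImuMeasurements(params, bias0)
for _ in range(100):
    # Specific force cancels gravity when stationary; zero angular rate.
    pim.integrateMeasurement(np.array([0.0, 0.0, 9.81]), np.zeros(3), 0.01)

graph = gtsam.NonlinearFactorGraph()

# Mocap constraints: each reading, mapped through the assumed extrinsic,
# anchors a pose state directly in the global frame. This is what fixes the
# frame at initialization and suppresses long-term drift.
T_mi = gtsam.Pose3()  # IMU-Mocap extrinsic; identity here, calibrated online in the paper
mocap_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.01, 0.01, 0.01, 0.005, 0.005, 0.005]))  # rot (rad), trans (m)
for k, T_mocap in enumerate([gtsam.Pose3(), gtsam.Pose3()]):  # two fake readings
    graph.add(gtsam.PriorFactorPose3(X(k), T_mocap.compose(T_mi), mocap_noise))

# Inertial factor linking consecutive states, plus a bias random walk and a
# velocity prior so this toy problem is fully constrained. In the full system,
# visual reprojection factors would be added here as well.
graph.add(gtsam.ImuFactor(X(0), V(0), X(1), V(1), B(0), pim))
graph.add(gtsam.BetweenFactorConstantBias(
    B(0), B(1), gtsam.imuBias.ConstantBias(),
    gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))
graph.add(gtsam.PriorFactorVector(
    V(0), np.zeros(3), gtsam.noiseModel.Isotropic.Sigma(3, 0.1)))

initial = gtsam.Values()
for k in range(2):
    initial.insert(X(k), gtsam.Pose3())
    initial.insert(V(k), np.zeros(3))
    initial.insert(B(k), bias0)

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(1)))  # optimized pose, expressed in the global frame
```

The sketch only illustrates how a global pose measurement constrains otherwise drift-prone inertial states; the paper additionally couples visual reprojection residuals in the same optimization and estimates the extrinsic and time offset as part of the state.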
Related Papers (41 in total)
  • [1] Tightly Coupled Visual-Inertial Fusion for Attitude Estimation of Spacecraft
    Yi, Jinhui
    Ma, Yuebo
    Long, Hongfeng
    Zhu, Zijian
    Zhao, Rujin
    [J]. REMOTE SENSING, 2024, 16 (16)
  • [2] TIGHTLY COUPLED VISUAL AND INERTIAL MEASUREMENTS FOR MOTION ESTIMATION
    Ma, Song-Hui
    Shi, Ming-Ming
    Wang, Peng
    [J]. JOURNAL OF RESIDUALS SCIENCE & TECHNOLOGY, 2016, 13 (01) : 120 - 126
  • [3] Drift-Free Position Estimation for Periodic Movements Using Inertial Units
    Millor, Nora
    Lecumberri, Pablo
    Gomez, Marisol
    Martinez-Ramirez, Alicia
    Izquierdo, Mikel
    [J]. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2014, 18 (04) : 1131 - 1137
  • [4] Tightly Coupled Algorithm of Visual Inertial Odometry and Magnetometer Fusion
    Liu, Shiqi
    Li, Maohai
    Lin, Rui
    Ni, Zhikang
    [J]. PROCEEDINGS OF 2020 IEEE 5TH INFORMATION TECHNOLOGY AND MECHATRONICS ENGINEERING CONFERENCE (ITOEC 2020), 2020, : 330 - 335
  • [5] GLIO: Tightly-Coupled GNSS/LiDAR/IMU Integration for Continuous and Drift-Free State Estimation of Intelligent Vehicles in Urban Areas
    Liu, Xikun
    Wen, Weisong
    Hsu, Li-Ta
    [J]. IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01): : 1412 - 1422
  • [6] Visual SLAM With Drift-Free Rotation Estimation in Manhattan World
    Liu, Jiacheng
    Meng, Ziyang
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (04) : 6512 - 6519
  • [7] GVINS: Tightly Coupled GNSS-Visual-Inertial Fusion for Smooth and Consistent State Estimation
    Cao, Shaozu
    Lu, Xiuyuan
    Shen, Shaojie
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2022, 38 (04) : 2004 - 2021
  • [8] Drift-Free Humanoid State Estimation fusing Kinematic, Inertial and LIDAR sensing
    Fallon, Maurice F.
    Antone, Matthew
    Roy, Nicholas
    Teller, Seth
    [J]. 2014 14TH IEEE-RAS INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2014, : 112 - 119
  • [9] Scale Drift-free Visual-inertial Odometry for Ground Vehicles in Highway Scenarios
    Wen, Tuopu
    Jiang, Kun
    Miao, Jinyu
    Wijaya, Benny
    Yang, Mengmeng
    Yang, Diange
    [J]. 2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 4629 - 4636
  • [10] LiDAR-Inertial-GNSS Fusion Positioning System in Urban Environment: Local Accurate Registration and Global Drift-Free
    He, Xuan
    Pan, Shuguo
    Gao, Wang
    Lu, Xinyu
    [J]. REMOTE SENSING, 2022, 14 (09)