First-Person Indoor Navigation via Vision-Inertial Data Fusion

Cited: 0
Authors
Farnoosh, Amirreza [1 ]
Nabian, Mohsen [1 ]
Closas, Pau [1 ]
Ostadabbas, Sarah [1 ]
Affiliation
[1] Northeastern Univ, Elect & Comp Engn Dept, Boston, MA 02115 USA
Keywords
Computer vision; Data fusion; Expectation maximization algorithm; Indoor navigation; Simultaneous localization and mapping (SLAM)
DOI
Not available
Chinese Library Classification
TM [Electrical technology]; TN [Electronic technology, communication technology]
Subject classification codes
0808; 0809
Abstract
In this paper, we aim to enhance the first-person indoor navigation and scene understanding experience by fusing inertial data collected from a smartphone carried by the user with the vision information obtained through the phone's camera. We employed the concept of vanishing directions, together with the orthogonality constraints of man-made environments, in an expectation-maximization framework to estimate the person's orientation with respect to the known indoor coordinates from video frames. This framework allows us to include prior information about the camera rotation axis for better estimation, to select candidate edge-lines for estimating the hallways' depth and width from monocular video frames, and to build a 3D model of the scene. Our proposed algorithm concurrently combines the vision-based estimated orientation with the inertial data using a Kalman filter in order to refine the estimates and remove the substantial measurement drift of the inertial sensors. We evaluated the performance of our vision-inertial data fusion method on an IMU-augmented video recorded in a rotary hallway in which a participant completed a full lap. We demonstrated that this fusion provides virtually drift-free instantaneous information about the person's relative orientation. We were able to estimate the hallways' depth and width, and to generate a closed-path map of the rotary hallway over a roughly 60-meter lap.
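The vision-inertial fusion described above can be illustrated with a minimal sketch: a 1-D Kalman filter over the heading angle, where gyroscope rates drive the prediction step (accurate short-term but drift-prone) and vision-based orientation estimates drive the correction step (noisy but drift-free). The function name, the yaw-only state, and the noise variances are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_orientation(gyro_rates, vision_yaw, dt, q=1e-4, r=1e-2):
    """Fuse gyro-integrated heading with vision-based yaw measurements.

    gyro_rates : angular rates in rad/s (one per frame), may be biased
    vision_yaw : vision-based yaw estimates in rad (one per frame)
    dt         : time step between frames in seconds
    q, r       : process / measurement noise variances (illustrative values)
    """
    theta, p = vision_yaw[0], 1.0       # initialize from the first vision fix
    fused = []
    for w, z in zip(gyro_rates, vision_yaw):
        # Predict: integrate the gyro rate; uncertainty grows by q
        theta += w * dt
        p += q
        # Correct: blend in the vision measurement z via the Kalman gain
        k = p / (p + r)
        theta += k * (z - theta)
        p *= (1.0 - k)
        fused.append(theta)
    return np.array(fused)
```

With a biased gyro, pure integration accumulates error linearly in time, while the fused estimate stays bounded near the (noisy) vision measurements — the "virtually drift-free" behavior the abstract reports.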
Pages: 1213-1222
Page count: 10