Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

Cited by: 50
Authors
Liu, Mengyun [1]
Chen, Ruizhi [1,2]
Li, Deren [1,2]
Chen, Yujin [3]
Guo, Guangyi [1]
Cao, Zhipeng [1]
Pan, Yuanjin [1]
Affiliations
[1] Wuhan University, State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan 430079, Hubei, China
[2] Wuhan University, Collaborative Innovation Center of Geospatial Technology, Wuhan 430079, Hubei, China
[3] Wuhan University, School of Geodesy and Geomatics, Wuhan 430079, Hubei, China
Keywords
indoor scene recognition; deep learning; indoor localization; WiFi; magnetic field strength; particle filter; smartphone
DOI
10.3390/s17122847
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
After decades of research, there is still no indoor localization solution comparable to GNSS (Global Navigation Satellite System) for outdoor environments, largely because of the complex spatial topology and RF transmission conditions found indoors. To address these problems, this paper proposes an indoor localization method constrained by scene recognition, inspired by the visual cognition ability of the human brain and by progress in high-level image understanding in the computer vision field. The method fuses multiple sensors available on a commercial smartphone, including the camera, WiFi and inertial sensors. Unlike previous work, the smartphone camera is used to "see" which scene the user is in, and a particle filter constrained by this scene information then determines the final location. For indoor scene recognition, we take advantage of deep learning, which has proven highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the particle weights. As in other fingerprinting localization methods, the proposed system has two stages: offline training and online localization. In the offline stage, an indoor scene model is trained with Caffe (one of the most popular open-source deep learning frameworks) and a fingerprint database is constructed from user trajectories in different scenes. To reduce the amount of training data required for deep learning, the scene model is fine-tuned from a pre-trained network rather than trained from scratch. In the online stage, the smartphone camera recognizes the initial scene, and the particle filter then fuses the sensor data to determine the final location. To demonstrate the effectiveness of the proposed method, an Android client and a web server were implemented: the Android client collects data and locates the user, while the web server trains the indoor scene model and communicates with the client. Comparison experiments show that the proposed solution achieves a positioning accuracy of 1.32 m at the 95th percentile, improving both accuracy and robustness over approaches without the scene constraint, including commercial products such as IndoorAtlas.
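The fusion step described in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the authors' implementation; it only shows how a scene-constrained particle filter might combine pedestrian dead reckoning with WiFi and magnetic field fingerprints. The names `wifi_map`, `mag_map`, `in_scene`, `scene_bounds` and the noise parameters are hypothetical placeholders, and the WiFi observation is simplified to a single scalar per location instead of a per-AP RSS vector.

```python
# Illustrative sketch (assumed, not from the paper) of a scene-constrained
# particle filter fusing inertial, WiFi and magnetic field measurements.
import numpy as np

N_PARTICLES = 1000

def init_particles(scene_bounds):
    """Spawn particles uniformly inside the region of the recognized scene."""
    (xmin, ymin), (xmax, ymax) = scene_bounds
    xs = np.random.uniform(xmin, xmax, N_PARTICLES)
    ys = np.random.uniform(ymin, ymax, N_PARTICLES)
    weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
    return np.stack([xs, ys], axis=1), weights

def predict(particles, step_len, heading, noise=0.3):
    """Propagate particles with a dead-reckoning step from inertial sensors."""
    dx = step_len * np.cos(heading) + np.random.normal(0, noise, len(particles))
    dy = step_len * np.sin(heading) + np.random.normal(0, noise, len(particles))
    return particles + np.stack([dx, dy], axis=1)

def update_weights(particles, weights, wifi_obs, mag_obs,
                   wifi_map, mag_map, in_scene,
                   sigma_wifi=6.0, sigma_mag=5.0):
    """Weight particles by WiFi and magnetic fingerprints; the in_scene mask
    zeroes out particles outside the recognized scene (the scene constraint)."""
    wifi_pred = wifi_map(particles)   # fingerprint value predicted per particle
    mag_pred = mag_map(particles)
    lw = np.exp(-0.5 * ((wifi_pred - wifi_obs) / sigma_wifi) ** 2)
    lm = np.exp(-0.5 * ((mag_pred - mag_obs) / sigma_mag) ** 2)
    weights = weights * lw * lm * in_scene(particles)
    total = weights.sum()
    return weights / total if total > 0 else np.full(len(weights), 1.0 / len(weights))

def resample(particles, weights):
    """Systematic resampling when the effective sample size becomes too small."""
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        idx = np.random.choice(len(weights), len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

In the paper's terms, `init_particles` and the `in_scene` mask play the role of the camera-derived scene constraint, while `update_weights` fuses the WiFi and magnetic field fingerprints as described in the abstract.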
Pages: 20