MultiSense: Cross-labelling and Learning Human Activities Using Multimodal Sensing Data

Cited by: 1
Authors
Zhang, Lan [1 ,2 ]
Zheng, Daren [3 ]
Yuan, Mu [3 ]
Han, Feng [3 ]
Wu, Zhengtao [3 ]
Liu, Mengjing [3 ]
Li, Xiang-Yang [3 ]
Affiliations
[1] Univ Sci & Technol China, Hefei 230026, Anhui, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei 230026, Anhui, Peoples R China
[3] Univ Sci & Technol China, Hefei 230026, Anhui, Peoples R China
Funding
National Key R&D Program of China
Keywords
Multimodal sensing data; cross-labelling; cross-learning
DOI
10.1145/3578267
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
To tap into the gold mine of data generated by Internet of Things (IoT) devices, with unprecedented volume and value, there is an urgent need to efficiently and accurately label raw sensor data. To this end, we explore and leverage the hidden connections among the multimodal data collected by various sensing devices and propose to let data of different modalities complement and learn from each other. However, it is challenging to align and fuse multimodal data without knowing their perception content (and thus the correct labels). In this work, we propose MultiSense, a paradigm for automatically mining potential perception content, cross-labelling the data of each modality, and then updating the learning models for human activity recognition to achieve higher accuracy or even recognize new activities. We design innovative solutions for segmenting, aligning, and fusing multimodal data from different sensors, as well as a model-updating mechanism. We implement our framework and conduct comprehensive evaluations on a rich set of data. Our results demonstrate that MultiSense significantly improves both data usability and the power of the learning models. With nine diverse activities performed by users, our framework automatically labels multimodal sensing data generated by five different sensing mechanisms (video, smartwatch, smartphone, audio, and wireless channel) with an average accuracy of 98.5%. Furthermore, it enables models of some modalities to learn unknown activities from other modalities and greatly improves their activity recognition ability.
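The core cross-labelling idea in the abstract (transferring a confident label from one modality onto temporally aligned, unlabelled segments of another) can be illustrated with a minimal sketch. The function names, the confidence threshold, and the interval-IoU alignment rule below are illustrative assumptions, not the paper's actual algorithm.

```python
def interval_iou(a, b):
    """Temporal intersection-over-union of two (start, end) intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def cross_label(labeled_segments, unlabeled_segments,
                conf_thresh=0.9, iou_thresh=0.5):
    """Transfer confident labels from one modality (e.g. video) onto
    overlapping segments of another modality (e.g. smartwatch IMU).

    labeled_segments:   list of (start, end, label, confidence)
    unlabeled_segments: list of (start, end)
    Returns one pseudo-label per unlabeled segment, or None where no
    sufficiently confident, sufficiently overlapping source exists.
    """
    pseudo = []
    for seg in unlabeled_segments:
        best = None  # (iou, label) of the best matching source segment
        for (s, e, lab, conf) in labeled_segments:
            iou = interval_iou((s, e), seg)
            if conf >= conf_thresh and iou >= iou_thresh:
                if best is None or iou > best[0]:
                    best = (iou, lab)
        pseudo.append(best[1] if best else None)
    return pseudo
```

For example, a video segment labelled "walk" at confidence 0.95 over seconds 0 to 10 would pseudo-label an overlapping watch segment spanning seconds 1 to 9, while a low-confidence "run" prediction would be discarded rather than propagated.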
Pages: 26
Related Papers
50 records
  • [1] MultiSense: Cross Labelling and Learning Human Activities Using Multimodal Sensing Data
    Zhang, Lan
    Zheng, Daren
    Wu, Zhengtao
    Liu, Mengjing
    Yuan, Mu
    Han, Feng
    Li, Xiang-Yang
    2021 IEEE 18TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2021), 2021, : 401 - 409
  • [2] Poster: Cross Labelling and Learning Unknown Activities Among Multimodal Sensing Data
    Zhang, Lan
    Zheng, Daren
    Wu, Zhengtao
    Liu, Mengjing
    Yuan, Mu
    Han, Feng
    Li, Xiang-Yang
    MOBICOM'19: PROCEEDINGS OF THE 25TH ANNUAL INTERNATIONAL CONFERENCE ON MOBILE COMPUTING AND NETWORKING, 2019,
  • [3] A Deep Learning Approach for Human Activities Recognition From Multimodal Sensing Devices
    Ihianle, Isibor Kennedy
    Nwajana, Augustine O.
    Ebenuwa, Solomon Henry
    Otuka, Richard I.
    Owa, Kayode
    Orisatoki, Mobolaji O.
    IEEE ACCESS, 2020, 8 (08): : 179028 - 179038
  • [4] A Hybrid Deep Learning Model for Human Activity Recognition Using Multimodal Body Sensing Data
    Gumaei, Abdu
    Hassan, Mohammad Mehedi
    Alelaiwi, Abdulhameed
    Alsalman, Hussain
    IEEE ACCESS, 2019, 7 : 99152 - 99160
  • [5] Encoding human activities using multimodal wearable sensory data
    Khan, Muhammad Hassan
    Shafiq, Hadia
    Farid, Muhammad Shahid
    Grzegorzek, Marcin
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 261
  • [6] Egocentric Human Activities Recognition With Multimodal Interaction Sensing
    Hao, Yuzhe
    Kanezaki, Asako
    Sato, Ikuro
    Kawakami, Rei
    Shinoda, Koichi
    IEEE SENSORS JOURNAL, 2024, 24 (05) : 7085 - 7096
  • [7] Multimodal Learning of Sensing Data and Skeletal Data for Estimation of Worker Behavior
    Komura, K.
    Horikawa, M.
    Okamoto, A.
    Journal of Japan Industrial Management Association, 2023, 74 (02) : 31 - 39
  • [8] Exploration of Human Activities Using Sensing Data via Deep Embedded Determination
    Wang, Yiqi
    Zhu, En
    Liu, Qiang
    Chen, Yingwen
    Yin, Jianping
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS (WASA 2018), 2018, 10874 : 473 - 484
  • [9] A Novel Approach to Incomplete Multimodal Learning for Remote Sensing Data Fusion
    Chen, Yuxing
    Zhao, Maofan
    Bruzzone, Lorenzo
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 14
  • [10] Deep learning in multimodal remote sensing data fusion: A comprehensive review
    Li, Jiaxin
    Hong, Danfeng
    Gao, Lianru
    Yao, Jing
    Zheng, Ke
    Zhang, Bing
    Chanussot, Jocelyn
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2022, 112