Human Activities Recognition with RGB-Depth Camera using HMM

Cited by: 0
Authors
Dubois, Amandine [1 ]
Charpillet, Francois [1 ]
Affiliations
[1] Univ Lorraine, LORIA, UMR 7503, F-54506 Vandoeuvre Les Nancy, France
Keywords
FALL DETECTION;
DOI
Not available
CLC number (Chinese Library Classification)
R318 [Biomedical Engineering]
Discipline code
0831
Abstract
Fall detection remains an open issue for improving the safety of elderly people, and it is all the more pertinent today as more and more elderly people stay at home for longer. In this paper, we propose a method to detect falls using a system made up of RGB-Depth cameras. The major benefits of our approach are its low cost and the fact that the system is easy to distribute and install. In a few words, the method is based on the real-time detection of the center of mass of any mobile object or person, accurately determining its position in 3D space and its velocity. We demonstrate in this paper that this information is adequate and robust enough to label the activity of a person among 8 possible situations. An evaluation was conducted in a real smart environment with 26 subjects performing any of the eight activities (sitting, walking, going up, squatting, lying on a couch, falling, bending and lying down). Seven of these eight activities were correctly detected, among them falling, which was detected without false positives.
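To make the abstract's pipeline concrete, here is a minimal sketch (not the authors' code) of one common way to realize it: compute per-frame 3D center-of-mass position and velocity from the tracked person's depth point cloud, train one Gaussian HMM per activity, and label a new sequence by the highest-likelihood model. The class names, feature layout, state count, and the use of the hmmlearn library are illustrative assumptions; the paper's exact HMM topology and features are not specified here.

```python
# Sketch only: per-activity Gaussian HMMs over center-of-mass features,
# assuming the hmmlearn library (pip install hmmlearn). Not the authors' code.
import numpy as np
from hmmlearn import hmm

ACTIVITIES = ["sitting", "walking", "going_up", "squatting",
              "lying_on_couch", "falling", "bending", "lying_down"]

def com_features(point_clouds, fps=30.0):
    """point_clouds: list of (N_i, 3) arrays (person's 3D points per frame).
    Returns (T, 6) features: center of mass (x, y, z) and its velocity."""
    com = np.array([pc.mean(axis=0) for pc in point_clouds])          # (T, 3)
    vel = np.vstack([np.zeros((1, 3)), np.diff(com, axis=0) * fps])   # (T, 3)
    return np.hstack([com, vel])                                      # (T, 6)

def train_models(train_seqs, n_states=4, seed=0):
    """train_seqs: dict activity -> list of (T_i, 6) feature sequences."""
    models = {}
    for act, seqs in train_seqs.items():
        X = np.concatenate(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=seed)
        m.fit(X, lengths)        # Baum-Welch training on all sequences
        models[act] = m
    return models

def classify(models, features):
    """Return the activity whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda act: models[act].score(features))
```

In this formulation, fall detection reduces to the "falling" model winning the likelihood comparison for a given window of frames; thresholding its log-likelihood margin is one simple way to trade off misses against false positives.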
Pages: 4666 - 4669
Number of pages: 4
Related papers
50 items in total
  • [41] Facial Expression Recognition via Joint Deep Learning of RGB-Depth Map Latent Representations
    Oyedotun, Oyebade K.
    Demisse, Girum
    Shabayek, Abd El Rahman
    Aouada, Djamila
    Ottersten, Bjorn
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), 2017, : 3161 - 3168
  • [42] A Decision Forest Based Feature Selection Framework for Action Recognition from RGB-Depth Cameras
    Negin, Farhood
    Ozdemir, Firat
    Yuksel, Kamer Ali
    Akgul, Ceyhun Burak
    Ercil, Aytul
    2013 21ST SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2013,
  • [43] Human Action Recognition using Meta Learning for RGB and Depth Information
    Amiri, S. Mohsen
    Pourazad, Mahsa T.
    Nasiopoulos, Panos
    Leung, Victor C. M.
    2014 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS (ICNC), 2014, : 363 - 367
  • [44] Depth Image Rectification Based on an Effective RGB-Depth Boundary Inconsistency Model
    Cao, Hao
    Zhao, Xin
    Li, Ang
    Yang, Meng
    ELECTRONICS, 2024, 13 (16)
  • [45] RDFC-GAN: RGB-Depth Fusion CycleGAN for Indoor Depth Completion
    Wang H.
    Che Z.
    Yang Y.
    Wang M.
    Xu Z.
    Qiao X.
    Qi M.
    Feng F.
    Tang J.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46 (11) : 1 - 14
  • [46] FloW Vision: Depth Image Enhancement by Combining Stereo RGB-Depth Sensor
    Waskitho, Suryo Aji
    Alfarouq, Ardiansyah
    Sukaridhoto, Sritrusta
    Pramadihanto, Dadet
    2016 INTERNATIONAL CONFERENCE ON KNOWLEDGE CREATION AND INTELLIGENT COMPUTING (KCIC), 2016, : 182 - 187
  • [47] RGB-Depth Camera-Based Assessment of Motor Capacity: Normative Data for Six Standardized Motor Tasks
    Roehling, Hanna Marie
    Otte, Karen
    Rekers, Sophia
    Finke, Carsten
    Rust, Rebekka
    Dorsch, Eva-Maria
    Behnia, Behnoush
    Paul, Friedemann
    Schmitz-Huebsch, Tanja
    INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH, 2022, 19 (24)
  • [48] Mechanical Model Modeling Using an RGB-Depth Camera
    Lin, Shuai
    Cheng, Zhiquan
    Journal of System Simulation, 2013, 25 (09) : 2044 - 2049
  • [49] Real Time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks
    Gi, Geon
    Kim, Tae Yeon
    Park, Hye Min
    Park, Jeong Min
    Dinh, Dong-Luong
    Lee, Soo Yeol
    Kim, Tae-Seong
    7TH INTERNATIONAL CONFERENCE ON THE DEVELOPMENT OF BIOMEDICAL ENGINEERING IN VIETNAM (BME7): TRANSLATIONAL HEALTH SCIENCE AND TECHNOLOGY FOR DEVELOPING COUNTRIES, 2020, 69 : 467 - 471
  • [50] Fast and smooth 3D reconstruction using multiple RGB-Depth sensors
    Alexiadis, Dimitrios
    Zarpalas, Dimitrios
    Daras, Petros
    2014 IEEE VISUAL COMMUNICATIONS AND IMAGE PROCESSING CONFERENCE, 2014, : 173 - 176