Synergistic Integration of Skeletal Kinematic Features for Vision-Based Fall Detection

Cited by: 8
Authors
Inturi, Anitha Rani [1 ]
Manikandan, Vazhora Malayil [1 ]
Kumar, Mahamkali Naveen [1 ]
Wang, Shuihua [2 ]
Zhang, Yudong [2 ]
Affiliations
[1] SRM Univ, Dept Comp Sci & Engn, Mangalagiri 522240, AP, India
[2] Univ Leicester, Sch Comp & Math Sci, Leicester LE1 7RH, England
Funding
UK Biotechnology and Biological Sciences Research Council (BBSRC);
Keywords
fall detection; video analysis; vision-based human activity recognition; fall prevention; ambient intelligence; assistive technology; signal processing; real-time monitoring; risk assessment;
DOI
10.3390/s23146283
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704
Abstract
According to the World Health Organisation, falling is a major health problem with potentially fatal consequences. Each year, thousands of people die as a result of falls, with seniors accounting for 80% of these fatalities. Automatic fall detection may reduce the severity of the consequences. Our study focuses on developing a vision-based fall detection system, and our work proposes a new feature descriptor that leads to a new fall detection framework. The proposed method analyses the subject's body geometry and identifies patterns that help to distinguish falls from non-fall activities. An AlphaPose network is employed to identify 17 keypoints on the human skeleton. Thirteen of these keypoints are used in our study, and two additional keypoints are computed. These 15 keypoints are divided into five segments, each consisting of a group of three non-collinear points; the five segments represent the left hand, right hand, left leg, right leg and craniocaudal section. A novel feature descriptor is generated by extracting, for every segment, the distance across the segment, the angle within the segment and the segment's angle of inclination. As a result, three features are extracted from each segment, giving 15 features per frame that preserve spatial information. To capture temporal dynamics, the extracted spatial features are arranged in temporal sequence, so the feature descriptor in the proposed approach preserves the spatio-temporal dynamics. Thus, a feature descriptor of size [m x 15] is formed, where m is the number of frames. To recognize fall patterns, machine learning approaches such as decision trees, random forests and gradient boosting are applied to the feature descriptor. Our system was evaluated on the benchmark UP-Fall dataset and shows very good performance compared with state-of-the-art approaches.
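To make the geometric feature construction described in the abstract concrete, the following minimal sketch (not the authors' released code) computes, for each three-keypoint segment, the three per-segment features mentioned above: the segment's end-to-end distance, the internal angle at its middle joint, and its angle of inclination with respect to the vertical. It then stacks these over m frames into an [m x 15] descriptor and fits a scikit-learn RandomForestClassifier as a stand-in for the ensemble models named in the abstract. The keypoint indices in SEGMENTS, the exact angle conventions, and the helper names segment_features, frame_descriptor and video_descriptor are illustrative assumptions, not details from the paper.

```python
# Minimal illustrative sketch (not the authors' code): per-segment geometric
# features from 2-D skeleton keypoints, stacked into an [m x 15] descriptor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical grouping: five segments, each a triple (proximal, middle, distal)
# of indices into a 15-keypoint layout. The real index assignment is assumed.
SEGMENTS = {
    "left_hand":    (0, 1, 2),
    "right_hand":   (3, 4, 5),
    "left_leg":     (6, 7, 8),
    "right_leg":    (9, 10, 11),
    "craniocaudal": (12, 13, 14),
}

def segment_features(p_a, p_b, p_c):
    """Distance, internal angle at the middle joint, and inclination vs. vertical."""
    v1, v2 = p_a - p_b, p_c - p_b
    distance = float(np.linalg.norm(p_a - p_c))                 # end-to-end length
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    internal = float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
    axis = p_c - p_a                                            # segment axis
    inclination = float(np.degrees(np.arctan2(abs(axis[0]), abs(axis[1]) + 1e-8)))
    return distance, internal, inclination

def frame_descriptor(keypoints):
    """keypoints: (15, 2) array of (x, y) joints -> 15-element feature vector."""
    feats = []
    for a, b, c in SEGMENTS.values():
        feats.extend(segment_features(keypoints[a], keypoints[b], keypoints[c]))
    return np.asarray(feats)                                     # shape (15,)

def video_descriptor(sequence):
    """sequence: (m, 15, 2) keypoints over m frames -> (m, 15) descriptor."""
    return np.stack([frame_descriptor(kp) for kp in sequence])

# Toy usage with random "skeletons"; real input would come from AlphaPose.
rng = np.random.default_rng(0)
X = np.vstack([video_descriptor(rng.random((30, 15, 2))) for _ in range(4)])
y = rng.integers(0, 2, size=X.shape[0])                          # fake fall / non-fall labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```

In this sketch the descriptor is classified frame by frame; how the per-frame predictions are aggregated into a clip-level fall decision (e.g., windowing or majority voting) is not specified by the abstract and would need to follow the paper itself.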
Pages: 20