Vision-Based Hand Rotation Recognition Technique with Ground-Truth Dataset

Authors
Kim, Hui-Jun [1 ]
Kim, Jung-Soon [2 ]
Kim, Sung-Hee [1 ]
Affiliations
[1] Dong Eui Univ, Dept Ind ICT Engn, Busan 47340, South Korea
[2] Dong Eui Univ, Dept Artificial Intelligence, Busan 47340, South Korea
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, No. 1
Keywords
cognitive screening; hand movement detection; image processing; MENTAL-STATE-EXAMINATION; COGNITIVE IMPAIRMENT; ALZHEIMERS-DISEASE; EDUCATION; DEMENTIA; PERFORMANCE; DEPRESSION; COMMUNITY; DIAGNOSIS; IMITATION;
DOI
10.3390/app14010422
CLC number
O6 [Chemistry];
Discipline code
0703;
Abstract
Existing question-and-answer screening tests are limited in that their accuracy varies with the learning effect and the examiner's competency, which can be consequential for rapid-onset cognitive diseases. A screening test based on behavioral data is therefore needed; candidate tasks can be adopted from previous studies or newly explored. In this study, we devised a continuous hand-movement task, developed a technique to measure it, and verified the technique's validity. After analyzing the factors that degrade measurement accuracy, we designed the technique to capture hand-movement data with a web camera, which lowers psychological barriers and poses no physical risk to subjects. The measured motion is a hand rotation in which the palm is repeatedly turned to face the camera. From this rotation we derive the number of rotations, the rotation angle, and the rotation time; the computation comprises hand recognition (MediaPipe), joint-data detection, motion recognition, and motion analysis. To establish the validity of the derived measurements, we constructed our own ground-truth dataset and ran a verification experiment. The dataset was produced with a two-axis robot arm that quantitatively controls the number, time, and angle of rotations; it contains 540 data points, comprising 30 right- and left-handed tasks performed three times each at distances of 57, 77, and 97 cm from the camera. For 30 FPS input, the accuracy is 99.21% for the number of rotations, 91.90% for the rotation angle, and 68.53% for the rotation time, making the rotation measurements (count and angle) more than 90% accurate overall.
This study is significant in that it contributes to technology for measuring new behavioral data in health care, and it also shares image data and label values of quantitative hand movements with the image-processing field.
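The rotation-counting step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes MediaPipe-style (x, y, z) hand landmarks (wrist = index 0, index-finger MCP = 5, pinky MCP = 17) and counts one rotation each time the palm normal flips toward the camera; both function names are hypothetical.

```python
def palm_normal_z(wrist, index_mcp, pinky_mcp):
    """Z component of the palm-plane normal from three hand landmarks.

    Landmarks are (x, y, z) tuples in camera coordinates, as a
    hand-landmark detector such as MediaPipe Hands would produce.
    The sign of the z component indicates whether the palm faces
    the camera.
    """
    # Two vectors spanning the palm plane.
    v1 = tuple(index_mcp[i] - wrist[i] for i in range(3))
    v2 = tuple(pinky_mcp[i] - wrist[i] for i in range(3))
    # Cross product: the z component alone decides the facing direction.
    return v1[0] * v2[1] - v1[1] * v2[0]


def count_rotations(frames):
    """Count palm flips (back-of-hand -> palm-to-camera) across frames.

    `frames` is a list of (wrist, index_mcp, pinky_mcp) landmark
    triples, one per video frame. A rotation is counted each time
    the palm-normal sign changes from negative to positive.
    """
    count = 0
    prev_facing = None
    for wrist, idx, pinky in frames:
        facing = palm_normal_z(wrist, idx, pinky) > 0
        if prev_facing is not None and facing and not prev_facing:
            count += 1
        prev_facing = facing
    return count
```

With 30 FPS input, the rotation time would follow from the same frame sequence by multiplying each flip's frame span by 1/30 s, which is consistent with the paper's note that time accuracy depends on the frame rate.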
Pages: 18