iHearken: Chewing sound signal analysis based food intake recognition system using Bi-LSTM softmax network

Cited by: 9
Authors
Khan, Mohammad Imroze [1 ]
Acharya, Bibhudendra [1 ]
Chaurasiya, Rahul Kumar [2 ]
Affiliations
[1] Natl Inst Technol Raipur, Dept Elect & Commun Engn, GE Rd, Raipur 492010, India
[2] Maulana Azad Natl Inst Technol, Dept Elect & Commun Engn, Near Mata Mandir,Link Rd, Bhopal 462003, Madhya Pradesh, India
Keywords
Chew events; Chewing sound analysis; Deep learning; Food intake classification; Performance evaluation; Wearable sensors; NEURAL-NETWORKS; AUDIO; MOBILE;
DOI
10.1016/j.cmpb.2022.106843
Chinese Library Classification: TP39 [Computer Applications]
Subject Classification: 081203; 0835
Abstract
Background and Objective: Food ingestion is an integral part of health and wellness. Continuous monitoring of different food types, and of the amounts being consumed, helps prevent gastrointestinal diseases and weight-related issues. Food intake recognition (FIR) systems therefore have a significant impact on everyday life. The purpose of this study is to develop an automatic FIR approach using contemporary wearable hardware and machine learning techniques, assisting clinicians and concerned individuals in managing health issues associated with food intake. Methods: In this work, we present iHearken, a novel headphone-like wearable sensor-based system that monitors eating activities and recognizes food intake type in free-living conditions. State-of-the-art hardware is designed for data acquisition, with 16 subjects recruited and 20 different food items used for data collection. Chewing sound signals are then analyzed for FIR using bottleneck features. The proposed model is divided into 4 distinct phases: data acquisition, event detection using a pre-trained model, bottleneck feature extraction, and classification based on a bidirectional long short-term memory (Bi-LSTM) softmax model. The Bi-LSTM network with a softmax function calculates an identification score for each chewing signal, which is further used to classify the chewing signal into liquid/solid food classes. Results: The performance of the proposed model, evaluated in (%) for accuracy, precision, recall, and F-score, is 97.422, 96.808, 98.0, and 97.512, respectively, with a root mean square error (RMSE) of 0.160 and a mean absolute percentage error (MAPE) of 1.030 for the number of correctly recognized food types. We also evaluated the model's performance for classifying food into solid and liquid classes and achieved an accuracy of 96.66%, precision of 96.40%, recall of 95.23%, F-score of 95.79%, RMSE of 0.182, and MAPE of 2.22.
We also demonstrated that the food recognition accuracy of the proposed model differs statistically from that of competing models. Conclusion: A complexity study of the proposed model was subsequently conducted to assess the effectiveness of the proposed wearable device and methodology. The medical importance of this investigation is that reliable monitoring of food intake classification methods, via food chew event detection in an ambulatory environment, has been demonstrated. (c) 2022 Elsevier B.V. All rights reserved.
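The metrics reported in the abstract (accuracy, precision, recall, F-score, RMSE, MAPE) and the softmax scoring step can be reproduced from model outputs with a few lines of code. The following is a minimal sketch in plain Python — not the authors' implementation — which, as an assumption for illustration, treats "solid" as the positive class in the binary solid/liquid task:

```python
import math

def classification_metrics(y_true, y_pred, positive="solid"):
    """Accuracy, precision, recall and F-score for a binary task."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return accuracy, precision, recall, f_score

def rmse(actual, predicted):
    """Root mean square error over paired numeric sequences."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error; actual values must be non-zero."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def softmax(scores):
    """Convert raw classifier output scores into class probabilities,
    as the Bi-LSTM softmax layer does for each chewing signal."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `classification_metrics(["solid", "solid", "liquid", "liquid"], ["solid", "liquid", "liquid", "liquid"])` yields an accuracy of 0.75, precision of 1.0, and recall of 0.5 for the "solid" class.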
Pages: 14
Related Papers
50 records
  • [1] Emotion Recognition Based on Dynamic Energy Features Using a Bi-LSTM Network
    Zhu, Meili
    Wang, Qingqing
    Luo, Jianglin
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2022, 15
  • [2] Research on Seismic Phase Recognition Method Based on Bi-LSTM Network
    Wang, Li
    Cai, Jianxian
    Duan, Li
    Guo, Lili
    Shi, Xingxing
    Cai, Huanyu
    APPLIED SCIENCES-BASEL, 2024, 14 (16):
  • [3] Sentiment Analysis based on Bi-LSTM using Tone
    Li, Huakang
    Wang, Lei
    Wang, Yongchao
    Sun, Guozi
    2019 15TH INTERNATIONAL CONFERENCE ON SEMANTICS, KNOWLEDGE AND GRIDS (SKG 2019), 2019, : 30 - 35
  • [4] An adaptive search optimizer-based deep Bi-LSTM for emotion recognition using electroencephalogram signal
    Khubani, Jitendra
    Kulkarni, Shirish
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 93
  • [5] Recognition of motion state by smartphone sensors using Bi-LSTM neural network
    Zhao, Hong
    Hou, Chunning
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2018, 35 (02) : 1733 - 1742
  • [6] Recognition of Transportation State by Smartphone Sensors Using Deep Bi-LSTM Neural Network
    Zhao, Hong
    Hou, Chunning
    Alrobassy, Hala
    Zeng, Xiangyan
    JOURNAL OF COMPUTER NETWORKS AND COMMUNICATIONS, 2019, 2019
  • [7] Convolutional Bi-LSTM Based Human Gait Recognition Using Video Sequences
    Amin, Javaria
    Anjum, Muhammad Almas
    Sharif, Muhammad
    Kadry, Seifedine
    Nam, Yunyoung
    Wang, ShuiHua
    CMC-COMPUTERS MATERIALS & CONTINUA, 2021, 68 (02): : 2693 - 2709
  • [8] Effective infant cry signal analysis and reasoning using IARO based leaky Bi-LSTM model
    Mala, B. M.
    Darandale, Smita Sandeep
    COMPUTER SPEECH AND LANGUAGE, 2024, 86
  • [9] Bi-LSTM Based Deep Learning Algorithm for NOMA-MIMO Signal Detection System
    Kumar, Arun
    Gaur, Nishant
    Nanthaamornphong, Aziz
    NATIONAL ACADEMY SCIENCE LETTERS-INDIA, 2024,
  • [10] Wearable Sensor-Based Human Activity Recognition System Employing Bi-LSTM Algorithm
    Tehrani, Amir
    Yadollahzadeh-Tabari, Meisam
    Zehtab-Salmasi, Aidin
    Enayatifar, Rasul
    COMPUTER JOURNAL, 2023, 67 (03): : 961 - 975