Vision-Based Methods for Food and Fluid Intake Monitoring: A Literature Review

Cited: 5
Authors
Chen, Xin [1 ]
Kamavuako, Ernest N. [1 ,2 ]
Affiliations
[1] King's College London, Department of Engineering, London WC2R 2LS, England
[2] University of Kindu, Faculty of Medicine, Site Lwama II, Kindu, Maniema, Democratic Republic of the Congo
Keywords
intake monitoring; drinking action detection; dietary monitoring; vision-based methods; dietary assessment; wearable camera; recognition; dehydration; device; images; hydration; accuracy; capture; system
DOI
10.3390/s23136137
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
Monitoring of food and fluid intake is essential for reducing the risk of dehydration, malnutrition, and obesity. Existing research has focused predominantly on dietary monitoring, whereas fluid intake monitoring is often neglected. Intake monitoring can be based on wearable sensors, environmental sensors, smart containers, or the collaborative use of multiple sensors. Vision-based intake monitoring methods have been widely exploited as visual devices and computer vision algorithms have advanced. These methods provide non-intrusive monitoring solutions and have shown promising performance in food/beverage recognition and segmentation, intake action detection and classification, and food volume/fluid amount estimation. However, occlusion, privacy, computational efficiency, and practicality pose significant challenges. This paper reviews the existing work (253 articles) on vision-based food and fluid intake monitoring to assess the size and scope of the available literature and to identify current challenges and research gaps. Tables and graphs depict patterns of device selection, viewing angle, tasks, algorithms, experimental settings, and performance across the existing monitoring systems.
Pages: 31