Deep Neural Network-Based Visual Feedback System for Nasopharyngeal Swab Sampling

Cited by: 1
Authors
Jung, Suhun [1 ]
Moon, Yonghwan [2 ,3 ]
Kim, Jeongryul [1 ]
Kim, Keri [3 ,4 ]
Affiliations
[1] Korea Inst Sci & Technol, Artificial Intelligence & Robot Inst, 5,Hwarang Ro 14 Gil, Seoul 02792, South Korea
[2] Korea Univ, Sch Mech Engn, 145 Anam Ro, Seoul 02841, South Korea
[3] Korea Inst Sci & Technol, Augmented Safety Syst Intelligence Sensing & Track, 5 Hwarang Ro 14 Gil, Seoul 02792, South Korea
[4] Univ Sci & Technol, Div Biomed Sci & Technol, 217 Gajeong Ro, Daejeon 34113, South Korea
Funding
National Research Foundation of Singapore;
Keywords
nasopharyngeal swab testing; load cell; fiducial marker; augmented reality; 1-dimensional convolution neural network; ROBOT;
DOI
10.3390/s23208443
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
During the coronavirus disease 2019 (COVID-19) pandemic, robot-based systems for swab sampling were developed to reduce the burden on healthcare workers and their risk of infection. Teleoperated sampling systems are especially valued because they fundamentally prevent contact with suspected COVID-19 patients. However, the limited field of view of the installed cameras prevents the operator from recognizing the position and deformation of the swab inserted into the nasal cavity, which greatly degrades operating performance. To overcome this limitation, this study proposes a visual feedback system that monitors and reconstructs the shape of a nasopharyngeal (NP) swab using augmented reality (AR). The sampling device contains three load cells that measure the interaction force applied to the swab, while the shape information is captured using a motion-tracking program. These datasets were used to train a one-dimensional convolutional neural network (1DCNN) model that estimates the coordinates of three feature points of the swab in the 2D X-Y plane. Based on these points, a virtual shape of the swab, reflecting the curvature of the actual one, is reconstructed and overlaid on the visual display. The accuracy of the 1DCNN model was evaluated on a 2D plane under ten different bending conditions. The results show that the x-values of the predicted points exhibit errors under 0.590 mm for P0, while those of P1 and P2 show a biased error of about -1.5 mm with constant standard deviations. For the y-values, the error of all feature points under positive bending is uniformly under 1 mm, whereas the error under negative bending increases with the amount of deformation. Finally, experiments using a collaborative robot validate the system's ability to visualize the actual swab's position and deformation on camera images of 2D and 3D phantoms.
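The abstract describes a 1DCNN that regresses the (x, y) coordinates of three swab feature points (P0, P1, P2) from three load-cell force signals. The following is a minimal NumPy sketch of that pipeline shape, not the authors' model: the channel count, window length, layer sizes, and random weights are all illustrative assumptions standing in for the trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1D convolution with ReLU: x (C_in, T), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.empty((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            y[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return np.maximum(y, 0.0)

# Stand-in input: 3 load-cell channels over a 32-sample window.
forces = rng.standard_normal((3, 32))

# Randomly initialized weights stand in for the trained 1DCNN.
w1 = rng.standard_normal((8, 3, 5)) * 0.1   # 8 filters, kernel size 5
b1 = np.zeros(8)
feat = conv1d(forces, w1, b1)               # feature map, shape (8, 28)

# Linear regression head: 6 outputs = 3 feature points x (x, y).
w_head = rng.standard_normal((6, feat.size)) * 0.01
coords = (w_head @ feat.ravel()).reshape(3, 2)
print(coords.shape)  # (3, 2): rows are P0, P1, P2
```

In the paper's setup, the predicted points would then parameterize the virtual swab curve that is overlaid on the camera image via AR.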
Pages: 17
Related Papers
50 records total
  • [21] Deep neural network-based fusion model for emotion recognition using visual data
    Luu-Ngoc Do
    Hyung-Jeong Yang
    Hai-Duong Nguyen
    Soo-Hyung Kim
    Guee-Sang Lee
    In-Seop Na
    The Journal of Supercomputing, 2021, 77 : 10773 - 10790
  • [22] A novel deep neural network-based technique for network embedding
    Benbatata, Sabrina
    Saoud, Bilal
    Shayea, Ibraheem
    Alsharabi, Naif
    Alhammadi, Abdulraqeb
    Alferaidi, Ali
    Jadi, Amr
    Daradkeh, Yousef Ibrahim
    PEERJ COMPUTER SCIENCE, 2024, 10 : 1 - 29
  • [23] Adaptive Control of Camera Modality with Deep Neural Network-Based Feedback for Efficient Object Tracking
    Saha, Priyabrata
    Mudassar, Burhan A.
    Mukhopadhyay, Saibal
    2018 15TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE (AVSS), 2018, : 313 - 318
  • [24] PerAnSel: A Novel Deep Neural Network-Based System for Persian Question Answering
    Mozafari, Jamshid
    Kazemi, Arefeh
    Moradi, Parham
    Nematbakhsh, Mohammad Ali
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [25] Deep Neural Network-based Handheld Diagnosis System for Autism Spectrum Disorder
    Khullar, Vikas
    Singh, Harjit Pal
    Bala, Manju
    NEUROLOGY INDIA, 2021, 69 (01) : 66 - 74
  • [26] Visual Servo Control of COVID-19 Nasopharyngeal Swab Sampling Robot
    Hwang, Guebin
    Lee, Jongwon
    Yang, Sungwook
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 1855 - 1861
  • [27] Neural network-based adaptive passive output feedback control for MIMO uncertain system
    Zhu, Yonghong
    Feng, Qing
    Wang, Jianhong
Telkomnika, 2012, 10 (06): 1263 - 1272
  • [28] A Deep Neural Network based Detection System for the Visual Diagnosis of the Blackberry
    Rubio, Alejandro
    Avendano, Carlos
    Martinez, Fredy
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (08) : 736 - 741
  • [29] Mouth Cavity Visual Analysis Based on Deep Learning for Oropharyngeal Swab Robot Sampling
    Gao, Qing
    Ju, Zhaojie
    Chen, Yongquan
    Zhang, Tianwei
    Leng, Yuquan
    IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2023, 53 (06) : 1083 - 1092
  • [30] Deep neural network-based relation extraction: an overview
    Wang, Hailin
    Qin, Ke
    Zakari, Rufai Yusuf
    Lu, Guoming
    Yin, Jin
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (06): : 4781 - 4801