Deep Neural Network-Based Visual Feedback System for Nasopharyngeal Swab Sampling

Cited by: 1
Authors
Jung, Suhun [1 ]
Moon, Yonghwan [2 ,3 ]
Kim, Jeongryul [1 ]
Kim, Keri [3 ,4 ]
Affiliations
[1] Korea Inst Sci & Technol, Artificial Intelligence & Robot Inst, 5,Hwarang Ro 14 Gil, Seoul 02792, South Korea
[2] Korea Univ, Sch Mech Engn, 145 Anam Ro, Seoul 02841, South Korea
[3] Korea Inst Sci & Technol, Augmented Safety Syst Intelligence Sensing & Track, 5 Hwarang Ro 14 Gil, Seoul 02792, South Korea
[4] Univ Sci & Technol, Div Biomed Sci & Technol, 217 Gajeong Ro, Daejeon 34113, South Korea
Funding
National Research Foundation of Singapore;
Keywords
nasopharyngeal swab testing; load cell; fiducial marker; augmented reality; 1-dimensional convolution neural network; ROBOT;
DOI
10.3390/s23208443
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline codes
070302 ; 081704 ;
Abstract
During the coronavirus disease 2019 (COVID-19) pandemic, robot-based swab-sampling systems were developed to reduce the burden on healthcare workers and their risk of infection. Teleoperated sampling systems are particularly valuable because they fundamentally prevent contact with suspected COVID-19 patients. However, the limited field of view of the installed cameras prevents the operator from recognizing the position and deformation of the swab inserted into the nasal cavity, which greatly degrades operating performance. To overcome this limitation, this study proposes a visual feedback system that monitors and reconstructs the shape of a nasopharyngeal (NP) swab using augmented reality (AR). The sampling device contains three load cells and measures the interaction force applied to the swab, while the shape information is captured using a motion-tracking program. These datasets were used to train a one-dimensional convolutional neural network (1DCNN) model, which estimates the coordinates of three feature points of the swab in the 2D X-Y plane. Based on these points, the virtual shape of the swab, reflecting the curvature of the actual one, is reconstructed and overlaid on the visual display. The accuracy of the 1DCNN model was evaluated on a 2D plane under ten different bending conditions. The results demonstrate that the x-values of the predicted points show errors of under 0.590 mm for P0, while those of P1 and P2 show a biased error of about -1.5 mm with constant standard deviations. For the y-values, the errors of all feature points under positive bending are uniformly under 1 mm of difference, whereas the error under negative bending increases with the amount of deformation. Finally, experiments using a collaborative robot validate the system's ability to visualize the actual swab's position and deformation on camera images of 2D and 3D phantoms.
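The abstract's core mapping, from three load-cell force signals to the planar coordinates of three swab feature points P0, P1, and P2, can be sketched as a minimal 1-D convolutional regressor. Everything below (the window length, kernel width, filter count, and random untrained weights) is an illustrative assumption, not the authors' actual architecture or trained model:

```python
import numpy as np

# Hypothetical sketch of the 1DCNN idea: one 1-D convolution over a
# window of three load-cell force signals, followed by a linear head
# that regresses the (x, y) coordinates of feature points P0, P1, P2.
# All shapes and parameter values here are illustrative assumptions.

rng = np.random.default_rng(0)

N_CHANNELS = 3      # three load cells
WINDOW = 32         # force samples per channel in one input window
KERNEL = 5          # 1-D convolution kernel width
N_FILTERS = 8       # convolution filters
N_OUTPUTS = 6       # (x, y) for each of P0, P1, P2

def conv1d(x, w):
    """Valid-mode 1-D convolution: x (C, T), w (F, C, K) -> (F, T-K+1)."""
    C, T = x.shape
    F, _, K = w.shape
    out = np.zeros((F, T - K + 1))
    for f in range(F):
        for t in range(T - K + 1):
            out[f, t] = np.sum(x[:, t:t + K] * w[f])
    return out

# Randomly initialised parameters; training on the force/shape dataset
# described in the abstract is out of scope for this sketch.
w_conv = rng.normal(0.0, 0.1, (N_FILTERS, N_CHANNELS, KERNEL))
w_fc = rng.normal(0.0, 0.1, (N_OUTPUTS, N_FILTERS * (WINDOW - KERNEL + 1)))

def predict_points(forces):
    """forces: (3, WINDOW) load-cell window -> (3, 2) array of points."""
    h = np.maximum(conv1d(forces, w_conv), 0.0)  # ReLU activation
    coords = w_fc @ h.ravel()                    # linear regression head
    return coords.reshape(3, 2)                  # rows: P0, P1, P2

points = predict_points(rng.normal(size=(N_CHANNELS, WINDOW)))
print(points.shape)  # -> (3, 2)
```

In the paper's setup, the reported per-point errors (e.g., the roughly -1.5 mm bias on P1 and P2) would be measured between such predicted coordinates and the motion-tracked ground truth before the virtual swab shape is rendered in AR.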
Pages: 17
Related papers
50 records in total
  • [31] DeepLoc: Deep Neural Network-based Telco Localization
    Zhang, Yige
    Xiao, Yu
    Zhao, Kai
    Rao, Weixiong
    PROCEEDINGS OF THE 16TH EAI INTERNATIONAL CONFERENCE ON MOBILE AND UBIQUITOUS SYSTEMS: COMPUTING, NETWORKING AND SERVICES (MOBIQUITOUS'19), 2019: 258-267
  • [32] DeepSL: Deep Neural Network-based Similarity Learning
    Tourad M.C.
    Abdelmounaim A.
    Dhleima M.
    Telmoud C.A.A.
    Lachgar M.
    International Journal of Advanced Computer Science and Applications, 2024, 15 (03): 1394-1401
  • [33] A survey on deep neural network-based image captioning
    Liu, Xiaoxiao
    Xu, Qingyang
    Wang, Ning
    VISUAL COMPUTER, 2019, 35 (03): 445-470
  • [34] Deep Neural Network-Based Cooperative Visual Tracking Through Multiple Micro Aerial Vehicles
    Price, Eric
    Lawless, Guilherme
    Ludwig, Roman
    Martinovic, Igor
    Buelthoff, Heinrich H.
    Black, Michael J.
    Ahmad, Aamir
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, 3 (04): 3193-3200
  • [36] Analytics of Deep Neural Network-Based Background Subtraction
    Minematsu, Tsubasa
    Shimada, Atsushi
    Uchiyama, Hideaki
    Taniguchi, Rin-ichiro
    JOURNAL OF IMAGING, 2018, 4 (06)
  • [37] Deep neural network-based relation extraction: an overview
    Hailin Wang
    Ke Qin
    Rufai Yusuf Zakari
    Guoming Lu
    Jin Yin
    Neural Computing and Applications, 2022, 34: 4781-4801
  • [39] Deep neural network-based underwater OFDM receiver
    Zhang, Jing
    Cao, Yu
    Han, Guangyao
    Fu, Xiaomei
    IET COMMUNICATIONS, 2019, 13 (13): 1998-2002
  • [40] Analytic Deep Neural Network-Based Robot Control
    Nguyen, Huu-Thiet
    Cheah, Chien Chern
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2022, 27 (04): 2176-2184