Bolus insulin calculation without meal information. A reinforcement learning approach

Cited by: 7
Authors
Ahmad, Sayyar [1]
Beneyto, Aleix [1]
Contreras, Ivan [1]
Vehi, Josep [1,2]
Affiliations
[1] Univ Girona, Dept Elect Elect & Automatic Engn, Girona 17004, Spain
[2] Ctr Invest Biomed Red Diabet & Enfermedades Metab, Madrid 28001, Spain
Keywords
Reinforcement learning; Type 1 diabetes; Insulin bolus calculator; Artificial pancreas; RUN-TO-RUN CONTROL; ARTIFICIAL PANCREAS; GLUCOSE CONTROL; TYPE-1; DELIVERY;
DOI
10.1016/j.artmed.2022.102436
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In continuous subcutaneous insulin infusion and multiple daily injections, insulin boluses are usually calculated from patient-specific parameters, such as the carbohydrates-to-insulin ratio (CR), the insulin sensitivity-based correction factor (CF), and an estimate of the carbohydrates (CHO) to be ingested. This study aimed to calculate insulin boluses without CR, CF, and CHO content, thereby eliminating the errors caused by misestimating CHO and alleviating the management burden on the patient. A Q-learning-based reinforcement learning (RL) algorithm was developed to optimise bolus insulin doses for in-silico type 1 diabetic patients. A realistic virtual cohort of 68 patients with type 1 diabetes, previously developed by our research group, was considered for the in-silico trials. The results were compared with those of the standard bolus calculator (SBC), with and without CHO misestimation, using open-loop basal insulin therapy. The percentage of the overall duration spent in the target range of 70-180 mg/dL was 73.4% and 72.37%, below 70 mg/dL was 1.96% and 0.70%, and above 180 mg/dL was 23.40% and 24.63%, respectively, for RL and SBC without CHO misestimation. The results revealed that RL outperformed SBC in the presence of CHO misestimation, and despite not knowing the CHO content of meals, the performance of RL was similar to that of SBC under perfect conditions. This algorithm can be incorporated into artificial pancreas and automatic insulin delivery systems in the future.
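The abstract describes a Q-learning agent that selects bolus doses without carbohydrate information. The sketch below illustrates the general technique only: a tabular Q-learning update over discretised glucose states and candidate bolus actions. The state bands, dose set, reward shape, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical discretisation (illustrative, not from the paper):
# glucose bands (mg/dL) define the states; candidate bolus doses (U)
# define the actions.
GLUCOSE_BANDS = [70, 180, 250]          # boundaries -> 4 discrete states
BOLUS_ACTIONS = [0.0, 1.0, 2.0, 4.0]    # candidate doses in insulin units

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

q_table = np.zeros((len(GLUCOSE_BANDS) + 1, len(BOLUS_ACTIONS)))
rng = np.random.default_rng(0)

def state_of(glucose_mg_dl):
    """Map a glucose reading to a discrete state index."""
    return int(np.searchsorted(GLUCOSE_BANDS, glucose_mg_dl))

def reward(glucose_mg_dl):
    """Reward staying in the 70-180 mg/dL target range; penalise
    hypoglycaemia more heavily than hyperglycaemia (assumed shape)."""
    if glucose_mg_dl < 70:
        return -2.0
    if glucose_mg_dl > 180:
        return -1.0
    return 1.0

def choose_action(state):
    """Epsilon-greedy selection over the candidate boluses."""
    if rng.random() < EPSILON:
        return int(rng.integers(len(BOLUS_ACTIONS)))
    return int(np.argmax(q_table[state]))

def update(state, action, glucose_next):
    """Standard Q-learning update toward the bootstrapped target."""
    s_next = state_of(glucose_next)
    target = reward(glucose_next) + GAMMA * q_table[s_next].max()
    q_table[state, action] += ALPHA * (target - q_table[state, action])
    return s_next
```

In practice the state would carry more context (e.g. glucose trend or time of day) and the update would run against a simulator of the virtual cohort; this fragment only shows the core value update that such an agent iterates.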
Pages: 9