Bolus insulin calculation without meal information. A reinforcement learning approach

Cited: 7
Authors
Ahmad, Sayyar [1 ]
Beneyto, Aleix [1 ]
Contreras, Ivan [1 ]
Vehi, Josep [1 ,2 ]
Affiliations
[1] Univ Girona, Dept Elect Elect & Automatic Engn, Girona 17004, Spain
[2] Ctr Invest Biomed Red Diabet & Enfermedades Metab, Madrid 28001, Spain
Keywords
Reinforcement learning; Type 1 diabetes; Insulin bolus calculator; Artificial pancreas; RUN-TO-RUN CONTROL; ARTIFICIAL PANCREAS; GLUCOSE CONTROL; TYPE-1; DELIVERY;
DOI
10.1016/j.artmed.2022.102436
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In continuous subcutaneous insulin infusion and multiple daily injections, insulin boluses are usually calculated from patient-specific parameters, such as the carbohydrate-to-insulin ratio (CR), the insulin sensitivity-based correction factor (CF), and an estimate of the carbohydrates (CHO) to be ingested. This study aimed to calculate insulin boluses without CR, CF, or CHO content, thereby eliminating the errors caused by misestimating CHO and alleviating the management burden on the patient. A Q-learning-based reinforcement learning (RL) algorithm was developed to optimise bolus insulin doses for in-silico patients with type 1 diabetes. A realistic virtual cohort of 68 patients with type 1 diabetes, previously developed by our research group, was used for the in-silico trials. The results were compared with those of the standard bolus calculator (SBC), with and without CHO misestimation, under open-loop basal insulin therapy. For RL and SBC without CHO misestimation, respectively, the percentage of the overall duration spent in the target range of 70-180 mg/dL was 73.4% and 72.37%, below 70 mg/dL was 1.96% and 0.70%, and above 180 mg/dL was 23.40% and 24.63%. The results revealed that RL outperformed SBC in the presence of CHO misestimation, and despite not knowing the CHO content of meals, the performance of RL was similar to that of SBC under ideal conditions. This algorithm can be incorporated into artificial pancreas and automated insulin delivery systems in the future.
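The two dosing schemes compared in the abstract can be made concrete with a minimal sketch, not the authors' implementation: a standard bolus calculator using CR, CF, and an announced CHO estimate, contrasted with a tabular Q-learning agent that chooses a bolus from glucose-derived state alone. The reward shape, the 20 mg/dL glucose binning, the discrete bolus grid, the glucose target, and all function names below are illustrative assumptions.

import random


def sbc_bolus(cho_g, glucose, cr, cf, target=110.0):
    """Standard bolus calculator: meal bolus CHO/CR plus a correction term
    (glucose - target)/CF. Needs a CHO estimate and patient-specific CR, CF."""
    return cho_g / cr + max(glucose - target, 0.0) / cf


def select_bolus(q_table, state, bolus_options, epsilon=0.1):
    """Epsilon-greedy choice of a discrete bolus dose for the current
    glucose-derived state; no CHO announcement is used."""
    if random.random() < epsilon:
        return random.choice(bolus_options)
    return max(bolus_options, key=lambda a: q_table.get((state, a), 0.0))


def update_q(q_table, state, action, glucose_after, bolus_options,
             alpha=0.1, gamma=0.9):
    """Tabular Q-learning update driven by the postprandial glucose outcome.
    Hypothetical reward: 0 inside 70-180 mg/dL, negative outside, with
    hypoglycaemia penalised more heavily than hyperglycaemia."""
    if glucose_after < 70:
        reward = -2.0 * (70 - glucose_after)
    elif glucose_after > 180:
        reward = -1.0 * (glucose_after - 180)
    else:
        reward = 0.0

    next_state = int(glucose_after // 20)  # coarse glucose bin as next state
    best_next = max((q_table.get((next_state, a), 0.0) for a in bolus_options),
                    default=0.0)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)


# Usage with illustrative numbers: pre-meal glucose 150 mg/dL, observed
# postprandial glucose 195 mg/dL after delivering the selected dose.
q = {}
doses = [0.0, 1.0, 2.0, 4.0, 6.0, 8.0]  # hypothetical bolus grid (units)
state = int(150 // 20)
dose = select_bolus(q, state, doses)
update_q(q, state, dose, glucose_after=195.0, bolus_options=doses)

The point of the contrast is that the Q-learning agent's state and reward are built only from glucose measurements, mirroring the abstract's claim that neither CR, CF, nor the CHO content of the meal is required.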
Pages: 9
Related papers
50 records in total
  • [1] A REINFORCEMENT LEARNING BOLUS CALCULATOR WITH NO MEAL INFORMATION FOR PATIENTS WITH TYPE 1 DIABETES
    Ahmad, S.
    Beneyto, A.
    Vehi, J.
    DIABETES TECHNOLOGY & THERAPEUTICS, 2022, 24 : A27 - A28
  • [2] PERSONALIZED MEAL INSULIN BOLUS FOR TYPE 1 DIABETES USING DEEP REINFORCEMENT LEARNING
    Zhu, T.
    Li, K.
    Uduku, C.
    Herrero, P.
    Oliver, N.
    Georgiou, P.
    DIABETES TECHNOLOGY & THERAPEUTICS, 2020, 22 : A115 - A116
  • [3] REINFORCEMENT LEARNING BASED INSULIN BOLUS CALCULATOR: IN SILICO STUDY
    Kim, J.
    Lee, S.
    Kim, J. H.
    Park, S. -M.
    DIABETES TECHNOLOGY & THERAPEUTICS, 2020, 22 : A78 - A78
  • [4] A deep reinforcement learning approach for the meal delivery problem
    Jahanshahi, Hadi
    Bozanta, Aysun
    Cevik, Mucahit
    Kavuk, Eray Mert
    Tosun, Ayse
    Sonuc, Sibel B.
    Kosucu, Bilgin
    Basar, Ayse
    KNOWLEDGE-BASED SYSTEMS, 2022, 243
  • [5] Reinforcement Learning for Diabetes Blood Glucose Control with Meal Information
    Zhu, Jinhao
    Zhang, Yinjia
    Rao, Weixiong
    Zhao, Qinpei
    Li, Jiangfeng
    Wang, Congrong
    BIOINFORMATICS RESEARCH AND APPLICATIONS, ISBRA 2021, 2021, 13064 : 80 - 91
  • [6] An automatic deep reinforcement learning bolus calculator for automated insulin delivery systems
    Ahmad, Sayyar
    Beneyto, Aleix
    Zhu, Taiyu
    Contreras, Ivan
    Georgiou, Pantelis
    Vehi, Josep
SCIENTIFIC REPORTS, 2024, 14 (01)
  • [7] An Insulin Bolus Advisor for Type 1 Diabetes Using Deep Reinforcement Learning
    Zhu, Taiyu
    Li, Kezhi
    Kuang, Lei
    Herrero, Pau
    Georgiou, Pantelis
    SENSORS, 2020, 20 (18) : 1 - 15
  • [8] PERFORMANCE OF THE OMNIPOD® 5 AUTOMATED INSULIN DELIVERY SYSTEM WITH AND WITHOUT PRE-MEAL BOLUS
    Ekhlaspour, L.
    Buckingham, B.
    Huyett, L.
    Criego, A.
    Carlson, A.
    Brown, S.
    Weinstock, R.
    Hansen, D.
    Bode, B.
    Forlenza, G.
    Levy, C.
    Macleish, S.
    Desalvo, D.
    Hirsch, I.
    Jones, T.
    Mehta, S.
    Laffel, L.
    Sherr, J.
    Bhargava, A.
    Shah, V.
    Dumais, B.
    Ly, T.
    DIABETES TECHNOLOGY & THERAPEUTICS, 2022, 24 : A81 - A82
  • [9] Surveys without Questions: A Reinforcement Learning Approach
    Sinha, Atanu R.
    Jain, Deepali
    Sheoran, Nikhil
    Khosla, Sopan
    Sasidharan, Reshmi
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 257 - 264
  • [10] Reinforcement Learning Ramp Metering without Complete Information
    Wang, Xing-Ju
    Xi, Xiao-Ming
    Gao, Gui-Feng
    JOURNAL OF CONTROL SCIENCE AND ENGINEERING, 2012, 2012