Hierarchical reinforcement learning for automatic disease diagnosis

Cited by: 11
Authors
Zhong, Cheng [1 ]
Liao, Kangenbei [1 ]
Chen, Wei [1 ]
Liu, Qianlong [2 ]
Peng, Baolin [3 ]
Huang, Xuanjing [4 ]
Peng, Jiajie [5 ]
Wei, Zhongyu [1 ,5 ]
Affiliations
[1] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[2] Alibaba Grp, Hangzhou 310052, Peoples R China
[3] Microsoft Res, Redmond, WA 98052 USA
[4] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
[5] Fudan Univ, Res Inst Intelligent Complex Systems, Shanghai 200433, Peoples R China
DOI: 10.1093/bioinformatics/btac408
Chinese Library Classification: Q5 [Biochemistry]
Discipline codes: 071010; 081704
Abstract
Motivation: A disease diagnosis-oriented dialog system models the interactive consultation procedure as a Markov decision process, and reinforcement learning algorithms are used to solve it. Existing approaches usually employ a flat policy structure that treats all symptoms and diseases equally when selecting actions. This strategy works well in simple scenarios where the action space is small; however, its efficiency is challenged in real-world environments. Inspired by the offline consultation process, we propose to integrate a two-level hierarchical policy structure into the dialog system for policy learning. The high-level policy consists of a master model responsible for triggering a low-level model; the low-level policy consists of several symptom checkers and a disease classifier. The proposed policy structure is capable of handling diagnosis problems involving large numbers of diseases and symptoms. Results: Experimental results on three real-world datasets and a synthetic dataset demonstrate that our hierarchical framework achieves higher accuracy and symptom recall in disease diagnosis compared with existing systems. We construct a benchmark, including datasets and implementations of existing algorithms, to encourage follow-up research.
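The two-level structure described in the abstract (a master that triggers either a symptom checker or the disease classifier) can be sketched roughly as follows. This is a minimal illustration, not the paper's method: all class and method names are invented, and the master here uses a simple coverage heuristic where the paper learns both levels with reinforcement learning.

```python
class SymptomChecker:
    """Low-level worker: asks about unchecked symptoms within one disease group."""
    def __init__(self, symptoms):
        self.symptoms = list(symptoms)

    def next_question(self, asked):
        for s in self.symptoms:
            if s not in asked:
                return s
        return None  # this group is exhausted

    def unasked(self, asked):
        return sum(1 for s in self.symptoms if s not in asked)


class DiseaseClassifier:
    """Low-level worker: maps confirmed symptoms to the best-matching disease."""
    def __init__(self, disease_symptoms):
        self.disease_symptoms = {d: set(v) for d, v in disease_symptoms.items()}

    def diagnose(self, confirmed):
        # pick the disease whose known symptoms overlap most with confirmed ones
        return max(self.disease_symptoms,
                   key=lambda d: len(self.disease_symptoms[d] & confirmed))


class MasterPolicy:
    """High-level policy: triggers a checker each turn, then the classifier.

    A learned RL policy would choose the checker; here a heuristic
    (most unasked symptoms) stands in for it.
    """
    def __init__(self, checkers, classifier, max_turns=5):
        self.checkers = checkers
        self.classifier = classifier
        self.max_turns = max_turns

    def run(self, true_symptoms):
        asked, confirmed = set(), set()
        for _ in range(self.max_turns):
            checker = max(self.checkers, key=lambda c: c.unasked(asked))
            question = checker.next_question(asked)
            if question is None:
                break  # every checker is exhausted; hand over to the classifier
            asked.add(question)
            if question in true_symptoms:
                confirmed.add(question)
        return self.classifier.diagnose(confirmed)
```

Splitting the action space this way keeps each worker's decision small (its own symptom group) even when the total number of symptoms is large, which is the efficiency argument the abstract makes against flat policies.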
Pages: 3995-4001 (7 pages)
Related papers (50 in total)
  • [31] Hierarchical Reinforcement Learning for Quadruped Locomotion
    Jain, Deepali
    Iscen, Atil
    Caluwaerts, Ken
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 7551 - 7557
  • [32] Hierarchical Reinforcement Learning With Timed Subgoals
    Guertler, Nico
    Buechler, Dieter
    Martius, Georg
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [33] Reinforcement Active Learning Hierarchical Loops
    Gordon, Goren
    Ahissar, Ehud
    2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 3008 - 3015
  • [34] Recent Advances in Hierarchical Reinforcement Learning
    Andrew G. Barto
    Sridhar Mahadevan
    Discrete Event Dynamic Systems, 2003, 13 : 41 - 77
  • [35] Recent Advances in Hierarchical Reinforcement Learning
    Andrew G. Barto
    Sridhar Mahadevan
    Discrete Event Dynamic Systems, 2003, 13 (4) : 341 - 379
  • [36] Reinforcement Learning From Hierarchical Critics
    Cao, Zehong
    Lin, Chin-Teng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (02) : 1066 - 1073
  • [37] Hierarchical Adversarial Inverse Reinforcement Learning
    Chen, Jiayu
    Lan, Tian
    Aggarwal, Vaneet
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12) : 17549 - 17558
  • [38] Partial Order Hierarchical Reinforcement Learning
    Hengst, Bernhard
    AI 2008: ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2008, 5360 : 138 - 149
  • [39] Compositional Transfer in Hierarchical Reinforcement Learning
    Wulfmeier, Markus
    Abdolmaleki, Abbas
    Hafner, Roland
    Springenberg, Jost Tobias
    Neunert, Michael
    Hertweck, Tim
    Lampe, Thomas
    Siegel, Noah
    Heess, Nicolas
    Riedmiller, Martin
    ROBOTICS: SCIENCE AND SYSTEMS XVI, 2020,
  • [40] Hierarchical Bayesian Inverse Reinforcement Learning
    Choi, Jaedeug
    Kim, Kee-Eung
    IEEE TRANSACTIONS ON CYBERNETICS, 2015, 45 (04) : 793 - 805