Exploring Multi-Objective Deep Reinforcement Learning Methods for Drug Design

Cited by: 2
Authors
Al Jumaily, Aws [1 ]
Mukaidaisi, Muhetaer [1 ]
Vu, Andrew [1 ]
Tchagang, Alain [2 ]
Li, Yifeng [1 ]
Affiliations
[1] Brock Univ, Dept Comp Sci, St Catharines, ON, Canada
[2] Natl Res Council Canada, Digital Technol Res Ctr, Ottawa, ON, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
reinforcement learning; deep reinforcement learning; multi-objective optimization; drug design; DeepFMPO;
DOI
10.1109/CIBCB55180.2022.9863052
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Drug design and optimization are complex tasks that require strategically efficient exploration of an extremely vast search space. Various fragmentation strategies have been proposed in the literature to reduce the complexity of the molecular search space. From the optimization perspective, drug design can be viewed as a multi-objective optimization process. Deep reinforcement learning (DRL) frameworks have shown promising performance in this field; however, lengthy training periods and inefficient use of sample data limit the scalability of current frameworks. In this paper, we (1) review the fundamental concepts of deep and multi-objective RL methods and their applications in molecular design, (2) investigate the performance of a recent multi-objective, fragment-based DRL drug design framework, DeepFMPO, in a real application by integrating a protein-ligand docking affinity score, and (3) compare this method with a single-objective variant. Through experiments, we find that the DeepFMPO framework (with docking score) achieves limited success but is highly unstable. Our findings encourage further exploration and improvement. We examine possible sources of the framework's instability and suggest modifications to stabilize it.
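To make the multi-objective setup in the abstract concrete: frameworks of this kind typically collapse several molecular properties into one scalar reward for the RL agent. The sketch below is not DeepFMPO's actual code; the property names, weights, and normalization bounds are illustrative assumptions, shown only to convey how a docking affinity score and a drug-likeness score might be scalarized.

```python
# Minimal sketch of weighted-sum scalarization for a multi-objective
# drug-design reward. Each raw property is clipped and scaled to [0, 1],
# then combined by a weighted sum into a single scalar RL reward.

def normalize(value, low, high):
    """Clip-and-scale a raw property value into [0, 1]."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def scalarized_reward(properties, weights, bounds):
    """Combine per-objective scores into one scalar reward.

    properties: {objective_name: raw_value}
    weights:    {objective_name: weight} (assumed to sum to 1)
    bounds:     {objective_name: (low, high)} normalization range
    """
    return sum(
        w * normalize(properties[name], *bounds[name])
        for name, w in weights.items()
    )

# Hypothetical example: trade off docking affinity against drug-likeness.
# Docking scores are more negative for stronger binding, so the raw score
# (e.g. -8.5 kcal/mol) is negated before normalization.
props   = {"neg_docking": 8.5, "qed": 0.62}
weights = {"neg_docking": 0.6, "qed": 0.4}
bounds  = {"neg_docking": (0.0, 12.0), "qed": (0.0, 1.0)}

reward = scalarized_reward(props, weights, bounds)  # about 0.673
```

A weighted sum is the simplest scalarization; its weakness, relevant to the instability discussed in the paper, is that fixed weights can let one noisy objective (such as a docking score) dominate the reward signal.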
Pages: 107-114
Page count: 8