DRAG: design RNAs as hierarchical graphs with reinforcement learning

Citations: 0
Authors
Li, Yichong [1 ]
Pan, Xiaoyong [2 ,3 ]
Shen, Hongbin [2 ,3 ]
Yang, Yang [1 ,4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Inst Image Proc & Pattern Recognit, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[3] Minist Educ China, Key Lab Syst Control & Informat Proc, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[4] Shanghai Jiao Tong Univ, Key Lab Shanghai Educ Commiss Intelligent Interact, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
RNA sequence design; hierarchical division; reinforcement learning; graph neural networks;
DOI
10.1093/bib/bbaf106
Chinese Library Classification
Q5 [Biochemistry];
Discipline Codes
071010; 081704;
Abstract
The rapid development of RNA vaccines and therapeutics places heavy demands on RNA sequence design. RNA sequence design, also known as RNA inverse folding, aims to generate RNA sequences that fold into specified target structures. Efficient, high-accuracy prediction models for RNA secondary structure are now available, providing a foundation for computational RNA sequence design. In particular, reinforcement learning (RL) has emerged as a promising approach because it learns by trial and error and requires no ground-truth data. However, existing RL methods take little account of the complex hierarchical structures found in RNA design environments. To address this limitation, we propose DRAG, an RL method that builds design environments for target secondary structures through hierarchical division based on graph neural networks. In extensive experiments on benchmark datasets, DRAG substantially outperforms current machine-learning approaches to RNA sequence design, and its advantage is especially pronounced on long, intricate tasks involving deeply nested structures.
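To make the task in the abstract concrete, the Python sketch below shows the propose-fold-score loop that RL-based inverse-folding methods optimize, plus a toy stand-in for hierarchical division that splits a dot-bracket structure into its outermost branches. This is an illustrative assumption, not DRAG's actual algorithm or API: the helper names (top_level_units, mutate_and_keep_best) are invented here, the search is a naive hill climber rather than a learned policy, and folding relies on the ViennaRNA Python bindings (RNA.fold).

    import random

    import RNA  # ViennaRNA Python bindings; RNA.fold(seq) returns (structure, mfe)

    BASES = "ACGU"

    def structure_distance(predicted: str, target: str) -> int:
        # Hamming distance between two equal-length dot-bracket strings.
        return sum(p != t for p, t in zip(predicted, target))

    def top_level_units(structure: str) -> list:
        # Toy stand-in for hierarchical division: return the (start, end)
        # span of each outermost base-paired branch, each of which could be
        # designed as its own sub-task before the pieces are recombined.
        units, depth, start = [], 0, 0
        for i, ch in enumerate(structure):
            if ch == "(":
                if depth == 0:
                    start = i
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth == 0:
                    units.append((start, i + 1))
        return units

    def mutate_and_keep_best(target: str, steps: int = 2000, seed: int = 0):
        # Naive hill climber over the propose-fold-score loop. An RL agent
        # replaces the blind mutation with a learned policy, but the
        # environment interface (propose, fold, score) is the same.
        rng = random.Random(seed)
        seq = [rng.choice(BASES) for _ in target]
        best = structure_distance(RNA.fold("".join(seq))[0], target)
        for _ in range(steps):
            i = rng.randrange(len(seq))
            old, seq[i] = seq[i], rng.choice(BASES)
            dist = structure_distance(RNA.fold("".join(seq))[0], target)
            if dist <= best:
                best = dist      # keep neutral or improving mutations
            else:
                seq[i] = old     # revert mutations that worsen the fold
            if best == 0:
                break
        return "".join(seq), best

    if __name__ == "__main__":
        target = "((((...))))..((...))"    # two hairpins joined by a linker
        print(top_level_units(target))      # [(0, 11), (13, 20)]
        seq, dist = mutate_and_keep_best(target)
        print(seq, RNA.fold(seq)[0], dist)

Replacing the blind mutation step with a policy trained by an RL algorithm, and running the loop per substructure before recombining, gives the general shape of the approach the abstract describes.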
Pages: 10
Related Papers
50 items in total
  • [21] Vezhnevets, A.S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., Kavukcuoglu, K. FeUdal Networks for Hierarchical Reinforcement Learning. International Conference on Machine Learning (ICML), PMLR 70, 2017.
  • [22] Barto, A.G., Mahadevan, S. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems: Theory and Applications, 2003, 13(1-2): 41-77.
  • [23] Rasmussen, D., Voelker, A., Eliasmith, C. A neural model of hierarchical reinforcement learning. PLOS ONE, 2017, 12(7).
  • [24] Jain, D., Iscen, A., Caluwaerts, K. Hierarchical Reinforcement Learning for Quadruped Locomotion. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 7551-7557.
  • [25] Guertler, N., Buechler, D., Martius, G. Hierarchical Reinforcement Learning With Timed Subgoals. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  • [26] Gordon, G., Ahissar, E. Reinforcement Active Learning Hierarchical Loops. 2011 International Joint Conference on Neural Networks (IJCNN), 2011: 3008-3015.
  • [29] Cao, Z., Lin, C.-T. Reinforcement Learning From Hierarchical Critics. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(2): 1066-1073.
  • [30] Chen, J., Lan, T., Aggarwal, V. Hierarchical Adversarial Inverse Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(12): 17549-17558.