DRAG: design RNAs as hierarchical graphs with reinforcement learning

Cited by: 0
Authors
Li, Yichong [1 ]
Pan, Xiaoyong [2 ,3 ]
Shen, Hongbin [2 ,3 ]
Yang, Yang [1 ,4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Inst Image Proc & Pattern Recognit, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[3] Minist Educ China, Key Lab Syst Control & Informat Proc, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[4] Shanghai Jiao Tong Univ, Key Lab Shanghai Educ Commiss Intelligent Interact, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
RNA sequence design; hierarchical division; reinforcement learning; graph neural networks;
DOI
10.1093/bib/bbaf106
Chinese Library Classification
Q5 [Biochemistry];
Discipline codes
071010 ; 081704 ;
Abstract
The rapid development of RNA vaccines and therapeutics places intensive demands on RNA sequence design. RNA sequence design, or RNA inverse folding, aims to generate RNA sequences that fold into specified target structures. Efficient, high-accuracy prediction models for RNA secondary structure are now available, and they provide a basis for computational RNA sequence design methods. In particular, reinforcement learning (RL) has emerged as a promising approach for RNA design because it can learn from trial and error in generation tasks and does not require ground-truth data. However, existing RL methods give limited consideration to the complex hierarchical structures present in RNA design environments. To address this limitation, we propose DRAG, an RL method that builds design environments for target secondary structures through hierarchical division based on graph neural networks. In extensive experiments on benchmark datasets, DRAG performs markedly better than current machine-learning approaches to RNA sequence design, and this advantage is particularly evident on long, intricate tasks involving structures of significant depth.
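The propose-score-improve loop that the abstract describes can be sketched as a toy. This is not DRAG's algorithm: the RL policy is replaced here by simple hill climbing, and the folding oracle by a base-pair compatibility proxy (checking that positions paired in the target dot-bracket structure carry Watson-Crick or wobble partners); `parse_pairs`, `design`, and the reward function are illustrative names, not the paper's API.

```python
import random

def parse_pairs(dot_bracket):
    """Map each position to its partner in a dot-bracket structure."""
    stack, pairs = [], {}
    for i, ch in enumerate(dot_bracket):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            j = stack.pop()
            pairs[j], pairs[i] = i, j
    return pairs

# Watson-Crick pairs plus G-U wobble.
COMPLEMENT = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'),
              ('G', 'U'), ('U', 'G')}

def reward(seq, pairs):
    """Fraction of target base pairs realized by complementary bases."""
    if not pairs:
        return 1.0
    ok = sum((seq[i], seq[j]) in COMPLEMENT
             for i, j in pairs.items() if i < j)
    return ok / (len(pairs) // 2)

def design(target, steps=5000, seed=0):
    """Hill-climbing stand-in for an RL policy: mutate one base,
    keep the mutation if the reward does not decrease."""
    rng = random.Random(seed)
    pairs = parse_pairs(target)
    seq = [rng.choice('ACGU') for _ in target]
    best = reward(seq, pairs)
    for _ in range(steps):
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice('ACGU')
        r = reward(seq, pairs)
        if r >= best:
            best = r
        else:
            seq[i] = old  # revert a harmful mutation
    return ''.join(seq), best

target = "((((...))))"
seq, score = design(target)
print(seq, score)
```

A real system would replace the compatibility proxy with a secondary-structure predictor (the abstract notes such models motivate computational design) and the hill climber with a learned policy; the toy only shows why no ground-truth sequences are needed: the structural reward alone drives the search.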
Pages: 10