DRAG: design RNAs as hierarchical graphs with reinforcement learning

Cited by: 0
Authors
Li, Yichong [1 ]
Pan, Xiaoyong [2 ,3 ]
Shen, Hongbin [2 ,3 ]
Yang, Yang [1 ,4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Inst Image Proc & Pattern Recognit, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[3] Minist Educ China, Key Lab Syst Control & Informat Proc, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
[4] Shanghai Jiao Tong Univ, Key Lab Shanghai Educ Commiss Intelligent Interact, 800 Dong Chuan Rd, Shanghai 200240, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
RNA sequence design; hierarchical division; reinforcement learning; graph neural networks;
DOI
10.1093/bib/bbaf106
Chinese Library Classification
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
The rapid development of RNA vaccines and therapeutics places stringent demands on RNA sequence design. RNA sequence design, or RNA inverse folding, aims to generate RNA sequences that fold into specified target structures. Efficient, high-accuracy prediction models for RNA secondary structure are now available and provide a basis for computational RNA sequence design. In particular, reinforcement learning (RL) has emerged as a promising approach for RNA design because it learns from trial and error in generation tasks and requires no ground-truth data. However, existing RL methods do not adequately model the complex hierarchical structures that arise in RNA design environments. To address this limitation, we propose DRAG, an RL method that builds design environments for target secondary structures through hierarchical division based on graph neural networks. In extensive experiments on benchmark datasets, DRAG performs remarkably well compared with current machine-learning approaches for RNA sequence design. This advantage is particularly evident on long, intricate tasks involving structures with significant depth.
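The trial-and-error setting the abstract describes can be sketched with a toy example. Everything below is illustrative, not DRAG's actual environment: the reward simply checks whether positions paired in the target dot-bracket structure hold complementary bases, and a random mutation loop stands in for the learned RL policy.

```python
import random

# Watson-Crick pairs plus the G-U wobble pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pair_table(dotbracket):
    """Map each '(' index to its matching ')' index."""
    stack, pairs = [], {}
    for i, c in enumerate(dotbracket):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs[stack.pop()] = i
    return pairs

def reward(seq, target):
    """Fraction of target base pairs realized by complementary bases."""
    pairs = pair_table(target)
    if not pairs:
        return 1.0
    ok = sum((seq[i], seq[j]) in PAIRS for i, j in pairs.items())
    return ok / len(pairs)

def design(target, steps=2000, seed=0):
    """Trial-and-error search: mutate one position, keep non-worsening moves."""
    rng = random.Random(seed)
    seq = [rng.choice("ACGU") for _ in target]
    best = reward(seq, target)
    for _ in range(steps):
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice("ACGU")
        r = reward(seq, target)
        if r >= best:
            best = r
        else:
            seq[i] = old  # revert a worsening mutation
    return "".join(seq), best

seq, r = design("((..((...))..))")
```

A real inverse-folding reward would score the candidate against a secondary-structure predictor rather than this pair-complementarity check, and DRAG further decomposes the target into a hierarchy of subgraphs handled by graph neural networks; this sketch only conveys the ground-truth-free, trial-and-error objective.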
Pages: 10