Multiple sequence alignment based on deep reinforcement learning with self-attention and positional encoding

Times Cited: 0
Authors
Liu, Yuhang [1 ]
Yuan, Hao [1 ]
Zhang, Qiang [1 ]
Wang, Zixuan [2 ]
Xiong, Shuwen [1 ]
Wen, Naifeng [3 ]
Zhang, Yongqing [1 ]
Affiliations
[1] Chengdu Univ Informat Technol, Sch Comp Sci, Chengdu 610225, Peoples R China
[2] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
[3] Dalian Minzu Univ, Sch Mech & Elect Engn, Dalian 116600, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
T-COFFEE;
DOI
10.1093/bioinformatics/btad636
Chinese Library Classification (CLC)
Q5 [Biochemistry];
Discipline Codes
071010; 081704;
Abstract
Motivation: Multiple sequence alignment (MSA) is a long-standing research focus and a routine step in sequence analysis. However, no definitive solution exists because MSA is an NP-complete problem, and existing methods still leave room to improve accuracy.
Results: We propose DPAMSA, a Deep reinforcement learning method with Positional encoding and self-Attention for MSA, to improve alignment accuracy. Inspired by translation techniques in natural language processing, we introduce self-attention and positional encoding to improve accuracy and reliability. First, positional encoding encodes each position in the sequence so that nucleotide position information is not lost. Second, a self-attention module extracts the key features of the sequences. These features are then fed into a multi-layer perceptron, which determines the insertion position of each gap from the extracted features. In addition, a novel reinforcement learning environment converts classic progressive alignment into progressive column alignment, generating each column's sub-alignment step by step; the sub-alignments are finally merged into the complete alignment. Extensive experiments on several datasets validate the method's effectiveness for MSA, outperforming several state-of-the-art methods in terms of the Sum-of-pairs and Column scores.
Availability and implementation: DPAMSA is implemented in Python and available as open-source software from https://github.com/ZhangLab312/DPAMSA.
Pages: 10
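
As a rough illustration of the pipeline described in the abstract (positional encoding, self-attention feature extraction, and a multi-layer perceptron that scores gap-insertion actions), the following minimal PyTorch sketch shows how such a policy network could be wired together. The module names, embedding vocabulary, dimensions, and action-space size (n_actions) are illustrative assumptions for this sketch and do not reproduce the authors' released implementation at the repository above.

# Hypothetical sketch of a DPAMSA-style policy network: positional encoding +
# self-attention over an integer-encoded RL state, followed by an MLP that
# scores candidate gap-insertion actions. All names and sizes are assumptions.
import math
import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding, added to token embeddings."""

    def __init__(self, d_model: int, max_len: int = 1024):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe.unsqueeze(0))  # (1, max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); inject position information element-wise
        return x + self.pe[:, : x.size(1)]


class GapPolicyNet(nn.Module):
    """Encodes the current alignment state and scores gap-insertion actions."""

    def __init__(self, vocab_size: int = 6, d_model: int = 64,
                 n_heads: int = 4, n_actions: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # e.g. A/C/G/T/gap/pad
        self.pos_enc = PositionalEncoding(d_model)
        self.attn = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=128,
                                               batch_first=True)
        self.mlp = nn.Sequential(                        # action-scoring head
            nn.Linear(d_model, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer-encoded nucleotides of the RL state
        h = self.pos_enc(self.embed(tokens))
        h = self.attn(h)                   # self-attention feature extraction
        return self.mlp(h.mean(dim=1))     # (batch, n_actions) action scores


if __name__ == "__main__":
    state = torch.randint(0, 6, (2, 20))   # two toy alignment states
    scores = GapPolicyNet()(state)
    print(scores.shape)                    # torch.Size([2, 32])

In a deep-Q-style training loop, these per-action scores would be compared against targets derived from the alignment reward (e.g. Sum-of-pairs improvement) as each column's sub-alignment is built; that loop and the column-wise environment are omitted here.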