ProkBERT family: genomic language models for microbiome applications

Cited: 3
Authors
Ligeti, Balazs [1]
Szepesi-Nagy, Istvan [1]
Bodnar, Babett [1]
Ligeti-Nagy, Noemi [2]
Juhasz, Janos [1,3]
Affiliations
[1] Pazmany Peter Catholic Univ, Fac Informat Technol & Bion, Budapest, Hungary
[2] HUN REN Hungarian Res Ctr Linguist, Language Technol Res Grp, Budapest, Hungary
[3] Semmelweis Univ, Inst Med Microbiol, Budapest, Hungary
Keywords
genomic language models; language models; promoter; phage; BERT; transformer models; LCA tokenization; machine learning in microbiology; CD-HIT
DOI
10.3389/fmicb.2023.1331233
Chinese Library Classification
Q93 [Microbiology]
Subject Classification Codes
071005; 100705
Abstract
Background: In the evolving landscape of microbiology and microbiome analysis, machine learning is crucial for understanding complex microbial interactions and for predicting and recognizing novel functionality in extensive datasets. However, the effectiveness of these methods in microbiology is challenged by the complex and heterogeneous nature of microbial data, further complicated by low signal-to-noise ratios, context dependency, and a significant shortage of appropriately labeled datasets. This study introduces the ProkBERT model family, a collection of large language models designed for genomic tasks. The models provide a generalizable representation of nucleotide sequences, learned from unlabeled genome data. This approach helps overcome the above limitations, thereby improving our understanding of microbial ecosystems and their impact on health and disease.

Methods: The ProkBERT models are based on transfer learning and self-supervised methodologies, enabling them to use the abundant yet complex microbial data effectively. The novel Local Context-Aware (LCA) tokenization technique marks a significant advance, allowing ProkBERT to overcome the contextual limitations of traditional transformer models: it retains rich local context while remaining adaptable across a variety of bioinformatics tasks.

Results: In practical applications such as promoter prediction and phage identification, the ProkBERT models show superior performance. For promoter prediction, the top-performing model achieved a Matthews correlation coefficient (MCC) of 0.74 on E. coli and 0.62 in mixed-species settings. In phage identification, ProkBERT models consistently outperformed established tools such as VirSorter2 and DeepVirFinder, achieving an MCC of 0.85. These results underscore the models' accuracy and generalizability in both supervised and unsupervised tasks.

Conclusions: The ProkBERT model family is a compact yet powerful tool for microbiology and bioinformatics. Its capacity for rapid, accurate analysis and its adaptability across a spectrum of tasks make it a notable advance in machine learning applications in microbiology. The models are available on GitHub (https://github.com/nbrg-ppcu/prokbert) and Hugging Face (https://huggingface.co/neuralbioinfo), providing an accessible tool for the community.
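For reference, the MCC figures quoted above are computed from the binary confusion matrix as MCC = (TP·TN − FP·FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)); the score ranges from −1 to 1, with 1 indicating perfect prediction and 0 indicating chance-level performance.

The abstract does not spell out the LCA tokenization procedure. The following minimal Python sketch illustrates the underlying idea of overlapping k-mer tokenization, in which a shift smaller than the k-mer length lets consecutive tokens share local context. The function name and the parameter defaults are illustrative assumptions, not the ProkBERT library's API.

    # Illustrative sketch of overlapping k-mer ("LCA-style") tokenization.
    # The k-mer length and shift defaults are assumptions for demonstration,
    # not the ProkBERT defaults.
    def lca_tokenize(sequence: str, kmer: int = 6, shift: int = 1) -> list[str]:
        """Split a nucleotide sequence into overlapping k-mers.

        With shift < kmer, consecutive tokens overlap, so each token
        retains local sequence context from its neighbours.
        """
        sequence = sequence.upper()
        return [sequence[i:i + kmer]
                for i in range(0, len(sequence) - kmer + 1, shift)]

    print(lca_tokenize("ATGCGTACGT"))
    # ['ATGCGT', 'TGCGTA', 'GCGTAC', 'CGTACG', 'GTACGT']

Since the conclusions point to the Hugging Face repository, a hypothetical loading snippet follows; the checkpoint name prokbert-mini and the need for trust_remote_code are assumptions, so consult the repository documentation for the actual model identifiers.

    # Hypothetical usage sketch: loading a ProkBERT checkpoint with the
    # Hugging Face transformers library. The model id below is assumed;
    # see https://huggingface.co/neuralbioinfo for the published names.
    from transformers import AutoModel, AutoTokenizer

    model_id = "neuralbioinfo/prokbert-mini"  # assumed checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_id, trust_remote_code=True)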
Pages: 19
Related papers
(50 records in total)
  • [1] Are genomic language models all you need? Exploring genomic language models on protein downstream tasks
    Boshar, Sam
    Trop, Evan
    de Almeida, Bernardo P.
    Copoiu, Liviu
    Pierrot, Thomas
    BIOINFORMATICS, 2024, 40 (09)
  • [2] Estimation of language models for new spoken language applications
    Issar, S.
    ICSLP 96 - FOURTH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, PROCEEDINGS, VOLS 1-4, 1996: 869-872
  • [3] Recent advances in deep learning and language models for studying the microbiome
    Yan, Binghao
    Nam, Yunbi
    Li, Lingyao
    Deek, Rebecca A.
    Li, Hongzhe
    Ma, Siyuan
    FRONTIERS IN GENETICS, 2025, 15
  • [4] Genomic language models could transform medicine but not yet
    Consens, Micaela Elisa
    Li, Ben
    Poetsch, Anna R.
    Gilbert, Stephen
    NPJ DIGITAL MEDICINE, 8 (1)
  • [5] Large language models for oncological applications
    Sorin, Vera
    Barash, Yiftach
    Konen, Eli
    Klang, Eyal
    JOURNAL OF CANCER RESEARCH AND CLINICAL ONCOLOGY, 2023, 149 (11): 9505-9508
  • [6] Industrial applications of large language models
    Raza, Mubashar
    Jahangir, Zarmina
    Riaz, Muhammad Bilal
    Saeed, Muhammad Jasim
    Sattar, Muhammad Awais
    SCIENTIFIC REPORTS, 15 (1)
  • [7] Applications of Large Language Models in Pathology
    Cheng, Jerome
    BIOENGINEERING-BASEL, 2024, 11 (04)
  • [8] Large language models and their applications in bioinformatics
    Sarumi, Oluwafemi A.
    Heider, Dominik
    COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL, 2024, 23: 3498-3505
  • [9] Applications of large language models in oncology
    Loefflert, Chiara M.
    Bressem, Keno K.
    Truhn, Daniel
    ONKOLOGIE, 2024, 30 (05): 388-393