Benchmarking protein language models for protein crystallization

Cited: 0
Authors
Mall, Raghvendra [1 ]
Kaushik, Rahul [1 ]
Martinez, Zachary A. [2 ]
Thomson, Matt W. [2 ]
Castiglione, Filippo [1 ,3 ]
Affiliations
[1] Technol Innovat Inst, Biotechnol Res Ctr, POB 9639, Abu Dhabi, U Arab Emirates
[2] CALTECH, Div Biol & Bioengn, Pasadena, CA 91125 USA
[3] Natl Res Council Italy, Inst Appl Comp, I-00185 Rome, Italy
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, No. 1
Keywords
Open protein language models (PLMs); Protein crystallization; Benchmarking; Protein generation; PROPENSITY PREDICTION; REFINEMENT
DOI
10.1038/s41598-025-86519-5
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject classification codes
07; 0710; 09
Abstract
The problem of protein structure determination is usually solved by X-ray crystallography. Several in silico deep learning methods have been developed to predict the crystallization propensity of a protein from its sequence, aiming to overcome the high attrition rate, experimental cost, and extensive trial-and-error of crystallization experiments. In this work, we benchmark the power of open protein language models (PLMs) through the TRILL platform, a bespoke framework that democratizes the use of PLMs, for the task of predicting the crystallization propensity of proteins. By comparing LightGBM and XGBoost classifiers built on the average embedding representations of proteins learned by different PLMs, such as ESM2, Ankh, ProtT5-XL, ProstT5, xTrimoPGLM, and SaProt, with the performance of state-of-the-art sequence-based methods like DeepCrystal, ATTCrys, and CLPred, we identify the most effective methods for predicting crystallization outcomes. The LightGBM classifiers built on embeddings from the ESM2 models with 30 and 36 transformer layers (150 million and 3,000 million parameters, respectively) outperform all compared models by 3-5% across various evaluation metrics, including AUPR (Area Under the Precision-Recall Curve), AUC (Area Under the Receiver Operating Characteristic Curve), and F1 score, on independent test sets. Furthermore, we fine-tune the ProtGPT2 model available via TRILL to generate crystallizable proteins. Starting from 3,000 generated proteins and applying a series of filtration steps, including a consensus of all open PLM-based classifiers, sequence-identity reduction with CD-HIT, secondary-structure compatibility, aggregation screening, homology search, and foldability evaluation, we identified a set of five novel proteins as potentially crystallizable.
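The core benchmarking recipe described in the abstract (mean-pooled PLM embeddings fed to a gradient-boosted classifier) can be sketched in a few lines of Python. The sketch below is a minimal illustration, not the authors' pipeline: it assumes the fair-esm and lightgbm packages, uses a small ESM2 checkpoint for speed rather than the 30- or 36-layer models benchmarked in the paper, omits the TRILL invocation, and uses made-up toy sequences and labels in place of the real training data.

import numpy as np
import torch
import esm  # fair-esm package
from lightgbm import LGBMClassifier

# Load a small ESM2 checkpoint for speed; the paper benchmarks larger
# variants (e.g. the 30- and 36-layer models with 150M and 3B parameters).
model, alphabet = esm.pretrained.esm2_t6_8M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

def embed(sequences):
    """Mean-pool per-residue ESM2 representations into one vector per protein."""
    data = [(f"seq{i}", s) for i, s in enumerate(sequences)]
    _, _, tokens = batch_converter(data)
    with torch.no_grad():
        out = model(tokens, repr_layers=[model.num_layers])
    reps = out["representations"][model.num_layers]
    # Position 0 is BOS and position len(seq)+1 is EOS; average residues only,
    # which also ignores any padding added for shorter sequences in the batch.
    return np.stack([
        reps[i, 1:len(seq) + 1].mean(0).numpy()
        for i, (_, seq) in enumerate(data)
    ])

# Toy labelled sequences (1 = crystallizable, 0 = not); purely illustrative.
train_seqs = [
    "MSDKIIHLTDDSFDTDVLKADGAILVDFWAEWCGPCKMIAPILDEIADEY",
    "MGSSHHHHHHSSGLVPRGSHMASMTGGQQMGRGS",
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ",
    "MAHHHHHHVGTGSNDDDDKSPDP",
]
train_y = [1, 0, 1, 0]

clf = LGBMClassifier(n_estimators=100, min_child_samples=1)
clf.fit(embed(train_seqs), train_y)
print(clf.predict_proba(embed(train_seqs))[:, 1])  # sanity check on train set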
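The consensus step of the generation pipeline can be sketched under the same assumptions: a generated sequence survives only if every PLM-based classifier calls it crystallizable. The interface below (one scikit-learn-style classifier paired with one embedding function per PLM) follows the previous sketch, and the 0.5 cut-off is an illustrative choice, not a value taken from the paper.

def consensus_filter(sequences, classifiers, embedders, threshold=0.5):
    """Keep sequences that every classifier/embedder pair accepts."""
    kept = []
    for seq in sequences:
        probs = [
            clf.predict_proba(embed_fn([seq]))[0, 1]
            for clf, embed_fn in zip(classifiers, embedders)
        ]
        if all(p >= threshold for p in probs):
            kept.append(seq)
    return kept

# Usage with the classifier and embedder from the previous sketch:
# survivors = consensus_filter(generated_seqs, [clf], [embed])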
Pages: 17