Empowering Molecule Discovery for Molecule-Caption Translation With Large Language Models: A ChatGPT Perspective

Cited by: 4
Authors
Li, Jiatong [1 ]
Liu, Yunqing [1 ]
Fan, Wenqi [1 ]
Wei, Xiao-Yong [1 ]
Liu, Hui [2 ]
Tang, Jiliang [2 ]
Li, Qing [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hung Hom, Hong Kong, Peoples R China
[2] Michigan State Univ, E Lansing, MI 48824 USA
Keywords
Task analysis; Chatbots; Chemicals; Training; Recurrent neural networks; Computer architecture; Atoms; Drug discovery; large language models (LLMs); in-context learning; retrieval augmented generation;
DOI
10.1109/TKDE.2024.3393356
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Molecule discovery plays a crucial role in various scientific fields, advancing the design of tailored materials and drugs and contributing to the development of society and human well-being. Specifically, molecule-caption translation is an important task for molecule discovery, aligning human understanding with molecular space. However, most existing methods heavily rely on domain experts, require excessive computational cost, or suffer from sub-optimal performance. On the other hand, Large Language Models (LLMs), like ChatGPT, have shown remarkable performance in various cross-modal tasks due to their powerful capabilities in natural language understanding, generalization, and in-context learning (ICL), which provide unprecedented opportunities to advance molecule discovery. Although several previous works have tried to apply LLMs to this task, the lack of a domain-specific corpus and the difficulty of training specialized LLMs remain challenges. In this work, we propose a novel LLM-based framework (MolReGPT) for molecule-caption translation, in which an In-Context Few-Shot Molecule Learning paradigm is introduced to empower molecule discovery with LLMs like ChatGPT by exploiting their in-context learning capability, without domain-specific pre-training or fine-tuning. MolReGPT leverages the principle of molecular similarity to retrieve similar molecules and their text descriptions from a local database, enabling LLMs to learn the task knowledge from context examples. We evaluate the effectiveness of MolReGPT on molecule-caption translation, including molecule understanding and text-based molecule generation. Experimental results show that, compared to fine-tuned models, MolReGPT outperforms MolT5-base and is comparable to MolT5-large without additional training. To the best of our knowledge, MolReGPT is the first work to leverage LLMs via in-context learning in molecule-caption translation for advancing molecule discovery.
Our work expands the scope of LLM applications and provides a new paradigm for molecule discovery and design.
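The abstract describes the core mechanism of MolReGPT: retrieve the molecules most similar to a query from a local database and place their (molecule, caption) pairs into the prompt as in-context few-shot examples. The following pure-Python sketch illustrates that retrieval-then-prompt idea only; the paper relies on chemical molecular-similarity measures (e.g., fingerprint-based), whereas here a character-bigram Jaccard similarity over SMILES strings stands in so the example has no chemistry dependencies. All function names, the toy database, and the similarity measure are illustrative assumptions, not the authors' implementation.

```python
def bigrams(smiles: str) -> set:
    """Character bigrams of a SMILES string, used as a crude structural proxy."""
    return {smiles[i:i + 2] for i in range(len(smiles) - 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard (Tanimoto-style) similarity between two SMILES strings."""
    fa, fb = bigrams(a), bigrams(b)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

def retrieve_examples(query: str, database: list, n: int = 2) -> list:
    """Return the n (smiles, caption) pairs most similar to the query molecule."""
    return sorted(database, key=lambda rec: similarity(query, rec[0]),
                  reverse=True)[:n]

def build_prompt(query: str, examples: list) -> str:
    """Assemble an in-context few-shot prompt for molecule captioning."""
    shots = "\n".join(f"Molecule: {s}\nCaption: {c}" for s, c in examples)
    return f"{shots}\nMolecule: {query}\nCaption:"

# Toy local database of molecule-caption pairs (illustrative only).
db = [
    ("CCO", "Ethanol, a simple primary alcohol."),
    ("CC(=O)O", "Acetic acid, a carboxylic acid."),
    ("c1ccccc1", "Benzene, an aromatic hydrocarbon."),
]

prompt = build_prompt("CCCO", retrieve_examples("CCCO", db))
```

The resulting `prompt` string would be sent to a general-purpose LLM, which infers the captioning task from the retrieved examples rather than from any domain-specific fine-tuning.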
Pages: 6071-6083 (13 pages)
Related Papers
50 in total
  • [41] ChatGPT on ECT Can Large Language Models Support Psychoeducation?
    Lundin, Robert M.
    Berk, Michael
    Ostergaard, Soren Dinesen
    JOURNAL OF ECT, 2023, 39 (03) : 130 - 133
  • [42] ChatGPT and Gemini large language models for pharmacometrics with NONMEM: comment
    Daungsupawong, Hinpetch
    Wiwanitkit, Viroj
    JOURNAL OF PHARMACOKINETICS AND PHARMACODYNAMICS, 2024, 51 (04) : 303 - 304
  • [43] A Surgical Perspective on Large Language Models
    Miller, Robert
    ANNALS OF SURGERY, 2023, 278 (02) : E211 - E213
  • [44] Can ChatGPT Truly Overcome Other Large Language Models?
    Ray, Partha
    CANADIAN ASSOCIATION OF RADIOLOGISTS JOURNAL-JOURNAL DE L ASSOCIATION CANADIENNE DES RADIOLOGISTES, 2024, 75 (02): : 429 - 429
  • [45] Causal Dataset Discovery with Large Language Models
    Liu, Junfei
    Sun, Shaotong
    Nargesian, Fatemeh
    WORKSHOP ON HUMAN-IN-THE-LOOP DATA ANALYTICS, HILDA 2024, 2024,
  • [46] RNA transport from transcription to localized translation: a single molecule perspective
    Basyuk, Eugenia
    Rage, Florence
    Bertrand, Edouard
    RNA BIOLOGY, 2021, 18 (09) : 1221 - 1237
  • [47] Empowering Cross-lingual Abilities of Instruction-tuned Large Language Models by Translation-following Demonstrations
    Ranaldi, Leonardo
    Pucci, Giulia
    Freitas, Andre
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 7961 - 7973
  • [48] SIMPLE-MODELS FOR LARGE-MOLECULE CALCULATIONS
    MCWEENY, R
    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, 1984, 26 (05) : 693 - 708
  • [49] Discovery of small molecule STING antagonists: Novel use of small molecule dimers to drug large protein pockets
    Siu, Tony
    Altman, Michael
    Baltus, Gretchen
    Childers, Matthew
    Ellis, Michael
    Gunaydin, Hakan
    Hatch, Harold
    Thu Ho
    Jewell, James
    Lacey, Brian
    Lesburg, Charles
    Pan, Bo-Sheng
    Sauvagnat, Berengere
    Schroeder, Gottfried
    Xu, Serena
    ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2019, 258
  • [50] Can ChatGPT Detect Intent? Evaluating Large Language Models for Spoken Language Understanding
    He, Mutian
    Garner, Philip N.
    INTERSPEECH 2023, 2023, : 1109 - 1113