Empowering Molecule Discovery for Molecule-Caption Translation With Large Language Models: A ChatGPT Perspective

Cited by: 4
Authors
Li, Jiatong [1]
Liu, Yunqing [1]
Fan, Wenqi [1]
Wei, Xiao-Yong [1]
Liu, Hui [2]
Tang, Jiliang [2]
Li, Qing [1]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hung Hom, Hong Kong, Peoples R China
[2] Michigan State Univ, E Lansing, MI 48824 USA
Keywords
Task analysis; Chatbots; Chemicals; Training; Recurrent neural networks; Computer architecture; Atoms; Drug discovery; large language models (LLMs); in-context learning; retrieval augmented generation
DOI
10.1109/TKDE.2024.3393356
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Molecule discovery plays a crucial role in many scientific fields, advancing the design of tailored materials and drugs and contributing to the development of society and human well-being. In particular, molecule-caption translation is an important task for molecule discovery, aligning human understanding with molecular space. However, most existing methods rely heavily on domain experts, require excessive computational cost, or suffer from sub-optimal performance. On the other hand, Large Language Models (LLMs), such as ChatGPT, have shown remarkable performance on various cross-modal tasks owing to their powerful capabilities in natural language understanding, generalization, and in-context learning (ICL), which offers unprecedented opportunities to advance molecule discovery. Although several previous works have tried to apply LLMs to this task, the lack of a domain-specific corpus and the difficulty of training specialized LLMs remain open challenges. In this work, we propose a novel LLM-based framework (MolReGPT) for molecule-caption translation, in which an In-Context Few-Shot Molecule Learning paradigm is introduced to empower molecule discovery with LLMs such as ChatGPT through their in-context learning capability, without domain-specific pre-training or fine-tuning. MolReGPT leverages the principle of molecular similarity to retrieve similar molecules and their text descriptions from a local database, enabling LLMs to learn the task from context examples. We evaluate the effectiveness of MolReGPT on molecule-caption translation, including molecule understanding and text-based molecule generation. Experimental results show that, without additional training, MolReGPT outperforms the fine-tuned MolT5-base and is comparable to MolT5-large. To the best of our knowledge, MolReGPT is the first work to leverage LLMs via in-context learning for molecule-caption translation in support of molecule discovery. Our work expands the scope of LLM applications and provides a new paradigm for molecule discovery and design.
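As a rough illustration of the retrieval-then-prompt workflow summarized in the abstract, the sketch below ranks database molecules by Tanimoto similarity of RDKit Morgan fingerprints and assembles a few-shot prompt from the retrieved (SMILES, caption) pairs. The similarity measure, the function names (morgan_fp, retrieve_examples, build_prompt), and the prompt wording are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch (not the paper's implementation): retrieve the most
# similar molecules from a local (SMILES, caption) database and build an
# in-context few-shot prompt for molecule captioning.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem


def morgan_fp(smiles: str):
    """Return a 2048-bit Morgan fingerprint (radius 2); assumes valid SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)


def retrieve_examples(query_smiles: str, database: list[tuple[str, str]], k: int = 2):
    """Return the k (SMILES, caption) pairs most similar to the query molecule."""
    query_fp = morgan_fp(query_smiles)
    scored = [
        (DataStructs.TanimotoSimilarity(query_fp, morgan_fp(smi)), smi, caption)
        for smi, caption in database
    ]
    scored.sort(key=lambda item: item[0], reverse=True)  # highest similarity first
    return [(smi, caption) for _, smi, caption in scored[:k]]


def build_prompt(query_smiles: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt from the retrieved context examples."""
    shots = "\n\n".join(
        f"Molecule SMILES: {smi}\nCaption: {caption}" for smi, caption in examples
    )
    return (
        "You are an expert chemist. Describe the molecule given its SMILES.\n\n"
        f"{shots}\n\nMolecule SMILES: {query_smiles}\nCaption:"
    )


if __name__ == "__main__":
    toy_db = [
        ("CCO", "Ethanol, a simple primary alcohol."),
        ("CC(=O)O", "Acetic acid, a weak carboxylic acid."),
        ("c1ccccc1", "Benzene, an aromatic hydrocarbon."),
    ]
    print(build_prompt("CCCO", retrieve_examples("CCCO", toy_db, k=2)))
```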
Pages: 6071-6083
Page count: 13
Related Papers
50 records in total
  • [31] Large Language Models Like ChatGPT in ABME: Author Guidelines
    Norris, Carly
    ANNALS OF BIOMEDICAL ENGINEERING, 2023, 51 (06) : 1121 - 1122
  • [32] Opportunities and challenges for ChatGPT and large language models in biomedicine and health
    Tian, Shubo
    Jin, Qiao
    Yeganova, Lana
    Lai, Po-Ting
    Zhu, Qingqing
    Chen, Xiuying
    Yang, Yifan
    Chen, Qingyu
    Kim, Won
    Comeau, Donald C.
    Islamaj, Rezarta
    Kapoor, Aadit
    Gao, Xin
    Lu, Zhiyong
    BRIEFINGS IN BIOINFORMATICS, 2024, 25 (01)
  • [33] ChatGPT for good? On opportunities and challenges of large language models for education
    Kasneci, Enkelejda
    Sessler, Kathrin
    Kuechemann, Stefan
    Bannert, Maria
    Dementieva, Daryna
    Fischer, Frank
    Gasser, Urs
    Groh, Georg
    Guennemann, Stephan
    Huellermeier, Eyke
    Krusche, Stephan
    Kutyniok, Gitta
    Michaeli, Tilman
    Nerdel, Claudia
    Pfeffer, Juergen
    Poquet, Oleksandra
    Sailer, Michael
    Schmidt, Albrecht
    Seidel, Tina
    Stadler, Matthias
    Weller, Jochen
    Kuhn, Jochen
    Kasneci, Gjergji
    LEARNING AND INDIVIDUAL DIFFERENCES, 2023, 103
  • [34] Probing into the Fairness of Large Language Models: A Case Study of ChatGPT
    Li, Yunqi
    Zhang, Lanjing
    Zhang, Yongfeng
    2024 58TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS, CISS, 2024,
  • [35] The use of ChatGPT and other large language models in surgical science
    Janssen, Boris V.
    Kazemier, Geert
    Besselink, Marc G.
    BJS OPEN, 2023, 7 (02):
  • [36] Do Large Language Models Understand Chemistry? A Conversation with ChatGPT
    Nascimento, Cayque Monteiro Castro
    Pimentel, Andre Silva
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2023, 63 (06) : 1649 - 1655
  • [37] The Security of Using Large Language Models: A Survey With Emphasis on ChatGPT
    Zhou, Wei
    Zhu, Xiaogang
    Han, Qing-Long
    Li, Lin
    Chen, Xiao
    Wen, Sheng
    Xiang, Yang
    IEEE/CAA JOURNAL OF AUTOMATICA SINICA, 2025, 12 (01) : 1 - 26
  • [38] The wide range of opportunities for large language models such as ChatGPT in rheumatology
    Hugle, Thomas
    RMD OPEN, 2023, 9 (02):
  • [40] Evaluation of ChatGPT and Gemini large language models for pharmacometrics with NONMEM
    Shin, Euibeom
    Yu, Yifan
    Bies, Robert R.
    Ramanathan, Murali
    JOURNAL OF PHARMACOKINETICS AND PHARMACODYNAMICS, 2024, 51 (03) : 187 - 197