Empowering Molecule Discovery for Molecule-Caption Translation With Large Language Models: A ChatGPT Perspective

Cited by: 4
Authors
Li, Jiatong [1]
Liu, Yunqing [1]
Fan, Wenqi [1]
Wei, Xiao-Yong [1]
Liu, Hui [2]
Tang, Jiliang [2]
Li, Qing [1]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hung Hom, Hong Kong, Peoples R China
[2] Michigan State Univ, E Lansing, MI 48824 USA
Keywords
Task analysis; Chatbots; Chemicals; Training; Recurrent neural networks; Computer architecture; Atoms; Drug discovery; large language models (LLMs); in-context learning; retrieval augmented generation
DOI
10.1109/TKDE.2024.3393356
CLC number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Molecule discovery plays a crucial role in various scientific fields, advancing the design of tailored materials and drugs and contributing to the development of society and human well-being. In particular, molecule-caption translation is an important task for molecule discovery, aligning human understanding with molecular space. However, most existing methods heavily rely on domain experts, require excessive computational cost, or suffer from sub-optimal performance. On the other hand, Large Language Models (LLMs), such as ChatGPT, have shown remarkable performance in various cross-modal tasks due to their powerful capabilities in natural language understanding, generalization, and in-context learning (ICL), which provides unprecedented opportunities to advance molecule discovery. Although several previous works have tried to apply LLMs to this task, the lack of a domain-specific corpus and the difficulty of training specialized LLMs remain challenges. In this work, we propose a novel LLM-based framework (MolReGPT) for molecule-caption translation, in which an In-Context Few-Shot Molecule Learning paradigm is introduced to empower molecule discovery with LLMs like ChatGPT through their in-context learning capability, without domain-specific pre-training or fine-tuning. MolReGPT leverages the principle of molecular similarity to retrieve similar molecules and their text descriptions from a local database, enabling LLMs to learn the task knowledge from context examples. We evaluate the effectiveness of MolReGPT on molecule-caption translation, including molecule understanding and text-based molecule generation. Experimental results show that, compared to fine-tuned models, MolReGPT outperforms MolT5-base and is comparable to MolT5-large without additional training. To the best of our knowledge, MolReGPT is the first work to leverage LLMs via in-context learning for molecule-caption translation to advance molecule discovery. Our work expands the scope of LLM applications and provides a new paradigm for molecule discovery and design.
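The abstract describes retrieving molecules similar to a query, together with their text descriptions, from a local database to serve as few-shot context examples for an LLM. The following is a minimal, dependency-free sketch of that retrieval-augmented prompting loop, not the paper's actual implementation: it uses Tanimoto similarity over SMILES character bigrams as a stand-in for real chemical fingerprints (e.g. Morgan/ECFP via RDKit), and the function names and toy database are illustrative assumptions.

```python
# Sketch of similarity-based retrieval for in-context few-shot prompting,
# in the spirit of MolReGPT. SMILES character bigrams stand in for real
# chemical fingerprints so the example stays dependency-free.

def ngrams(smiles, n=2):
    """Feature set of character n-grams of a SMILES string."""
    return {smiles[i:i + n] for i in range(len(smiles) - n + 1)}

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def retrieve_examples(query_smiles, database, k=2):
    """Return the k database entries most similar to the query molecule."""
    q = ngrams(query_smiles)
    scored = sorted(database,
                    key=lambda entry: tanimoto(q, ngrams(entry["smiles"])),
                    reverse=True)
    return scored[:k]

def build_prompt(query_smiles, examples):
    """Assemble a few-shot prompt from retrieved molecule-caption pairs."""
    shots = "\n".join(
        f"Molecule: {e['smiles']}\nCaption: {e['caption']}" for e in examples)
    return f"{shots}\nMolecule: {query_smiles}\nCaption:"

# Toy local database of molecule-caption pairs (illustrative only).
db = [
    {"smiles": "CCO", "caption": "Ethanol, a simple primary alcohol."},
    {"smiles": "CCCO", "caption": "1-Propanol, a three-carbon alcohol."},
    {"smiles": "c1ccccc1", "caption": "Benzene, an aromatic hydrocarbon."},
]

# The two alcohols are retrieved as context; the prompt ends with the
# query molecule, ready to be sent to an LLM for caption generation.
prompt = build_prompt("CCCCO", retrieve_examples("CCCCO", db, k=2))
print(prompt)
```

In the paper's setting the final prompt would be sent to ChatGPT; no model weights are updated, which is why the approach needs neither domain-specific pre-training nor fine-tuning.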
Pages: 6071-6083 (13 pages)
Related Papers
50 items in total

  • [1] Large language models (LLM) and ChatGPT: a medical student perspective
    Arachchige, Arosh S. Perera Molligoda S.
    European Journal of Nuclear Medicine and Molecular Imaging, 2023, 50 (08): 2248-2249
  • [2] ChatGPT and large language models in gastroenterology
    Sharma, Prateek
    Parasa, Sravanthi
    Nature Reviews Gastroenterology & Hepatology, 2023, 20 (08): 481-482
  • [3] Artificial intelligence enabled ChatGPT and large language models in drug target discovery, drug discovery, and development
    Chakraborty, Chiranjib
    Bhattacharya, Manojit
    Lee, Sang-Soo
    Molecular Therapy-Nucleic Acids, 2023, 33: 866-868
  • [4] Empowering Users with ChatGPT and Similar Large Language Models (LLMs): Everyday Information Needs, Uses, and Gratification
    Ju, Boryung
    Stewart, J. Brenton
    Proceedings of the Association for Information Science and Technology, 2024, 61 (01): 172-182
  • [5] PEER: Empowering Writing with Large Language Models
    Sessler, Kathrin
    Xiang, Tao
    Bogenrieder, Lukas
    Kasneci, Enkelejda
    Responsive and Sustainable Educational Futures (EC-TEL 2023), 2023, 14200: 755-761
  • [6] Large Language Models (LLMs) and ChatGPT for Biomedicine
    Arighi, Cecilia
    Brenner, Steven
    Lu, Zhiyong
    Biocomputing 2024 (PSB 2024), 2024: 641-644
  • [7] Large Language Models Meet Open-World Intent Discovery and Recognition: An Evaluation of ChatGPT
    Song, Xiaoshuai
    He, Keqing
    Wang, Pei
    Dong, Guanting
    Mou, Yutao
    Wang, Jingang
    Xiang, Yunsen
    Cai, Xunliang
    Xu, Weiran
    2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 10291-10304
  • [8] From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery
    Chen, Yuhan
    Xi, Nuwa
    Du, Yanrui
    Wang, Haochun
    Chen, Jianyu
    Zhao, Sendong
    Qin, Bing
    Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024, 38 (20): 21958-21966