Empowering Molecule Discovery for Molecule-Caption Translation With Large Language Models: A ChatGPT Perspective

Cited by: 4
Authors
Li, Jiatong [1]
Liu, Yunqing [1]
Fan, Wenqi [1]
Wei, Xiao-Yong [1]
Liu, Hui [2]
Tang, Jiliang [2]
Li, Qing [1]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hung Hom, Hong Kong, Peoples R China
[2] Michigan State Univ, E Lansing, MI 48824 USA
Keywords
Task analysis; Chatbots; Chemicals; Training; Recurrent neural networks; Computer architecture; Atoms; Drug discovery; large language models (LLMs); in-context learning; retrieval augmented generation
DOI
10.1109/TKDE.2024.3393356
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Molecule discovery plays a crucial role in various scientific fields, advancing the design of tailored materials and drugs and thereby contributing to the development of society and human well-being. In particular, molecule-caption translation is an important task for molecule discovery, as it aligns human understanding with molecular space. However, most existing methods rely heavily on domain experts, incur excessive computational cost, or suffer from sub-optimal performance. Meanwhile, Large Language Models (LLMs) such as ChatGPT have shown remarkable performance on various cross-modal tasks owing to their powerful capabilities in natural language understanding, generalization, and in-context learning (ICL), which offers unprecedented opportunities to advance molecule discovery. Although several previous works have attempted to apply LLMs to this task, the lack of a domain-specific corpus and the difficulty of training specialized LLMs remain challenges. In this work, we propose MolReGPT, a novel LLM-based framework for molecule-caption translation that introduces an In-Context Few-Shot Molecule Learning paradigm, empowering molecule discovery with LLMs like ChatGPT through their in-context learning capability and without domain-specific pre-training or fine-tuning. MolReGPT leverages the principle of molecular similarity to retrieve similar molecules and their text descriptions from a local database, enabling LLMs to learn the task from context examples. We evaluate the effectiveness of MolReGPT on molecule-caption translation, covering both molecule understanding (molecule-to-caption) and text-based molecule generation (caption-to-molecule). Experimental results show that, without any additional training, MolReGPT outperforms the fine-tuned MolT5-base and is comparable to MolT5-large. To the best of our knowledge, MolReGPT is the first work to leverage LLMs via in-context learning for molecule-caption translation, advancing molecule discovery. Our work expands the scope of LLM applications and provides a new paradigm for molecule discovery and design.
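To make the retrieval-based in-context learning step concrete, the sketch below shows one plausible rendering of it in Python. This is an illustrative assumption, not the paper's implementation: it measures molecular similarity as Tanimoto similarity over RDKit Morgan fingerprints (MolReGPT's actual retrieval strategy may differ), substitutes a toy three-entry list for a real molecule-caption database such as ChEBI-20, and omits the ChatGPT API call that would consume the assembled prompt.

```python
# Minimal sketch of retrieval-based few-shot prompting for molecule
# captioning, in the spirit of MolReGPT. Assumptions (not from the paper):
# Tanimoto similarity on Morgan fingerprints as the retrieval metric and a
# toy in-memory "database"; the LLM call itself is omitted.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Toy stand-in for a local molecule-caption database.
DATABASE = [
    ("CCO", "Ethanol is a primary alcohol used as a solvent."),
    ("CC(=O)O", "Acetic acid is a simple carboxylic acid."),
    ("c1ccccc1", "Benzene is the simplest aromatic hydrocarbon."),
]

def fingerprint(smiles):
    """Morgan fingerprint (radius 2, 2048 bits) of a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def retrieve_examples(query_smiles, k=2):
    """Return the k database entries most similar to the query molecule."""
    query_fp = fingerprint(query_smiles)
    scored = [
        (DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi)), smi, cap)
        for smi, cap in DATABASE
    ]
    scored.sort(reverse=True)  # highest similarity first
    return [(smi, cap) for _, smi, cap in scored[:k]]

def build_prompt(query_smiles, k=2):
    """Assemble a few-shot prompt from retrieved (molecule, caption) pairs."""
    lines = ["You are a chemistry expert. Describe the given molecule.", ""]
    for smi, cap in retrieve_examples(query_smiles, k):
        lines += [f"Molecule: {smi}", f"Caption: {cap}", ""]
    lines += [f"Molecule: {query_smiles}", "Caption:"]
    return "\n".join(lines)

print(build_prompt("CCCO"))  # 1-propanol; pass the prompt to an LLM next
```

The same retrieval-then-prompt pattern would apply in the reverse (caption-to-molecule) direction, with similar captions retrieved by text similarity instead of fingerprints.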
Pages: 6071-6083
Page count: 13
Related Papers
50 records in total
  • [21] From the editor's office: empowering authors and protecting science communication - integrating ChatGPT and large language models ethically
    Ramos-Remus, Cesar
    Pontefract, Wendy J.
    Adebajo, Adewale
    CLINICAL RHEUMATOLOGY, 2024: 1-5
  • [22] ChatGPT/GPT-4 (large language models): Opportunities and challenges of perspective in bariatric healthcare professionals
    Law, Saikam
    Oldfield, Brian
    Yang, Wah
    OBESITY REVIEWS, 2024, 25 (07)
  • [23] Translating classical Arabic verse: human translation vs. AI large language models (Gemini and ChatGPT)
    Farghal, Mohammed
    Haider, Ahmad S.
    COGENT SOCIAL SCIENCES, 2024, 10 (01)
  • [24] Large language models open new way of AI-assisted molecule design for chemists
    Ishida, Shoichi
    Sato, Tomohiro
    Honma, Teruki
    Terayama, Kei
    JOURNAL OF CHEMINFORMATICS, 2025, 17 (01)
  • [25] Empowering Time Series Analysis with Large Language Models: A Survey
    Jiang, Yushan
    Pan, Zijie
    Zhang, Xikun
    Garg, Sahil
    Schneider, Anderson
    Nevmyvaka, Yuriy
    Song, Dongjin
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024: 8095-8103
  • [26] AtomTool: Empowering Large Language Models with Tool Utilization Skills
    Li, Yongle
    Zhang, Zheng
    Zhang, Junqi
    Hu, Wenbo
    Wu, Yongyu
    Hong, Richang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT 1, 2025, 15031: 323-337
  • [27] Large language models (ChatGPT) in medical education: Embrace or abjure?
    Luke, Nathasha
    Taneja, Reshma
    Ban, Kenneth
    Samarasekera, Dujeepa
    Yap, Celestial T.
    ASIA PACIFIC SCHOLAR, 2023, 8 (04): 50-52
  • [28] The Security of Using Large Language Models: A Survey with Emphasis on ChatGPT
    Zhou, Wei
    Zhu, Xiaogang
    Han, Qing-Long
    Li, Lin
    Chen, Xiao
    Wen, Sheng
    Xiang, Yang
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2025, 12 (01): 1-26
  • [29] PointLLM: Empowering Large Language Models to Understand Point Clouds
    Xu, Runsen
    Wang, Xiaolong
    Wang, Tai
    Chen, Yilun
    Pang, Jiangmiao
    Lin, Dahua
    COMPUTER VISION - ECCV 2024, PT XXV, 2025, 15083: 131-147
  • [30] Assisting Static Analysis with Large Language Models: A ChatGPT Experiment
    Li, Haonan
    Hao, Yu
    Zhai, Yizhuo
    Qian, Zhiyun
    PROCEEDINGS OF THE 31ST ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2023, 2023: 2107-2111