A comprehensive survey of large language models and multimodal large models in medicine

Cited by: 1
Authors
Xiao, Hanguang [1 ]
Zhou, Feizhong [1 ]
Liu, Xingyue [1 ]
Liu, Tianqi [1 ]
Li, Zhipeng [1 ]
Liu, Xin [1 ]
Huang, Xiaoxuan [1 ]
Affiliations
[1] Chongqing Univ Technol, Sch Artificial Intelligence, Chongqing 401120, Peoples R China
Keywords
Large language model; Multimodal large language model; Medicine; Healthcare; Clinical application; ARTIFICIAL-INTELLIGENCE; LIMITS;
DOI
10.1016/j.inffus.2024.102888
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Since the release of ChatGPT and GPT-4, large language models (LLMs) and multimodal large language models (MLLMs) have attracted widespread attention for their exceptional capabilities in understanding, reasoning, and generation, introducing transformative paradigms for integrating artificial intelligence into medicine. This survey provides a comprehensive overview of the development, principles, application scenarios, challenges, and future directions of LLMs and MLLMs in medicine. Specifically, it begins by examining the paradigm shift, tracing the transition from traditional models to LLMs and MLLMs, and highlighting the unique advantages of these models in medical applications. Next, the survey reviews existing medical LLMs and MLLMs, providing detailed guidance on their construction and evaluation in a clear and systematic manner. Subsequently, to underscore the substantial value of LLMs and MLLMs in healthcare, the survey explores five promising applications in the field. Finally, the survey addresses the challenges confronting medical LLMs and MLLMs and proposes practical strategies and future directions for their integration into medicine. In summary, this survey offers a comprehensive analysis of the technical methodologies and practical clinical applications of medical LLMs and MLLMs, with the goal of bridging the gap between these advanced technologies and clinical practice, thereby fostering the evolution of the next generation of intelligent healthcare systems.
Pages: 26