The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports

Cited: 0
Authors
Kanemaru, Noriko [1]
Yasaka, Koichiro [1]
Fujita, Nana [1]
Kanzawa, Jun [1]
Abe, Osamu [1]
Affiliations
[1] Univ Tokyo, Grad Sch Med, Dept Radiol, 7-3-1 Hongo Bunkyo-Ku, Tokyo 1138655, Japan
Keywords
Large language model; Bone metastasis; Deep learning; Disease
DOI
10.1007/s10278-024-01242-3
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
Early detection of impending bone metastasis is crucial for improving prognosis. This study investigated the feasibility of a fine-tuned, locally run large language model (LLM) for identifying patients with bone metastasis from unstructured Japanese radiology reports and compared its performance with manual annotation. This retrospective study included patients whose radiology reports contained the word "metastasis" (April 2018-January 2019, August-May 2022, and April-December 2023 for the training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of each report (used as input data) and classified it into group 0 (no bone metastasis), group 1 (progressive bone metastasis), or group 2 (stable or decreased bone metastasis). Because of class imbalance, group 0 was under-sampled in the training and test datasets. The model that performed best on the validation set was then evaluated on the test dataset. Two additional radiologists (readers 1 and 2) also classified the reports in the test dataset for comparison. On the under-sampled test dataset (n = 711), the fine-tuned LLM, reader 1, and reader 2 achieved accuracies of 0.979, 0.996, and 0.993; sensitivities for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954; and classification times of 105, 2312, and 3094 s, respectively. The fine-tuned LLM thus identified patients with bone metastasis with performance comparable to, or slightly below, manual annotation by radiologists, in a markedly shorter time.
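As a rough illustration of the workflow the abstract describes (three-class classification of report text, under-sampling of the majority group, and evaluation by accuracy and per-group sensitivity), the following is a minimal sketch using the Hugging Face transformers Trainer API. The model name, CSV file names, column names, under-sampling ratio, and hyperparameters are illustrative assumptions; the authors' actual fine-tuned LLM and configuration are not reproduced here.

```python
# Minimal sketch (not the authors' code): fine-tune a pretrained Japanese
# encoder for 3-class report classification with group-0 under-sampling,
# then report accuracy and per-group sensitivity. Model name, file names,
# column names, and hyperparameters are assumptions for illustration.
import pandas as pd
import torch
from sklearn.metrics import accuracy_score, recall_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "cl-tohoku/bert-base-japanese-v3"  # assumed stand-in model

def undersample_group0(df: pd.DataFrame, ratio: float = 1.0) -> pd.DataFrame:
    """Keep all group-1/2 reports; randomly drop group-0 reports so group 0
    is at most `ratio` times the size of groups 1 and 2 combined."""
    pos = df[df["label"] != 0]
    neg = df[df["label"] == 0]
    keep = neg.sample(n=min(len(neg), int(ratio * len(pos))), random_state=0)
    return pd.concat([pos, keep]).sample(frac=1.0, random_state=0)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=3)

class ReportDataset(torch.utils.data.Dataset):
    """Wraps tokenized 'clinical indication + diagnosis' text and labels."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(list(texts), truncation=True, padding=True,
                             max_length=512)
        self.labels = list(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Hypothetical CSVs with "text" and "label" (0/1/2) columns.
train_df = undersample_group0(pd.read_csv("train_reports.csv"))
test_df = pd.read_csv("test_reports.csv")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=ReportDataset(train_df["text"], train_df["label"]),
)
trainer.train()

# Accuracy over all reports; "sensitivity" per group is per-class recall.
pred = trainer.predict(ReportDataset(test_df["text"], test_df["label"]))
y_pred = pred.predictions.argmax(axis=-1)
print("accuracy:", accuracy_score(test_df["label"], y_pred))
print("sensitivity (groups 0/1/2):",
      recall_score(test_df["label"], y_pred, average=None))
```

Under-sampling group 0 mirrors the paper's handling of class imbalance, and recall_score with average=None yields one recall value per group, matching the per-group sensitivities reported in the abstract.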
Pages: 865-872 (8 pages)
Related Papers (50 in total)
  • [1] Extracting structured data from organic synthesis procedures using a fine-tuned large language model
    Ai, Qianxiang
    Meng, Fanwang
    Shi, Jiale
    Pelkie, Brenden
    Coley, Connor W.
    DIGITAL DISCOVERY, 2024, 3(9): 1822-1831
  • [2] Fine-Tuned Large Language Model for Extracting Patients on Pretreatment for Lung Cancer from a Picture Archiving and Communication System Based on Radiological Reports
    Yasaka, Koichiro
    Kanzawa, Jun
    Kanemaru, Noriko
    Koshino, Saori
    Abe, Osamu
    JOURNAL OF IMAGING INFORMATICS IN MEDICINE, 2024: 327-334
  • [3] CentralBankRoBERTa: A fine-tuned large language model for central bank communications
    Pfeifer, Moritz
    Marohl, Vincent P.
    JOURNAL OF FINANCE AND DATA SCIENCE, 2023, 9
  • [4] Automated classification of brain MRI reports using fine-tuned large language models
    Kanzawa, Jun
    Yasaka, Koichiro
    Fujita, Nana
    Fujiwara, Shin
    Abe, Osamu
    NEURORADIOLOGY, 2024: 2177-2183
  • [5] Need of Fine-Tuned Radiology Aware Open-Source Large Language Models for Neuroradiology
    Ray, Partha Pratim
    CLINICAL NEURORADIOLOGY, 2024
  • [6] Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks
    Luo, Ling
    Ning, Jinzhong
    Zhao, Yingwen
    Wang, Zhijun
    Ding, Zeyuan
    Chen, Peng
    Fu, Weiru
    Han, Qinyu
    Xu, Guangtao
    Qiu, Yunzhi
    Pan, Dinghao
    Li, Jiru
    Li, Hao
    Feng, Wenduo
    Tu, Senbo
    Liu, Yuqi
    Yang, Zhihao
    Wang, Jian
    Sun, Yuanyuan
    Lin, Hongfei
    JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 2024, 31(9): 1865-1874
  • [7] Understanding language-elicited EEG data by predicting it from a fine-tuned language model
    Schwartz, Dan
    Mitchell, Tom
    2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), Vol. 1, 2019: 43-57
  • [9] EpilepsyLLM: Domain-Specific Large Language Model Fine-tuned with Epilepsy Medical Knowledge
    Zhao, Xuyang
    Zhao, Qibin
    Tanaka, Toshihisa
    arXiv
  • [10] Website Category Classification Using Fine-tuned BERT Language Model
    Demirkiran, Ferhat
    Cayir, Aykut
    Unal, Ugur
    Dag, Hasan
    2020 5TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND ENGINEERING (UBMK), 2020: 333-336