Traditional Chinese Medicine Symptom Normalization Approach Based on Pre-Trained Language Models

Cited by: 0
Authors
Xie, Yonghong [1 ,2 ]
Tao, Hu [1 ,2 ]
Jia, Qi [1 ,2 ]
Yang, Shibing [1 ,2 ]
Han, Xinliang [2 ]
Affiliations
[1] School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
[2] Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China
Keywords
Entity matching - Language model - Literals - Multi-label text classification - Normalisation - Pre-trained language model - Semantic classification - Symptom normalization - Text classification models - Traditional Chinese Medicine
DOI
10.13190/j.jbupt.2021-191
Pages: 13-18
Related Papers
50 records in total
  • [1] Pre-trained language models in medicine: A survey *
    Luo, Xudong
    Deng, Zhiqi
    Yang, Binxia
    Luo, Michael Y.
    [J]. ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 154
  • [2] A complex network approach to analyse pre-trained language models for ancient Chinese
    Zheng, Jianyu
    Xiao, Xin'ge
    [J]. ROYAL SOCIETY OPEN SCIENCE, 2024, 11 (05):
  • [3] Revisiting Pre-trained Models for Chinese Natural Language Processing
    Cui, Yiming
    Che, Wanxiang
    Liu, Ting
    Qin, Bing
    Wang, Shijin
    Hu, Guoping
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 657 - 668
  • [4] Pre-Trained Language Models and Their Applications
    Wang, Haifeng
    Li, Jiwei
    Wu, Hua
    Hovy, Eduard
    Sun, Yu
    [J]. ENGINEERING, 2023, 25 : 51 - 65
  • [5] Improving Braille-Chinese translation with jointly trained and pre-trained language models
    Huang, Tianyuan
    Su, Wei
    Liu, Lei
    Cai, Chuan
    Yu, Hailong
    Yuan, Yongna
    [J]. DISPLAYS, 2024, 82
  • [6] A Data Cartography based MixUp for Pre-trained Language Models
    Park, Seo Yeon
    Caragea, Cornelia
    [J]. NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 4244 - 4250
  • [7] Pre-trained transformer-based language models for Sundanese
    Wongso, Wilson
    Lucky, Henry
    Suhartono, Derwin
    [J]. JOURNAL OF BIG DATA, 2022, 9 (01)
  • [8] A Transformer Based Approach To Detect Suicidal Ideation Using Pre-Trained Language Models
    Haque, Farsheed
    Nur, Ragib Un
    Al Jahan, Shaeekh
    Mahmud, Zarar
    Shah, Faisal Muhammad
    [J]. 2020 23RD INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY (ICCIT 2020), 2020
  • [9] Annotating Columns with Pre-trained Language Models
    Suhara, Yoshihiko
    Li, Jinfeng
    Li, Yuliang
    Zhang, Dan
    Demiralp, Cagatay
    Chen, Chen
    Tan, Wang-Chiew
    [J]. PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA (SIGMOD '22), 2022, : 1493 - 1503