Classification of Traditional Chinese Medicine Cases based on Character-level Bert and Deep Learning

Cited: 0
Authors
Song, Zihao [1 ]
Xie, Yonghong [1 ]
Huang, Wen [1 ]
Wang, Haoyu [1 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing, Peoples R China
Keywords
Text Classification; Medical Cases; Bert; Convolutional Neural Network; Recurrent Neural Network;
DOI
10.1109/itaic.2019.8785612
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As part of China's traditional culture, Traditional Chinese Medicine (TCM) has received increasing attention. TCM medical cases, a valuable asset inherited from ancient times, carry the core knowledge of TCM. Accurate medical case classification is essential both for establishing a correct diagnosis-and-treatment system and for building medical assistance systems. This paper proposes a new model to classify medical cases effectively. First, a multi-layer semantic expansion method is used to enrich the semantic information of TCM medical cases at the instance layer and attribute layer. Then, a character-level Bidirectional Encoder Representations from Transformers (Bert) model serves as the language model for text representation of the medical cases, and its output is used as the input of the deep learning models. Finally, an optimized Text-Convolutional Neural Network (Text-CNN) model classifies the TCM medical cases, and the reliability and accuracy of the whole model are verified through comparison with other text representation and classification methods.
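The classification stage described above can be illustrated with a minimal NumPy sketch of a Text-CNN forward pass over character embeddings. Random weights stand in for the trained parameters, and the embedding table stands in for the character-level Bert representations; all sizes, variable names, and the function `text_cnn_forward` are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (the paper's real dimensions are not given here).
VOCAB = 100        # character vocabulary size
EMB = 32           # embedding dim; a character-level Bert output would replace this
NUM_CLASSES = 4    # TCM medical case categories
KERNELS = [2, 3, 4]  # convolution window widths over characters
FILTERS = 8        # filters per window width

emb_table = rng.normal(size=(VOCAB, EMB))
conv_w = {k: rng.normal(size=(FILTERS, k, EMB)) * 0.1 for k in KERNELS}
fc_w = rng.normal(size=(FILTERS * len(KERNELS), NUM_CLASSES)) * 0.1

def text_cnn_forward(char_ids):
    """Classify one sequence of character ids; returns class probabilities."""
    x = emb_table[char_ids]                       # (seq_len, EMB)
    feats = []
    for k, w in conv_w.items():
        # All width-k character windows ("valid" convolution over the sequence).
        windows = np.stack([x[i:i + k] for i in range(len(char_ids) - k + 1)])
        act = np.maximum(0.0, np.einsum('nke,fke->nf', windows, w))  # ReLU
        feats.append(act.max(axis=0))             # max-over-time pooling
    h = np.concatenate(feats)                     # (FILTERS * len(KERNELS),)
    logits = h @ fc_w
    p = np.exp(logits - logits.max())             # numerically stable softmax
    return p / p.sum()

probs = text_cnn_forward(rng.integers(0, VOCAB, size=20))
```

In the paper's pipeline, the embedding lookup would be replaced by contextual character vectors from the character-level Bert model, and the weights would be learned rather than random.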
Pages: 1383-1387
Number of pages: 5
Related Papers
50 records
  • [1] Chinese Sentiment Analysis Based on Lightweight Character-Level BERT
    Tang, Fuhong
    Nongpong, Kwankamol
    2021 13TH INTERNATIONAL CONFERENCE ON KNOWLEDGE AND SMART TECHNOLOGY (KST-2021), 2021, : 27 - 32
  • [2] Chinese text classification based on character-level CNN and SVM
    Wu H.
    Li D.
    Cheng M.
    International Journal of Intelligent Information and Database Systems, 2019, 12 (03) : 212 - 228
  • [3] Text Classification and Transfer Learning Based on Character-Level Deep Convolutional Neural Networks
    Sato, Minato
    Orihara, Ryohei
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART 2017), 2018, 10839 : 62 - 81
  • [4] Japanese Text Classification by Character-level Deep ConvNets and Transfer Learning
    Sato, Minato
    Orihara, Ryohei
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    ICAART: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2, 2017, : 175 - 184
  • [5] Deep Graph-Based Character-Level Chinese Dependency Parsing
    Wu, Linzhi
    Zhang, Meishan
    IEEE/ACM Transactions on Audio Speech and Language Processing, 2021, 29 : 1329 - 1339
  • [7] Deep learning based Character-level approach for Morphological Inflection Generation
    Prasad, Vidya K.
    Premjith, B.
    Chandran, Chandni, V
    Soman, K. P.
    Poornachandran, Prabaharan
    PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING AND CONTROL SYSTEMS (ICCS), 2019, : 1423 - 1427
  • [8] Webshell Traffic Detection With Character-Level Features Based on Deep Learning
    Zhang, Hua
    Guan, Hongchao
    Yan, Hanbing
    Li, Wenmin
    Yu, Yuqi
    Zhou, Hao
    Zeng, Xingyu
    IEEE ACCESS, 2018, 6 : 75268 - 75277
  • [9] A Compact Encoding for Efficient Character-level Deep Text Classification
    Marinho, Wemerson
    Marti, Luis
    Sanchez-Pi, Nayat
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018,
  • [10] A CHINESE CHARACTER-LEVEL AND WORD-LEVEL COMPLEMENTARY TEXT CLASSIFICATION METHOD
    Chen, Wentong
    Fan, Chunxiao
    Wu, Yuexin
    Lou, Zhixiong
    2020 25TH INTERNATIONAL CONFERENCE ON TECHNOLOGIES AND APPLICATIONS OF ARTIFICIAL INTELLIGENCE (TAAI 2020), 2020, : 187 - 192