MSA: Jointly Detecting Drug Name and Adverse Drug Reaction Mentioning Tweets with Multi-Head Self-Attention

Cited by: 5
Authors
Wu, Chuhan [1 ]
Wu, Fangzhao [2 ]
Yuan, Zhigang [1 ]
Liu, Junxin [1 ]
Huang, Yongfeng [1 ]
Xie, Xing [2 ]
Affiliations
[1] Tsinghua Univ, Elect Engn, Beijing, Peoples R China
[2] Microsoft Res Asia, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Twitter; Drug Name; Adverse Drug Reaction; Self-Attention; SOCIAL MEDIA; PHARMACOVIGILANCE; CLASSIFICATION;
DOI
10.1145/3289600.3290980
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Twitter is a popular social media platform for information sharing and dissemination. Many Twitter users post tweets to share their experiences with drugs and adverse drug reactions. Automatically detecting tweets that mention drug names and adverse drug reactions at large scale has important applications such as pharmacovigilance. However, this detection task is very challenging, because tweets are usually noisy and informal, and these mentions are riddled with misspellings and user-created abbreviations. In addition, the mentions are usually context dependent. In this paper, we propose a neural approach with a hierarchical tweet representation and a multi-head self-attention mechanism to jointly detect tweets mentioning drug names and adverse drug reactions. To alleviate the influence of misspellings and user-created abbreviations, we use a hierarchical tweet representation model that first learns word representations from characters and then learns tweet representations from words. In addition, we use a multi-head self-attention mechanism to capture the interactions between words and thereby fully model tweet contexts. We also apply an additive attention mechanism to select the informative words and learn more informative tweet representations. Experimental results validate the effectiveness of our approach.
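The two attention components named in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and parameter names (the paper's actual implementation, dimensions, and parameterization are not given in this record): multi-head self-attention over word vectors to capture word-word interactions, followed by additive attention to pool the words into a single tweet representation.

```python
# Hedged sketch: multi-head self-attention + additive attention pooling.
# All shapes, weights, and names here are illustrative assumptions,
# not the authors' actual model.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """X: (seq_len, d_model) word vectors. Returns (seq_len, d_model)."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Split projections into heads: (n_heads, seq_len, d_head)
    split = lambda M: M.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    # Scaled dot-product attention per head models word-word interactions.
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    out = softmax(scores) @ Vh          # (n_heads, seq_len, d_head)
    # Concatenate heads back into (seq_len, d_model).
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

def additive_attention_pool(H, v):
    """Pool word representations H (seq_len, d) into one tweet vector,
    weighting each word by alpha_i ∝ exp(v · tanh(h_i))."""
    alpha = softmax(np.tanh(H) @ v)     # (seq_len,) word importance weights
    return alpha @ H                    # (d,) attention-weighted tweet vector

rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 5, 8, 2
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
H = multi_head_self_attention(X, Wq, Wk, Wv, n_heads)
tweet_vec = additive_attention_pool(H, rng.standard_normal(d_model))
print(H.shape, tweet_vec.shape)
```

In the full model this tweet vector would feed a joint classification layer for the two detection tasks; the character-to-word representation stage described in the abstract is omitted here for brevity.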
Pages: 33-41
Page count: 9
Related Papers
50 records
  • [31] Modality attention fusion model with hybrid multi-head self-attention for video understanding
    Zhuang, Xuqiang
    Liu, Fang'ai
    Hou, Jian
    Hao, Jianhua
    Cai, Xiaohong
    PLOS ONE, 2022, 17 (10):
  • [32] A Multi-tab Webpage Fingerprinting Method Based on Multi-head Self-attention
    Xie, Lixia
    Li, Yange
    Yang, Hongyu
    Hu, Ze
    Wang, Peng
    Cheng, Xiang
    Zhang, Liang
    FRONTIERS IN CYBER SECURITY, FCS 2023, 2024, 1992 : 131 - 140
  • [33] Integration of Multi-Head Self-Attention and Convolution for Person Re-Identification
    Zhou, Yalei
    Liu, Peng
    Cui, Yue
    Liu, Chunguang
    Duan, Wenli
    SENSORS, 2022, 22 (16)
  • [34] A HYBRID TEXT NORMALIZATION SYSTEM USING MULTI-HEAD SELF-ATTENTION FOR MANDARIN
    Zhang, Junhui
    Pan, Junjie
    Yin, Xiang
    Li, Chen
    Liu, Shichao
    Zhang, Yang
    Wang, Yuxuan
    Ma, Zejun
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6694 - 6698
  • [35] Text summarization based on multi-head self-attention mechanism and pointer network
    Qiu, Dong
    Yang, Bing
    COMPLEX & INTELLIGENT SYSTEMS, 2022, 8 (01) : 555 - 567
  • [36] Lip Recognition Based on Bi-GRU with Multi-Head Self-Attention
    Ni, Ran
    Jiang, Haiyang
    Zhou, Lu
    Lu, Yuanyao
    ARTIFICIAL INTELLIGENCE APPLICATIONS AND INNOVATIONS, PT III, AIAI 2024, 2024, 713 : 99 - 110
  • [37] Chinese CNER Combined with Multi-head Self-attention and BiLSTM-CRF
    Luo X.
    Xia X.
    An Y.
    Chen X.
    Hunan Daxue Xuebao/Journal of Hunan University Natural Sciences, 2021, 48 (04): : 45 - 55
  • [38] Text summarization based on multi-head self-attention mechanism and pointer network
    Dong Qiu
    Bing Yang
    Complex & Intelligent Systems, 2022, 8 : 555 - 567
  • [39] Multi-Head Self-Attention for 3D Point Cloud Classification
    Gao, Xue-Yao
    Wang, Yan-Zhao
    Zhang, Chun-Xiang
    Lu, Jia-Qi
    IEEE Access, 2021, 9 : 18137 - 18147
  • [40] The sentiment analysis model with multi-head self-attention and Tree-LSTM
    Li Lei
    Pei Yijian
    Jin Chenyang
    SIXTH INTERNATIONAL WORKSHOP ON PATTERN RECOGNITION, 2021, 11913