Edge-featured graph attention network with dependency features for causality detection of events

Cited: 1
Authors
Wei, Jianxiang [1 ]
Chen, Yuhang [2 ]
Han, Pu [1 ]
Zhu, Yunxia [2 ]
Huang, Weidong [1 ,3 ,4 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Management, Nanjing, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Internet Things, Nanjing, Peoples R China
[3] Nanjing Univ Posts & Telecommun, Key Res Base Philosophy & Social Sci Jiangsu Infor, Nanjing, Peoples R China
[4] Nanjing Univ Posts & Telecommun, Emergency Management Res Ctr, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
causality detection; dependency-directed graphs; Edge-featured Graph Attention Network; IDENTIFICATION;
DOI
10.1111/exsy.13332
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Causality detection is a significant task in the field of Natural Language Processing (NLP). As a more fine-grained task than causality extraction, it aims to detect the components that represent the cause and the effect in sentence-level texts containing causality. Previous research on causality detection has concentrated on text token features whilst ignoring the dependency attributes between tokens. In this paper, we propose a model that uses an Edge-featured Graph Attention Network based on dependency-directed graphs for the causality detection task. First, we convert texts with causality into dependency-directed graphs (DDG), which treat the dependency attributes between tokens as edge features. Then we use an Edge-featured Graph Attention Network to aggregate the node and edge features of the DDG. Finally, we feed the graph embeddings into a Bi-directional Long Short-Term Memory (BiLSTM) layer to learn the dependencies between forward and backward long-distance nodes in the DDG. Experiments on three datasets show that this method achieves better performance in precision, recall, and other evaluation metrics than competing methods.
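The core step described above — attention scores that depend on the dependency-relation edge features as well as the endpoint token embeddings — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the tiny three-token graph, the random weights, and the one-hot dependency features are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dependency-directed graph for "rain caused flooding":
# nodes 0 "rain", 1 "caused", 2 "flooding"; arcs run head -> dependent.
node_feats = rng.normal(size=(3, 4))   # token embeddings (dim 4)
edges = [(1, 0), (1, 2)]               # (src, dst) dependency arcs
edge_feats = np.eye(2)                 # one-hot dependency-relation features

W = rng.normal(size=(4, 4))            # node feature transform
We = rng.normal(size=(2, 4))           # edge feature transform
a = rng.normal(size=(3 * 4,))          # attention vector over [h_dst, h_src, e]

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def egat_layer(node_feats, edges, edge_feats):
    """One edge-featured graph attention step: each score mixes both
    endpoint embeddings and the dependency-relation edge feature."""
    h = node_feats @ W
    out = h.copy()                     # nodes without incoming arcs keep h
    for i in range(len(node_feats)):
        # incoming arcs of node i, as (source node, edge index) pairs
        nbrs = [(src, k) for k, (src, dst) in enumerate(edges) if dst == i]
        if not nbrs:
            continue
        scores = np.array([
            leaky_relu(a @ np.concatenate([h[i], h[j], edge_feats[k] @ We]))
            for j, k in nbrs
        ])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()           # softmax over incoming neighbours
        out[i] = sum(w * h[j] for w, (j, _) in zip(alpha, nbrs))
    return out

updated = egat_layer(node_feats, edges, edge_feats)
print(updated.shape)                   # per-node embeddings, shape (3, 4)
```

In the paper's pipeline, the output of such layers would then be fed, in token order, to a BiLSTM; here the attention weights over dependency arcs are the part specific to the edge-featured design.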
Pages: 17