Joint Multi-Label Attention Networks for Social Text Annotation

Cited: 0
|
Authors
Dong, Hang [1 ,2 ]
Wang, Wei [2 ]
Huang, Kaizhu [3 ]
Coenen, Frans [1 ]
Affiliations
[1] Univ Liverpool, Dept Comp Sci, Liverpool, Merseyside, England
[2] Xian Jiaotong Liverpool Univ, Dept Comp Sci & Software Engn, Xian, Peoples R China
[3] Xian Jiaotong Liverpool Univ, Dept Elect & Elect Engn, Xian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose a novel attention network for document annotation with user-generated tags. The network is designed according to human reading and annotation behaviour. Usually, users try to digest the title and obtain a rough idea about the topic first, and then read the content of the document. Existing research shows that the title metadata can largely affect social annotation. To better utilise this information, we design a framework that separates the title from the content of a document and apply a title-guided attention mechanism over each sentence in the content. We also propose two semantic-based loss regularisers that enforce the output of the network to conform to label semantics, i.e. similarity and subsumption. We analyse each part of the proposed system with two real-world open datasets on publication and question annotation. The integrated approach, Joint Multi-label Attention Network (JMAN), significantly outperformed the Bidirectional Gated Recurrent Unit (Bi-GRU) by around 13%-26% and the Hierarchical Attention Network (HAN) by around 4%-12% on both datasets, with around 10%-30% reduction of training time.
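The abstract describes two key components: a title-guided attention mechanism over the sentences of the document content, and semantic-based loss regularisers over the predicted labels. The sketch below is a minimal illustration of how such components might look in PyTorch; the class and function names, tensor shapes, and the particular regulariser formulation are assumptions made for illustration and are not taken from the authors' implementation.

```python
# Minimal sketch (assumed PyTorch formulation, not the authors' code):
# a title-guided attention layer over sentence encodings, plus one plausible
# similarity-based loss regulariser over label predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TitleGuidedAttention(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Project concatenated [sentence; title] features to attention scores.
        self.proj = nn.Linear(2 * hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, sent_vecs: torch.Tensor, title_vec: torch.Tensor) -> torch.Tensor:
        # sent_vecs: (batch, num_sentences, hidden_dim) sentence encodings (e.g. from a Bi-GRU)
        # title_vec: (batch, hidden_dim) encoding of the title
        title_expanded = title_vec.unsqueeze(1).expand_as(sent_vecs)
        u = torch.tanh(self.proj(torch.cat([sent_vecs, title_expanded], dim=-1)))
        alpha = F.softmax(self.score(u), dim=1)        # attention weights over sentences
        doc_vec = (alpha * sent_vecs).sum(dim=1)       # title-guided document vector
        return doc_vec


def similarity_regulariser(probs: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
    # Assumed formulation: penalise co-predicted label pairs that are not
    # semantically similar, given a precomputed label-similarity matrix S in [0, 1].
    # probs: (batch, num_labels) sigmoid outputs; S: (num_labels, num_labels)
    co_activation = probs.unsqueeze(2) * probs.unsqueeze(1)  # pairwise co-prediction strength
    return (co_activation * (1.0 - S)).mean()
```

In this sketch the regulariser would be added to the standard multi-label cross-entropy loss with a weighting coefficient; a subsumption regulariser could be formed analogously from a label-hierarchy matrix.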
Pages: 1348 - 1354
Number of pages: 7
Related papers
50 records in total
  • [31] Label-Related Adaptive Graph Construction Based on Attention for Multi-label Text Classification
    Zhou, Xiwen
    Xie, Xiaopeng
    Zhao, Chenlong
    Yao, Lei
    Li, Zhaoxia
    Zhang, Yong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IV, ICIC 2024, 2024, 14878 : 197 - 208
  • [33] Challenges & Approaches in Multi-label Image Annotation
    Kalaivani, A.
    Chitrakal, S.
    2013 FOURTH INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATIONS AND NETWORKING TECHNOLOGIES (ICCCNT), 2013,
  • [34] Text multi-label learning method based on label-aware attention and semantic dependency
    Liu, Baisong
    Liu, Xiaoling
    Ren, Hao
    Qian, Jiangbo
    Wang, YangYang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (05) : 7219 - 7237
  • [35] A Novel Model for Multi-label Image Annotation
    Wu, Xinjian
    Zhang, Li
    Li, Fanzhang
    Wang, Bangjun
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 1953 - 1958
  • [36] Multi-Label Dictionary Learning for Image Annotation
    Jing, Xiao-Yuan
    Wu, Fei
    Li, Zhiqiang
    Hu, Ruimin
    Zhang, David
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (06) : 2712 - 2725
  • [37] Multi-Label Text Classification Combining Bidirectional Attention and Contrast Enhancement Mechanism
    Li, Jiandong
    Fu, Jia
    Li, Jiaqi
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (16) : 105 - 115
  • [38] Multi-Label Text Classification Model Based on Multi-Level Constraint Augmentation and Label Association Attention
    Wei, Xiao
    Huang, Jianbao
    Zhao, Rui
    Yu, Hang
    Xu, Zheng
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2024, 23 (01)
  • [39] Label-Aware Text Representation for Multi-Label Text Classification
    Guo, Hao
    Li, Xiangyang
    Zhang, Lei
    Liu, Jia
    Chen, Wei
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7728 - 7732
  • [40] Multi-Label Classification of Historical Documents by Using Hierarchical Attention Networks
    Kim, Dong-Kyum
    Lee, Byunghwee
    Kim, Daniel
    Jeong, Hawoong
    JOURNAL OF THE KOREAN PHYSICAL SOCIETY, 2020, 76 (05) : 368 - 377