Keeping Consistency of Sentence Generation and Document Classification with Multi-Task Learning

Cited by: 0
|
Authors
Nishino, Toru [1 ]
Misawa, Shotaro [1 ]
Kano, Ryuji [1 ]
Taniguchi, Tomoki [1 ]
Miura, Yasuhide [1 ]
Ohkuma, Tomoko [1 ]
Affiliation
[1] Fuji Xerox Co Ltd, Tokyo, Japan
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Automatically generating information that characterizes an article, such as headlines, key phrases, summaries, and categories, helps writers reduce their workload. Previous research has tackled these tasks with neural abstractive summarization and classification methods. However, the outputs may be inconsistent with one another if they are generated individually. The purpose of our study is to generate multiple outputs consistently. We introduce a multi-task learning model with a shared encoder and multiple decoders, one for each task. We propose a novel loss function, called the hierarchical consistency loss, to maintain consistency among the attention weights of the decoders. To evaluate consistency, we conduct a human evaluation. The results show that our model generates more consistent headlines, key phrases, and categories. In addition, our model outperforms the baseline model on ROUGE scores and generates more adequate and fluent headlines.
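The abstract only sketches the architecture, so the following is a minimal, hypothetical PyTorch sketch of the general idea: a shared encoder, separate attentional decoders for the generation tasks, a classifier for the category, and a penalty that ties the decoders' attention distributions together. The module names, dimensions, and the mean-squared-error penalty are illustrative assumptions; the paper's actual hierarchical consistency loss is not specified in this record.

```python
# Minimal sketch (not the authors' code) of a multi-task model with a shared
# encoder, one attentional decoder per generation task, a document classifier,
# and a consistency penalty over the decoders' attention weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                              # src: (batch, src_len)
        enc_states, last = self.rnn(self.embed(src))
        return enc_states, last                          # (batch, src_len, hid), (1, batch, hid)


class AttnDecoder(nn.Module):
    """One decoder per generation task (e.g. headline, key phrases)."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt, enc_states, hidden):          # tgt: (batch, tgt_len), teacher forcing
        logits, attns = [], []
        for t in range(tgt.size(1)):
            emb = self.embed(tgt[:, t:t + 1])                        # (batch, 1, emb)
            scores = torch.bmm(enc_states, hidden[-1].unsqueeze(2))  # (batch, src_len, 1)
            attn = F.softmax(scores.squeeze(2), dim=1)               # (batch, src_len)
            ctx = torch.bmm(attn.unsqueeze(1), enc_states)           # (batch, 1, hid)
            out, hidden = self.rnn(torch.cat([emb, ctx], dim=2), hidden)
            logits.append(self.out(out))
            attns.append(attn)
        # (batch, tgt_len, vocab), (batch, tgt_len, src_len)
        return torch.cat(logits, dim=1), torch.stack(attns, dim=1)


class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size, num_classes, hid_dim=256):
        super().__init__()
        self.encoder = SharedEncoder(vocab_size, hid_dim=hid_dim)
        self.headline_dec = AttnDecoder(vocab_size, hid_dim=hid_dim)
        self.keyphrase_dec = AttnDecoder(vocab_size, hid_dim=hid_dim)
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, src, headline, keyphrase):
        enc_states, last = self.encoder(src)
        h_logits, h_attn = self.headline_dec(headline, enc_states, last)
        k_logits, k_attn = self.keyphrase_dec(keyphrase, enc_states, last)
        cls_logits = self.classifier(last.squeeze(0))
        # Consistency penalty: encourage the two decoders to attend to similar
        # source positions (averaged over decoding steps). This MSE term is only
        # a simplified stand-in for the paper's hierarchical consistency loss.
        consistency = F.mse_loss(h_attn.mean(dim=1), k_attn.mean(dim=1))
        return h_logits, k_logits, cls_logits, consistency


# Tiny smoke test with random token ids (hypothetical vocabulary and shapes).
model = MultiTaskModel(vocab_size=5000, num_classes=10)
src = torch.randint(0, 5000, (2, 40))        # source article tokens
headline = torch.randint(0, 5000, (2, 12))   # gold headline tokens (teacher forcing)
keyphrase = torch.randint(0, 5000, (2, 6))   # gold key-phrase tokens
h_logits, k_logits, cls_logits, consistency = model(src, headline, keyphrase)
```

A full training objective would presumably combine per-task cross-entropy losses for the decoders and the classifier with a weighted consistency term; the weighting scheme here is likewise an assumption, not the paper's reported setup.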
Pages: 3195-3205
Page count: 11
Related papers
50 records in total
  • [31] Multi-task learning for classification with Dirichlet process priors
    Xue, Ya
    Liao, Xuejun
    Carin, Lawrence
    Krishnapuram, Balaji
    JOURNAL OF MACHINE LEARNING RESEARCH, 2007, 8: 35-63
  • [32] Multi-Task Learning with Language Modeling for Question Generation
    Zhou, Wenjie
    Zhang, Minghua
    Wu, Yunfang
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019: 3394-3399
  • [33] Binaural Audio Generation via Multi-task Learning
    Li, Sijia
    Liu, Shiguang
    Manocha, Dinesh
    ACM TRANSACTIONS ON GRAPHICS, 2021, 40 (06)
  • [34] Dataset for modulation classification and signal type classification for multi-task and single task learning
    Jagannath, Anu
    Jagannath, Jithin
    COMPUTER NETWORKS, 2021, 199
  • [35] Usr-mtl: an unsupervised sentence representation learning framework with multi-task learning
    Xu, Wenshen
    Li, Shuangyin
    Lu, Yonghe
    APPLIED INTELLIGENCE, 2021, 51: 3506-3521
  • [36] Usr-mtl: an unsupervised sentence representation learning framework with multi-task learning
    Xu, Wenshen
    Li, Shuangyin
    Lu, Yonghe
    APPLIED INTELLIGENCE, 2021, 51 (06): 3506-3521
  • [37] Multi-task learning with cross-task consistency for improved depth estimation in colonoscopy
    Chavarrias Solano, Pedro Esteban
    Bulpitt, Andrew
    Subramanian, Venkataraman
    Ali, Sharib
    MEDICAL IMAGE ANALYSIS, 2025, 99
  • [38] Multi-modal microblog classification via multi-task learning
    Zhao, Sicheng
    Yao, Hongxun
    Zhao, Sendong
    Jiang, Xuesong
    Jiang, Xiaolei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2016, 75: 8921-8938
  • [39] Dermoscopic attributes classification using deep learning and multi-task learning
    Saitov, Irek
    Polevaya, Tatyana
    Filchenkov, Andrey
    9TH INTERNATIONAL YOUNG SCIENTISTS CONFERENCE IN COMPUTATIONAL SCIENCE, YSC2020, 2020, 178: 328-336
  • [40] Multi-modal microblog classification via multi-task learning
    Zhao, Sicheng
    Yao, Hongxun
    Zhao, Sendong
    Jiang, Xuesong
    Jiang, Xiaolei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2016, 75 (15): 8921-8938