Towards Zero-Shot Conditional Summarization with Adaptive Multi-Task Fine-Tuning

Cited by: 0
Authors
Goodwin, Travis R. [1]
Savery, Max E. [1]
Demner-Fushman, Dina [1]
Affiliations
[1] NIH, US Natl Lib Med, Bethesda, MD 20892 USA
Funding
National Institutes of Health (USA)
Keywords
(none listed)
DOI
(not available)
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Automatic summarization research has traditionally focused on providing high-quality, general-purpose summaries of documents. However, many applications require more specific summaries, such as supporting question answering or topic-based literature discovery. In this paper, we study the problem of conditional summarization, in which content selection and surface realization are explicitly conditioned on an ad-hoc natural language question or topic description. Because of the difficulty of obtaining sufficient reference summaries to support arbitrary conditional summarization, we explore the use of multi-task fine-tuning (MTFT) on twenty-one natural language tasks to enable zero-shot conditional summarization on five tasks. We present four new summarization datasets, two novel "online" or adaptive task-mixing strategies, and report zero-shot performance using T5 and BART, demonstrating that MTFT can improve zero-shot summarization quality.
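The "online" or adaptive task-mixing idea from the abstract — adjusting how often each of the fine-tuning tasks is sampled during training based on a running training signal — can be sketched as below. This is an illustrative sketch only, not the authors' exact strategy: the loss-proportional weighting, the exponential-moving-average update, and all names here are assumptions for demonstration.

```python
import random


class AdaptiveTaskMixer:
    """Sample fine-tuning tasks with probability proportional to a running
    per-task loss estimate, so tasks the model currently finds harder are
    visited more often. (Illustrative; not the paper's exact method.)"""

    def __init__(self, tasks, smoothing=0.9):
        self.tasks = list(tasks)
        self.smoothing = smoothing              # EMA factor for loss updates
        self.loss = {t: 1.0 for t in self.tasks}  # uniform weights at start

    def sample(self, rng=random):
        # Normalize running losses into sampling probabilities.
        total = sum(self.loss.values())
        weights = [self.loss[t] / total for t in self.tasks]
        return rng.choices(self.tasks, weights=weights, k=1)[0]

    def update(self, task, new_loss):
        # Exponential moving average keeps weights responsive but stable.
        old = self.loss[task]
        self.loss[task] = self.smoothing * old + (1 - self.smoothing) * new_loss


mixer = AdaptiveTaskMixer(["summarization", "qa", "nli"])
mixer.update("qa", 5.0)   # QA is currently hard, so it gets sampled more often
task = mixer.sample()
```

In a training loop, `sample` would pick the task for the next batch and `update` would feed back that batch's loss, shifting the mixture toward under-performing tasks as training progresses.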
Pages: 12