共 20 条
- [1] See A, Liu P J, Manning C D., Get to the point: Summarization with pointer-generator networks, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 1073-1083, (2017)
- [2] Peters M E, Neumann M, Iyyer M, Et al., Deep contex-tualized word representations
- [3] Radford A, Narasimhan K, Salimans T, Et al., Im-proving language understanding by generative pre-training
- [4] Devlin J, Chang M W, Lee K, Et al., BERT: pre-training of deep bidirectional transformers for lanuage understanding, Proceedings of NAACL-HLT 2019, pp. 4171-4186, (2019)
- [5] Hu Xu, Bing Liu, Lei Shu, Et al., BERT post-training for review reading comprehension and aspect-based sentiment analysis, Proceedings of NAACL-HLT 2019, pp. 2324-2335, (2019)
- [6] Yang Liu, Lapata M., Text summarization with pretrained encoders, Proceedings of the 2019 Con-ference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 3730-3740, (2019)
- [7] Xingxing Zhang, Furu Wei, Ming Zhou, HIBERT: Document level pre-training of hierarchical bidirec-tional transformers for document summarization, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5059-5069, (2019)
- [8] Nallapati R, Zhou B, Gulcehre C, Et al., Abstractive text summarization using sequence-tosequence RNNs and beyond, Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 280-290, (2016)
- [9] Qingyu Zhou, Nan Yang, Furu Wei, Et al., Selective encoding for abstractive sentence summarization, Proceedings of the 55th Annual Meeting of the As-sociation for Computational Linguistics, pp. 1095-1104, (2017)
- [10] Hsu W T, Lin C K, Lee M Y, Et al., A unified model for extractive and abstractive summarization using inconsistency loss, Proceedings of the 56th Annual Meeting of the Association for Computational Lin-guistics, pp. 132-141, (2018)