50 items in total
- [1] Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models [J]. 2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 1716 - 1731
- [2] Pre-trained Language Model based Ranking in Baidu Search [J]. KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 4014 - 4022
- [4] ERNIE-GeoL: A Geography-and-Language Pre-trained Model and its Applications in Baidu Maps [J]. PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 3029 - 3039
- [5] Pre-trained Language Model Representations for Language Generation [J]. 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 4052 - 4059
- [6] AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 421 - 435
- [7] Pre-trained Language Model for Web-scale Retrieval in Baidu Search [J]. KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 3365 - 3375
- [9] Adder Encoder for Pre-trained Language Model [J]. CHINESE COMPUTATIONAL LINGUISTICS, CCL 2023, 2023, 14232 : 339 - 347