- [61] Wallace E, Rodriguez P, Feng Shi, et al., Trick me if you can: Human-in-the-loop generation of adversarial question answering examples, Transactions of the Association for Computational Linguistics, 7, 3, pp. 387-401, (2019)
- [62] Cheng Minhao, Wei Wei, Hsieh C., Evaluating and enhancing the robustness of dialogue systems: A case study on a negotiation agent, Proc of the Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3325-3335, (2019)
- [63] Minervini P, Riedel S., Adversarially regularising neural NLI models to integrate logical background knowledge, Proc of the 22nd Conf on Computational Natural Language Learning, pp. 65-74, (2018)
- [64] Wang Yicheng, Bansal M., Robust machine comprehension models via adversarial training, Proc of the Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 575-581, (2018)
- [65] Minervini P, Demeester T, Rocktäschel T, et al., Adversarial sets for regularising neural link predictors, Proc of the 33rd Conf on Uncertainty in Artificial Intelligence, pp. 1-10, (2017)
- [66] Miyato T, Dai A, Goodfellow I., Adversarial training methods for semi-supervised text classification, Proc of the 5th Int Conf on Learning Representations, pp. 1-11, (2017)
- [67] Liu Xiaodong, Cheng Hao, He Pengcheng, et al., Adversarial training for large neural language models, arXiv preprint, arXiv:2004.08994, pp. 1-13, (2020)
- [68] Liu Kai, Liu Xin, Yang An, et al., A robust adversarial training approach to machine reading comprehension, Proc of the 34th AAAI Conf on Artificial Intelligence, pp. 8392-8400, (2020)
- [69] Liu Hui, Zhang Yongzheng, Wang Yipeng, et al., Joint character-level word embedding and adversarial stability training to defend adversarial text, Proc of the 34th AAAI Conf on Artificial Intelligence, pp. 8384-8391, (2020)
- [70] Li Yitong, Baldwin T, Cohn T., Towards robust and privacy-preserving text representations, Proc of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 25-30, (2018)