Nested-Wasserstein Self-Imitation Learning for Sequence Generation

Cited by: 0
Authors
Zhang, Ruiyi [1 ]
Chen, Changyou [2 ]
Gan, Zhe [3 ]
Wen, Zheng [4 ]
Wang, Wenlin [1 ]
Carin, Lawrence [1 ]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
[2] SUNY Buffalo, Buffalo, NY USA
[3] Microsoft Dynam 365 AI Res, Redmond, WA USA
[4] DeepMind, London, England
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning (RL) has been widely studied for improving sequence-generation models. However, the conventional rewards used for RL training typically cannot capture sufficient semantic information and therefore induce model bias. Further, the sparse and delayed rewards make RL exploration inefficient. To alleviate these issues, we propose the concept of nested-Wasserstein distance for distributional semantic matching. To further exploit it, a novel nested-Wasserstein self-imitation learning framework is developed, encouraging the model to exploit historical high-reward sequences for enhanced exploration and better semantic matching. Our solution can be understood as approximately executing proximal policy optimization with Wasserstein trust-regions. Experiments on a variety of unconditional and conditional sequence-generation tasks demonstrate that the proposed approach consistently leads to improved performance.
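The nested-Wasserstein distance in the paper is defined over distributions of sequences; as a rough illustration of the underlying idea of Wasserstein-based semantic matching, the sketch below computes an entropy-regularized Wasserstein (Sinkhorn) distance between the token embeddings of a generated sequence and a reference sequence, and turns it into a reward. The function name, cosine cost, uniform token weights, and hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sinkhorn_wasserstein(X, Y, eps=0.1, n_iters=50):
    """Entropy-regularized Wasserstein distance between two sets of
    token embeddings X (n x d) and Y (m x d) via Sinkhorn iterations.
    Uniform weights over tokens; cosine cost between embeddings."""
    # Cosine cost matrix between normalized embeddings
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    C = 1.0 - Xn @ Yn.T                         # (n, m) cost matrix
    a = np.full(X.shape[0], 1.0 / X.shape[0])   # uniform source weights
    b = np.full(Y.shape[0], 1.0 / Y.shape[0])   # uniform target weights
    K = np.exp(-C / eps)                        # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):                    # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]             # approximate transport plan
    return float(np.sum(P * C))                 # total transport cost

# Illustrative usage: score a generated sequence by semantic closeness
# to a reference (higher reward = smaller transport cost).
# rng = np.random.default_rng(0)
# gen_emb, ref_emb = rng.normal(size=(12, 64)), rng.normal(size=(15, 64))
# reward = -sinkhorn_wasserstein(gen_emb, ref_emb)
```

Such a transport-based score compares whole sets of embeddings rather than aligned token pairs, which is what allows the reward to reflect distributional semantic similarity instead of exact n-gram overlap.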
Pages: 422-432
Number of pages: 11