Counterfactual Off-Policy Training for Neural Dialogue Generation

Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang


Abstract
Open-domain dialogue generation suffers from data insufficiency because the space of potential responses is vast. In this paper, we propose to explore potential responses by counterfactual reasoning. Given an observed response, the counterfactual reasoning model automatically infers the outcome of an alternative policy that could have been taken. The resulting counterfactual response, synthesized in hindsight, is of higher quality than a response synthesized from scratch. Training on the counterfactual responses under the adversarial learning framework helps to explore the high-reward area of the potential response space. An empirical study on the DailyDialog dataset shows that our approach significantly outperforms the HRED model as well as conventional adversarial learning approaches.
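To make the abstract's "infers the outcome of an alternative policy" concrete, the sketch below illustrates one common instantiation of counterfactual inference for a single decoding step: the Gumbel-Max structural causal model (Oberst and Sontag, 2019), on which this paper builds. Posterior Gumbel noise consistent with the token observed under the old policy is sampled, then replayed under the updated policy. This is a minimal illustration under those assumptions, not the authors' exact implementation; the function names and the NumPy setup are hypothetical.

```python
import numpy as np

def counterfactual_token(logp_old, logp_new, observed, rng):
    """Gumbel-Max SCM counterfactual for one decoding step.

    logp_old: normalized log-probs of the policy that produced `observed`.
    logp_new: normalized log-probs of the alternative (updated) policy.
    Returns the token the new policy would have emitted under the same
    exogenous noise that explains the observed token.
    """
    k = len(logp_old)
    # 1) With normalized log-probs, max_i(logp_old[i] + g_i) ~ Gumbel(0);
    #    sample that maximum via inverse CDF: -log(-log(U)).
    z = -np.log(-np.log(rng.uniform()))
    # 2) The other perturbed values are Gumbel(logp_old[i]) truncated at z.
    g = logp_old - np.log(-np.log(rng.uniform(size=k)))  # Gumbel(logp_old[i])
    values = -np.log(np.exp(-z) + np.exp(-g))            # truncate below z
    values[observed] = z  # the observed token keeps the maximum
    # 3) Recover the exogenous noise and replay it under the new policy.
    noise = values - logp_old
    return int(np.argmax(logp_new + noise))

# Tiny demo with made-up distributions over a 3-token vocabulary.
rng = np.random.default_rng(0)
logp_old = np.log(np.array([0.7, 0.2, 0.1]))
logp_new = np.log(np.array([0.1, 0.2, 0.7]))
cf = counterfactual_token(logp_old, logp_new, observed=0, rng=rng)
```

In the adversarial setup the abstract describes, a counterfactual response decoded step by step this way would be scored by the discriminator, and that reward would drive the policy update; the sketch covers only the per-token inference.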
Anthology ID:
2020.emnlp-main.276
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3438–3448
URL:
https://aclanthology.org/2020.emnlp-main.276
DOI:
10.18653/v1/2020.emnlp-main.276
Cite (ACL):
Qingfu Zhu, Wei-Nan Zhang, Ting Liu, and William Yang Wang. 2020. Counterfactual Off-Policy Training for Neural Dialogue Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3438–3448, Online. Association for Computational Linguistics.
Cite (Informal):
Counterfactual Off-Policy Training for Neural Dialogue Generation (Zhu et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.276.pdf
Video:
https://slideslive.com/38939239