%0 Conference Proceedings
%T RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees
%A Liu, Tengxiao
%A Guo, Qipeng
%A Hu, Xiangkun
%A Zhang, Yue
%A Qiu, Xipeng
%A Zhang, Zheng
%Y Goldberg, Yoav
%Y Kozareva, Zornitsa
%Y Zhang, Yue
%S Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
%D 2022
%8 December
%I Association for Computational Linguistics
%C Abu Dhabi, United Arab Emirates
%F liu-etal-2022-rlet
%X Interpreting the reasoning process from questions to answers poses a challenge in approaching explainable QA. A recently proposed structured reasoning format, the entailment tree, offers explicit logical deductions with entailment steps in a tree structure. To generate entailment trees, prior single-pass sequence-to-sequence models lack visible internal decision probabilities, while stepwise approaches are supervised with extracted single-step data and cannot model the tree as a whole. In this work, we propose RLET, a Reinforcement Learning based Entailment Tree generation framework, which is trained utilising the cumulative signals across the whole tree. RLET iteratively performs single-step reasoning with sentence selection and deduction generation modules, from which the training signal is accumulated across the tree with an elaborately designed aligned reward function that is consistent with the evaluation. To the best of our knowledge, we are the first to introduce RL into the entailment tree generation task. Experiments on three settings of the EntailmentBank dataset demonstrate the strength of using the RL framework.
%R 10.18653/v1/2022.emnlp-main.483
%U https://aclanthology.org/2022.emnlp-main.483
%U https://doi.org/10.18653/v1/2022.emnlp-main.483
%P 7177-7189