Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning
Abstract
In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang and Li, 2015), and a new way to mix between model-based estimates and importance-sampling-based estimates.
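For context on the ingredients named in the abstract, the sketch below shows a generic per-decision importance sampling estimator and the trajectory-wise doubly robust recursion of Jiang and Li (2015), which the paper builds on. The function names, the `q_hat`/`v_hat` model interface, and the trajectory format are illustrative assumptions; this is not the paper's final estimator, which further extends the doubly robust approach and blends it with a model-based estimate.

```python
def per_decision_is(trajectory, pi_e, pi_b, gamma=1.0):
    """Per-decision importance sampling estimate of the evaluation policy's
    return from one trajectory collected under the behavior policy.

    trajectory: list of (state, action, reward) tuples generated by pi_b.
    pi_e, pi_b: callables returning the probability of taking `action` in `state`.
    (Interfaces are illustrative assumptions, not from the paper.)
    """
    estimate, rho = 0.0, 1.0
    for t, (s, a, r) in enumerate(trajectory):
        rho *= pi_e(s, a) / pi_b(s, a)      # cumulative importance weight up to time t
        estimate += (gamma ** t) * rho * r  # reweight each observed reward
    return estimate


def doubly_robust(trajectory, pi_e, pi_b, q_hat, v_hat, gamma=1.0):
    """Doubly robust estimate for one trajectory, in the style of Jiang and Li (2015).

    q_hat(s, a) and v_hat(s) are value estimates from an approximate model;
    the importance-weighted correction term removes the model's bias in expectation.
    """
    dr = 0.0
    for s, a, r in reversed(trajectory):    # backward recursion over time steps
        rho = pi_e(s, a) / pi_b(s, a)
        dr = v_hat(s) + rho * (r + gamma * dr - q_hat(s, a))
    return dr

# Averaging either estimator over many behavior-policy trajectories gives an
# estimate of the evaluation policy's expected return without deploying it.
```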
- Publication: arXiv e-prints
- Pub Date: April 2016
- arXiv: arXiv:1604.00923
- Bibcode: 2016arXiv160400923T
- Keywords: Computer Science - Machine Learning; Computer Science - Artificial Intelligence