License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITP.2022.32
URN: urn:nbn:de:0030-drops-167413
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/16741/
Yeager, Jared;
Moss, J. Eliot B.;
Norrish, Michael;
Thomas, Philip S.
Mechanizing Soundness of Off-Policy Evaluation
Abstract
There are reinforcement learning scenarios - e.g., in medicine - where we are compelled to be as confident as possible that a policy change will result in an improvement before implementing it. In such scenarios, we can employ off-policy evaluation (OPE). The basic idea of OPE is to record histories of behaviors under the current policy, and then to estimate the quality of a proposed new policy by considering what the behavior would have been under that new policy. Because we evaluate the new policy without actually deploying it, the evaluation is "off-policy", hence the name OPE. Applying a concentration inequality to the estimate, we derive a confidence interval for the expected quality of the new policy. If this confidence interval lies above the expected quality of the current policy, we can change policies with high confidence that we will do no harm.
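To make the pipeline above concrete, the following is a minimal sketch (in Python, not the paper's HOL4 development; the policy callables, trajectory format, and return bound b are illustrative assumptions) of an importance-sampling OPE estimate combined with a Hoeffding-style lower confidence bound:

import math

def is_estimates(histories, pi_e, pi_b):
    """Per-history importance-sampling estimates of the new (evaluation) policy's return.

    histories: list of (states, actions, rewards) trajectories recorded under pi_b.
    pi_e(s, a), pi_b(s, a): action probabilities under the evaluation and behavior
    policies (hypothetical callables; the names are illustrative only).
    """
    estimates = []
    for states, actions, rewards in histories:
        weight = 1.0
        for s, a in zip(states, actions):
            weight *= pi_e(s, a) / pi_b(s, a)    # likelihood ratio of the trajectory
        estimates.append(weight * sum(rewards))  # importance-weighted (undiscounted) return
    return estimates

def hoeffding_lower_bound(xs, b, delta):
    """(1 - delta)-confidence lower bound on E[X] for i.i.d. samples X_i in [0, b],
    via Hoeffding's inequality: mean(X) - b * sqrt(ln(1/delta) / (2 n))."""
    n = len(xs)
    mean = sum(xs) / n
    return mean - b * math.sqrt(math.log(1.0 / delta) / (2.0 * n))

A policy change would then be accepted only when the lower bound exceeds the current policy's expected quality. Note that Hoeffding's inequality requires the importance-weighted returns to lie in a known bounded range; making such preconditions explicit is exactly the kind of clarification the mechanization provides.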
We focus here on the mathematics of this method by mechanizing the soundness of off-policy evaluation. A natural side effect of the mechanization is both to clarify all the result’s mathematical assumptions and preconditions and to further develop HOL4’s library of verified statistical mathematics, including concentration inequalities. More significantly, the OPE method relies on importance sampling, whose soundness we prove using a measure-theoretic approach. In fact, we generalize the standard result, showing it for contexts comprising both discrete and continuous probability distributions.
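For reference, the importance-sampling step rests (informally) on the standard change-of-measure identity below; the mechanized result makes the measure-theoretic preconditions precise and covers distributions with both discrete and continuous parts. Here p and q are illustrative notation for the target and sampling distributions, not the paper's formal statement:

\[
  \mathbb{E}_{x \sim p}\!\left[ f(x) \right]
  \;=\;
  \mathbb{E}_{x \sim q}\!\left[ \frac{p(x)}{q(x)} \, f(x) \right],
  \qquad \text{assuming } q(x) = 0 \implies p(x)\, f(x) = 0 .
\]

In OPE, p plays the role of the trajectory distribution under the proposed policy, q that under the current (behavior) policy, and f the return.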
BibTeX Entry
@InProceedings{yeager_et_al:LIPIcs.ITP.2022.32,
author = {Yeager, Jared and Moss, J. Eliot B. and Norrish, Michael and Thomas, Philip S.},
title = {{Mechanizing Soundness of Off-Policy Evaluation}},
booktitle = {13th International Conference on Interactive Theorem Proving (ITP 2022)},
pages = {32:1--32:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-252-5},
ISSN = {1868-8969},
year = {2022},
volume = {237},
editor = {Andronick, June and de Moura, Leonardo},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2022/16741},
URN = {urn:nbn:de:0030-drops-167413},
doi = {10.4230/LIPIcs.ITP.2022.32},
annote = {Keywords: Formal Methods, HOL4, Reinforcement Learning, Off-Policy Evaluation, Concentration Inequality, Hoeffding}
}