License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITP.2022.32
URN: urn:nbn:de:0030-drops-167413
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/16741/


Yeager, Jared; Moss, J. Eliot B.; Norrish, Michael; Thomas, Philip S.

Mechanizing Soundness of Off-Policy Evaluation

pdf-format:
LIPIcs-ITP-2022-32.pdf (0.8 MB)


Abstract

There are reinforcement learning scenarios, e.g., in medicine, where we are compelled to be as confident as possible that a policy change will result in an improvement before implementing it. In such scenarios, we can employ off-policy evaluation (OPE). The basic idea of OPE is to record histories of behavior under the current policy, and then to estimate the quality of a proposed new policy by considering what the behavior would have been under that new policy. Because we evaluate the new policy without actually deploying it, the evaluation is "off-policy". Applying a concentration inequality to the estimate yields a confidence interval for the expected quality of the new policy. If this confidence interval lies above that of the current policy, we can change policies with high confidence that we will do no harm.
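As an illustration only (not code from the paper's HOL4 development), the following Python sketch shows the pipeline described above: importance-weighted returns computed from logged trajectories, followed by a Hoeffding-style lower bound on the new policy's expected return. The trajectory format, the prob method on policy objects, and the clipping bound b are assumptions made for this sketch.

# Illustrative sketch of high-confidence off-policy evaluation.
# Assumptions: each trajectory is a list of (state, action, reward) triples,
# policies expose prob(state, action), rewards (and hence returns) are
# non-negative, and per-trajectory estimates are clipped above at b so that
# Hoeffding's inequality for i.i.d. values in [0, b] applies. Clipping can
# only lower the mean, so the resulting lower bound stays conservative.
import math

def is_estimate(trajectory, pi_b, pi_e, b):
    """Importance-sampled return of one logged trajectory, clipped to [0, b]."""
    weight, ret = 1.0, 0.0
    for state, action, reward in trajectory:
        weight *= pi_e.prob(state, action) / pi_b.prob(state, action)
        ret += reward
    return min(weight * ret, b)

def hoeffding_lower_bound(estimates, b, delta):
    """With probability at least 1 - delta, the expected (clipped) return of
    pi_e is at least this value."""
    n = len(estimates)
    mean = sum(estimates) / n
    return mean - b * math.sqrt(math.log(1.0 / delta) / (2.0 * n))

A typical use would be hoeffding_lower_bound([is_estimate(t, pi_b, pi_e, b) for t in logged_trajectories], b, delta=0.05); if that bound exceeds the current policy's performance, the policy change is made.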
We focus here on the mathematics of this method by mechanizing the soundness of off-policy evaluation. A natural side effect of the mechanization is to clarify all of the result's mathematical assumptions and preconditions, and to further develop HOL4's library of verified statistical mathematics, including concentration inequalities. More significantly, the OPE method relies on importance sampling, whose soundness we prove using a measure-theoretic approach. In fact, we generalize the standard result, showing that it holds for contexts comprising both discrete and continuous probability distributions.
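For reference, the standard importance sampling identity that the paper generalizes can be stated, in its familiar density form, as

\[ \mathbb{E}_{x \sim q}[f(x)] \;=\; \mathbb{E}_{x \sim p}\!\left[ f(x)\,\frac{q(x)}{p(x)} \right], \qquad \text{provided } p(x) > 0 \text{ wherever } q(x)\,f(x) \neq 0. \]

In OPE, p is the trajectory distribution induced by the current (behavior) policy, q is the distribution induced by the proposed policy, and f is the return; the mechanized result is stated measure-theoretically so that it also covers contexts mixing discrete and continuous distributions.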

BibTeX Entry

@InProceedings{yeager_et_al:LIPIcs.ITP.2022.32,
  author =	{Yeager, Jared and Moss, J. Eliot B. and Norrish, Michael and Thomas, Philip S.},
  title =	{{Mechanizing Soundness of Off-Policy Evaluation}},
  booktitle =	{13th International Conference on Interactive Theorem Proving (ITP 2022)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-252-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{237},
  editor =	{Andronick, June and de Moura, Leonardo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2022/16741},
  URN =		{urn:nbn:de:0030-drops-167413},
  doi =		{10.4230/LIPIcs.ITP.2022.32},
  annote =	{Keywords: Formal Methods, HOL4, Reinforcement Learning, Off-Policy Evaluation, Concentration Inequality, Hoeffding}
}

Keywords: Formal Methods, HOL4, Reinforcement Learning, Off-Policy Evaluation, Concentration Inequality, Hoeffding
Collection: 13th International Conference on Interactive Theorem Proving (ITP 2022)
Issue Date: 2022
Date of publication: 03.08.2022
Supplementary Material: Software (Source Code): https://github.com/jdyeager/itp_ope archived at: https://archive.softwareheritage.org/swh:1:dir:a554d5232ce611cfaa8df6b8c4fa0c164e51b2bb

