License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITCS.2021.41
URN: urn:nbn:de:0030-drops-135806
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2021/13580/


Goldwasser, Shafi; Rothblum, Guy N.; Shafer, Jonathan; Yehudayoff, Amir

Interactive Proofs for Verifying Machine Learning

pdf-format:
LIPIcs-ITCS-2021-41.pdf (0.6 MB)


Abstract

We consider the following question: using a source of labeled data and interaction with an untrusted prover, what is the complexity of verifying that a given hypothesis is "approximately correct"? We study interactive proof systems for PAC verification, where a verifier that interacts with a prover is required to accept good hypotheses and reject bad ones. Both the verifier and the prover are efficient and have access to labeled data samples from an unknown distribution. We are interested in cases where the verifier can use significantly less data than is required for (agnostic) PAC learning, or use a substantially cheaper data source (e.g., using only random samples for verification, even though learning requires membership queries). We believe that today, when data and data-driven algorithms are quickly gaining prominence, the question of verifying purported outcomes of data analyses is very well-motivated.
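To make the verifier's basic task concrete, here is a minimal Python sketch of the error-estimation primitive such a verifier relies on: estimating the true error of a proposed hypothesis from i.i.d. labeled samples and accepting only if it is small. The names (verify_hypothesis, sample_source) are hypothetical, and this covers only the easy half of PAC verification; the hard part, which the paper's protocols address, is certifying that no hypothesis in the class does substantially better.

import math

def verify_hypothesis(h, sample_source, epsilon, delta):
    """Accept h iff its empirical error is at most epsilon/2.

    h: candidate hypothesis, a function x -> label.
    sample_source: returns one labeled pair (x, y) drawn i.i.d.
        from the unknown distribution D.
    By Hoeffding's inequality, m = ceil(8*ln(2/delta)/epsilon^2)
    samples estimate err_D(h) = Pr[h(x) != y] to within epsilon/4,
    so with probability >= 1 - delta we accept any h with error
    <= epsilon/4 and reject any h with error > 3*epsilon/4.
    """
    m = math.ceil(8 * math.log(2 / delta) / epsilon ** 2)
    mistakes = 0
    for _ in range(m):
        x, y = sample_source()
        if h(x) != y:
            mistakes += 1
    return mistakes / m <= epsilon / 2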
We show three main results. First, we prove that for a specific hypothesis class, verification is significantly cheaper than learning in terms of sample complexity, even if the verifier engages with the prover only in a single-round (NP-like) protocol. Moreover, for this class we prove that single-round verification is also significantly cheaper than testing closeness to the class. Second, for the broad class of Fourier-sparse boolean functions, we show a multi-round (IP-like) verification protocol, where the prover uses membership queries, and the verifier is able to assess the result while only using random samples. Third, we show that verification is not always more efficient. Namely, we show a class of functions where verification requires as many samples as learning does, up to a logarithmic factor.
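The second result hinges on an asymmetry that is easy to illustrate: locating the heavy Fourier coefficients of a boolean function (as the Goldreich-Levin and Kushilevitz-Mansour algorithms do) requires membership queries, but estimating any single claimed coefficient hat{f}(S) = E_x[f(x) * chi_S(x)] needs only uniform random labeled samples, which is what lets a verifier spot-check a prover's claims. A hedged sketch, assuming samples (x, f(x)) with x uniform over {-1,+1}^n; the function names are hypothetical:

import math

def chi(S, x):
    """Character chi_S(x) = product over i in S of x[i], for x in {-1,+1}^n."""
    p = 1
    for i in S:
        p *= x[i]
    return p

def estimate_fourier_coefficient(S, sample_source, epsilon, delta):
    """Estimate hat{f}(S) = E_x[f(x) * chi_S(x)] to additive error
    epsilon with probability >= 1 - delta, using only uniform random
    labeled samples (x, f(x)).

    Each term f(x)*chi_S(x) lies in {-1,+1}, so by Hoeffding
    m = ceil(2*ln(2/delta)/epsilon^2) samples suffice.
    """
    m = math.ceil(2 * math.log(2 / delta) / epsilon ** 2)
    total = 0.0
    for _ in range(m):
        x, fx = sample_source()
        total += fx * chi(S, x)
    return total / m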

BibTeX Entry

@InProceedings{goldwasser_et_al:LIPIcs.ITCS.2021.41,
  author =	{Shafi Goldwasser and Guy N. Rothblum and Jonathan Shafer and Amir Yehudayoff},
  title =	{{Interactive Proofs for Verifying Machine Learning}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{41:1--41:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{James R. Lee},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2021/13580},
  URN =		{urn:nbn:de:0030-drops-135806},
  doi =		{10.4230/LIPIcs.ITCS.2021.41},
  annote =	{Keywords: PAC learning, Fourier analysis of boolean functions, Complexity gaps, Complexity lower bounds, Goldreich-Levin algorithm, Kushilevitz-Mansour algorithm, Distribution testing}
}

Keywords: PAC learning, Fourier analysis of boolean functions, Complexity gaps, Complexity lower bounds, Goldreich-Levin algorithm, Kushilevitz-Mansour algorithm, Distribution testing
Collection: 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)
Issue Date: 2021
Date of publication: 04.02.2021

