License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.APPROX/RANDOM.2021.60
URN: urn:nbn:de:0030-drops-147534
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2021/14753/


Garg, Sumegha ; Kothari, Pravesh K. ; Liu, Pengda ; Raz, Ran

Memory-Sample Lower Bounds for Learning Parity with Noise



Abstract

In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn x = (x₁,…,x_n) ∈ {0,1}ⁿ from a stream of random linear equations over 𝔽₂ that are correct with probability 1/2+ε and flipped with probability 1/2-ε (0 < ε < 1/2), that any learning algorithm requires either a memory of size Ω(n²/ε) or an exponential number of samples.
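For concreteness, the following is a minimal sketch (not from the paper) of the noisy sample stream described above; the function name and parameters are illustrative choices.

import random

def lpn_stream(x, eps, num_samples):
    # Yield noisy parity samples: each a_i is uniform over {0,1}^n, and
    # b_i equals <a_i, x> mod 2 with probability 1/2 + eps,
    # and is flipped with probability 1/2 - eps.
    n = len(x)
    for _ in range(num_samples):
        a = [random.randint(0, 1) for _ in range(n)]
        b = sum(ai * xi for ai, xi in zip(a, x)) % 2
        if random.random() < 0.5 - eps:  # flip with probability 1/2 - eps
            b ^= 1
        yield a, b

# Example usage: 10 samples for a random 8-bit secret with eps = 0.1.
secret = [random.randint(0, 1) for _ in range(8)]
samples = list(lpn_stream(secret, 0.1, 10))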
In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [Garg et al., 2018], when the samples are noisy. A matrix M: A × X → {-1,1} corresponds to the following learning problem with error parameter ε: an unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a₁, b₁), (a₂, b₂), …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i,x) with probability 1/2+ε and b_i = -M(a_i,x) with probability 1/2-ε (0 < ε < 1/2). Assume that k, ℓ, r are such that any submatrix of M with at least 2^{-k} ⋅ |A| rows and at least 2^{-ℓ} ⋅ |X| columns has a bias of at most 2^{-r}. We show that any learning algorithm for the learning problem corresponding to M, with error parameter ε, requires either a memory of size at least Ω((k⋅ℓ)/ε), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{-Ω(r)}). In particular, this shows that for a large class of learning problems (the same class as in [Garg et al., 2018]), any learning algorithm requires either a memory of size at least Ω(((log|X|)⋅(log|A|))/ε) or an exponential number of noisy samples.
Our proof is based on adapting the arguments in [Ran Raz, 2017; Garg et al., 2018] to the noisy case.
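The bias condition on submatrices of M can be made concrete with a small sketch (again illustrative, not from the paper): the bias of the submatrix restricted to a row set and a column set is the normalized absolute sum of its ±1 entries, and the hypothesis above asks that every submatrix with at least 2^{-k} ⋅ |A| rows and at least 2^{-ℓ} ⋅ |X| columns has bias at most 2^{-r}.

def submatrix_bias(M, rows, cols):
    # M is a 2D list (or similar) with entries in {-1, +1};
    # rows and cols index the submatrix whose bias is computed.
    total = sum(M[a][x] for a in rows for x in cols)
    return abs(total) / (len(rows) * len(cols))

# Example: the 2x2 matrix M(a, x) = (-1)^(a*x) for 1-bit parity,
# restricted to all rows and all columns.
M = [[1, 1], [1, -1]]
print(submatrix_bias(M, [0, 1], [0, 1]))  # 0.5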

BibTeX - Entry

@InProceedings{garg_et_al:LIPIcs.APPROX/RANDOM.2021.60,
  author =	{Garg, Sumegha and Kothari, Pravesh K. and Liu, Pengda and Raz, Ran},
  title =	{{Memory-Sample Lower Bounds for Learning Parity with Noise}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2021/14753},
  URN =		{urn:nbn:de:0030-drops-147534},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.60},
  annote =	{Keywords: memory-sample tradeoffs, learning parity under noise, space lower bound, branching program}
}

Keywords: memory-sample tradeoffs, learning parity under noise, space lower bound, branching program
Collection: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)
Issue Date: 2021
Date of publication: 15.09.2021

