License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.APPROX-RANDOM.2014.793
URN: urn:nbn:de:0030-drops-47391
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2014/4739/


Klivans, Adam; Kothari, Pravesh

Embedding Hard Learning Problems Into Gaussian Space



Abstract

We give the first representation-independent hardness result for agnostically learning halfspaces with respect to the Gaussian distribution. We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in theoretical computer science, and show that any algorithm for agnostically learning halfspaces requires n^{Ω(log(1/ε))} time under the assumption that k-sparse LPN requires n^{Ω(k)} time, ruling out a polynomial-time algorithm for the problem. As far as we are aware, this is the first representation-independent hardness result for supervised learning when the underlying distribution is restricted to be a Gaussian.
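To make the source problem concrete, the sketch below (a minimal Python illustration; the function and parameter names are our own, not the paper's) generates k-sparse LPN examples: each x is uniform on the hypercube, and the label is the parity of a hidden set of k coordinates, flipped independently with noise rate eta.

    import random

    def sparse_lpn_example(n, secret, eta):
        # One k-sparse LPN example: x is uniform on {0,1}^n; the label is
        # the parity of the coordinates indexed by `secret` (|secret| = k),
        # flipped with probability eta. Illustrative sketch only.
        x = [random.randint(0, 1) for _ in range(n)]
        parity = sum(x[i] for i in secret) % 2
        noise = 1 if random.random() < eta else 0
        return x, parity ^ noise

    # Example: n = 20 variables, a hidden parity on k = 3 coordinates, 10% noise.
    secret = random.sample(range(20), 3)
    x, y = sparse_lpn_example(20, secret, 0.1)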
We also show that agnostically learning sparse polynomials with respect to the Gaussian distribution in polynomial time is as hard as PAC learning DNFs over the uniform distribution in polynomial time. This complements the surprising result of Andoni et al. (2013), who show that sparse polynomials are learnable under random Gaussian noise in polynomial time.
Taken together, these results show the inherent difficulty of designing supervised learning algorithms in Euclidean space, even in the presence of strong distributional assumptions. Our results use a novel embedding of random labeled examples from the uniform distribution on the Boolean hypercube into random labeled examples from the Gaussian distribution, which allows us to relate the hardness of learning problems on two different domains and distributions.
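As one illustration of the basic idea behind such an embedding (our sketch, not necessarily the paper's exact construction): a uniform example x in {-1,+1}^n can be mapped to a standard Gaussian point by attaching the signs of x to independently drawn Gaussian magnitudes, since the sign and magnitude of a standard Gaussian coordinate are independent and the sign is uniform.

    import numpy as np

    def embed_to_gaussian(x):
        # Map x in {-1,+1}^n to a point in R^n distributed as N(0, I_n):
        # draw magnitudes |z_i| from a standard Gaussian and attach the
        # sign x_i to each. When x is uniform on the hypercube, the output
        # is exactly standard Gaussian. Illustrative sketch only; see the
        # paper for the actual reduction.
        magnitudes = np.abs(np.random.randn(len(x)))
        return x * magnitudes

    # A hypercube-labeled example (x, y) becomes a Gaussian-labeled
    # example (embed_to_gaussian(x), y); the label carries over unchanged.
    x = np.random.choice([-1.0, 1.0], size=10)
    g = embed_to_gaussian(x)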

BibTeX - Entry

@InProceedings{klivans_et_al:LIPIcs:2014:4739,
  author =	{Adam Klivans and Pravesh Kothari},
  title =	{{Embedding Hard Learning Problems Into Gaussian Space}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{793--809},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Klaus Jansen and Jos{\'e} D. P. Rolim and Nikhil R. Devanur and Cristopher Moore},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2014/4739},
  URN =		{urn:nbn:de:0030-drops-47391},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.793},
  annote =	{Keywords: distribution-specific hardness of learning, gaussian space, halfspace-learning, agnostic learning}
}

Keywords: distribution-specific hardness of learning, gaussian space, halfspace-learning, agnostic learning
Collection: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)
Issue Date: 2014
Date of publication: 04.09.2014

