License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITC.2023.14
URN: urn:nbn:de:0030-drops-183421
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2023/18342/


Holmgren, Justin; Jawale, Ruta

Locally Covert Learning

pdf-format:
LIPIcs-ITC-2023-14.pdf (0.6 MB)


Abstract

The goal of a covert learning algorithm is to learn a function f by querying it, while ensuring that an adversary, who sees all queries and their responses, is unable to (efficiently) learn any more about f than they could learn from random input-output pairs. We focus on a relaxation that we call local covertness, in which queries are distributed across k servers and we only limit what is learnable by k - 1 colluding servers.
For any constant k, we give a locally covert algorithm for efficiently learning any Fourier-sparse function (technically, our notion of learning is improper, agnostic, and with respect to the uniform distribution). Our result holds unconditionally and for computationally unbounded adversaries. Prior to our work, such an algorithm was known only for the special case of O(log n)-juntas, and only with k = 2 servers [Yuval Ishai et al., 2019].
Our main technical observation is that the original Goldreich-Levin algorithm only utilizes i.i.d. pairs of correlated queries, where each half of every pair is uniformly random. We give a simple generalization of this algorithm in which pairs are replaced by k-tuples in which any k - 1 components are jointly uniform. The cost of this generalization is that the number of queries needed grows exponentially with k.
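For intuition only (this sketch is not taken from the paper): in the classical Goldreich-Levin setting, the correlation E_x[f(x) f(x XOR z)] equals the sum over S of \hat{f}(S)^2 chi_S(z), and it is estimated from pairs (x, x XOR z) in which each half is marginally uniform. One natural k-tuple analogue of such a pair is (x_1, ..., x_{k-1}, x_1 XOR ... XOR x_{k-1} XOR z), in which any k - 1 components are jointly uniform; its product expectation is the sum over S of \hat{f}(S)^k chi_S(z). The Python sketch below illustrates this k-tuple estimator; the tuple construction, function names, and parameters are illustrative assumptions and need not match the paper's actual algorithm.

import random
from functools import reduce

def xor(a, b):
    """Coordinate-wise XOR of two {0,1}-vectors of equal length."""
    return [ai ^ bi for ai, bi in zip(a, b)]

def estimate_kth_moment(f, n, z, k, num_samples=10_000):
    """Monte-Carlo estimate of sum_S fhat(S)^k * chi_S(z) for f: {0,1}^n -> {-1,+1},
    using k-tuples of queries in which any k-1 components are jointly uniform.
    (Illustrative sketch, not the paper's algorithm.)"""
    total = 0.0
    for _ in range(num_samples):
        # Draw k-1 independent uniform points.
        xs = [[random.randint(0, 1) for _ in range(n)] for _ in range(k - 1)]
        # Fix the last query so that the XOR of the whole k-tuple equals z;
        # any k-1 components of the tuple are then jointly uniform.
        last = reduce(xor, xs, z)
        # Multiply the k query responses together.
        total += reduce(lambda acc, x: acc * f(x), xs + [last], 1.0)
    return total / num_samples

if __name__ == "__main__":
    # Example: f = chi_{1,3}(x) = (-1)^(x_1 + x_3) has a single Fourier
    # coefficient equal to 1, so the estimate should be close to chi_{1,3}(z).
    f = lambda x: (-1) ** (x[0] + x[2])
    z = [1, 0, 0, 0]
    print(estimate_kth_moment(f, n=4, z=z, k=3))  # close to -1

As in the abstract, the cost of replacing pairs by k-tuples shows up in the sample complexity: the moments \hat{f}(S)^k shrink with k, so the number of queries needed to resolve them grows accordingly.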

BibTeX Entry

@InProceedings{holmgren_et_al:LIPIcs.ITC.2023.14,
  author =	{Holmgren, Justin and Jawale, Ruta},
  title =	{{Locally Covert Learning}},
  booktitle =	{4th Conference on Information-Theoretic Cryptography (ITC 2023)},
  pages =	{14:1--14:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-271-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{267},
  editor =	{Chung, Kai-Min},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2023/18342},
  URN =		{urn:nbn:de:0030-drops-183421},
  doi =		{10.4230/LIPIcs.ITC.2023.14},
  annote =	{Keywords: learning theory, adversarial machine learning, zero knowledge, Fourier analysis of boolean functions, Goldreich-Levin algorithm, Kushilevitz-Mansour algorithm}
}

Keywords: learning theory, adversarial machine learning, zero knowledge, Fourier analysis of boolean functions, Goldreich-Levin algorithm, Kushilevitz-Mansour algorithm
Collection: 4th Conference on Information-Theoretic Cryptography (ITC 2023)
Issue Date: 2023
Date of publication: 21.07.2023

