License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following DOI: 10.4230/LIPIcs.ICDT.2022.6
URN: urn:nbn:de:0030-drops-158809
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/15880/


Fan, Austen Z. ; Koutris, Paraschos

Certifiable Robustness for Nearest Neighbor Classifiers

pdf-format:
LIPIcs-ICDT-2022-6.pdf (0.9 MB)


Abstract

ML models are typically trained using large datasets of high quality. However, training datasets often contain inconsistent or incomplete data. To tackle this issue, one solution is to develop algorithms that can check whether a prediction of a model is certifiably robust. Given a learning algorithm that produces a classifier and given an example at test time, a classification outcome is certifiably robust if it is predicted by every model trained across all possible worlds (repairs) of the uncertain (inconsistent) dataset. This notion of robustness falls naturally under the framework of certain answers. In this paper, we study the complexity of certifying robustness for a simple but widely deployed classification algorithm, k-Nearest Neighbors (k-NN). Our main focus is on inconsistent datasets when the integrity constraints are functional dependencies (FDs). For this setting, we establish a dichotomy in the complexity of certifying robustness w.r.t. the set of FDs: the problem either admits a polynomial time algorithm, or it is coNP-hard. Additionally, we exhibit a similar dichotomy for the counting version of the problem, where the goal is to count the number of possible worlds that predict a certain label. As a byproduct of our study, we also establish the complexity of a problem related to finding an optimal subset repair that may be of independent interest.
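
To make the notion concrete, the following is a minimal brute-force sketch (in Python) of certifying robustness for k-NN over all subset repairs of an inconsistent dataset. It is not the paper's polynomial-time algorithm; the repair enumeration is exponential in general, which is exactly why the complexity dichotomy matters. The single key-based FD, the toy data, and all function names are illustrative assumptions, not material from the paper.

from collections import Counter

# Brute-force sketch of certifiable robustness for k-NN (illustrative only).
# Assumptions (not from the paper): each training tuple is (key, features, label),
# the only integrity constraint is the FD key -> (features, label), so a subset
# repair keeps exactly one tuple per key, and the input is small enough to
# enumerate every repair (exponentially many in general).

def knn_label(repair, x, k):
    """Predict the label of x by majority vote over its k nearest neighbors."""
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    nearest = sorted(repair, key=lambda t: dist(t[1], x))[:k]
    return Counter(label for _, _, label in nearest).most_common(1)[0][0]

def all_repairs(dataset):
    """Enumerate all repairs: pick one tuple from each block of FD-conflicting tuples."""
    blocks = {}
    for t in dataset:
        blocks.setdefault(t[0], []).append(t)
    repairs = [[]]
    for block in blocks.values():
        repairs = [r + [choice] for r in repairs for choice in block]
    return repairs

def certifiably_robust(dataset, x, k):
    """Return (True, labels) iff every possible world predicts the same label for x."""
    labels = {knn_label(r, x, k) for r in all_repairs(dataset)}
    return len(labels) == 1, labels

# Hypothetical toy data: keys 1 and 2 each have two conflicting tuples.
data = [
    (1, (0.0, 0.0), "A"), (1, (0.1, 0.0), "A"),
    (2, (1.0, 1.0), "B"), (2, (0.9, 1.1), "B"),
    (3, (0.0, 1.0), "A"),
]
print(certifiably_robust(data, (0.05, 0.1), k=1))  # (True, {'A'}): robust under every repair

The same enumeration also illustrates the counting version studied in the paper: instead of checking whether the set of predicted labels is a singleton, one tallies how many repairs predict each label.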

BibTeX - Entry

@InProceedings{fan_et_al:LIPIcs.ICDT.2022.6,
  author =	{Fan, Austen Z. and Koutris, Paraschos},
  title =	{{Certifiable Robustness for Nearest Neighbor Classifiers}},
  booktitle =	{25th International Conference on Database Theory (ICDT 2022)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-223-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{220},
  editor =	{Olteanu, Dan and Vortmeier, Nils},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2022/15880},
  URN =		{urn:nbn:de:0030-drops-158809},
  doi =		{10.4230/LIPIcs.ICDT.2022.6},
  annote =	{Keywords: Inconsistent databases, k-NN classification, certifiable robustness}
}

Keywords: Inconsistent databases, k-NN classification, certifiable robustness
Collection: 25th International Conference on Database Theory (ICDT 2022)
Issue Date: 2022
Date of publication: 19.03.2022
Supplementary Material: Audiovisual (Video of the Presentation): https://doi.org/10.5446/57492

