License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.FORC.2022.5
URN: urn:nbn:de:0030-drops-165280
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/16528/


Chowdhury, Sadia ; Urner, Ruth

Robustness Should Not Be at Odds with Accuracy



Abstract

The phenomenon of adversarial examples in deep learning models has caused substantial concern over their reliability and trustworthiness: in many instances, an imperceptible perturbation can falsely flip a neural network's prediction. Applied research in this area has mostly focused on developing novel adversarial attack strategies or on building better defenses against them. It has repeatedly been pointed out that adversarial robustness may be in conflict with requirements for high accuracy. In this work, we take a more principled look at modeling the phenomenon of adversarial examples. We argue that deciding whether a model's label change under a small perturbation is justified should be done in compliance with the underlying data-generating process. Through a series of formal constructions, systematically analyzing the relation between standard Bayes classifiers and robust-Bayes classifiers, we make the case for adversarial robustness as a locally adaptive measure. We propose a novel way of defining such a locally adaptive robust loss, show that it has a natural empirical counterpart, and develop the resulting algorithmic guidance in the form of a data-informed adaptive robustness radius. We prove that our adaptive robust data augmentation maintains consistency of 1-nearest-neighbor classification under deterministic labels, and we thereby argue that robustness should not be at odds with accuracy.
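
To make the approach concrete, below is a minimal Python sketch of one plausible instantiation of a data-informed adaptive robustness radius and the resulting robust data augmentation. The specific rule it uses (half the distance to the nearest differently-labeled training point as the local radius, then sampling same-labeled points inside that ball) is an illustrative assumption, not the authors' exact construction; the names adaptive_radius and robust_augment are hypothetical.

import numpy as np

def adaptive_radius(X, y):
    """Data-informed local robustness radius per training point:
    half the distance to the nearest differently-labeled point.
    Assumes X, y are NumPy arrays and at least two labels occur."""
    radii = np.empty(len(X))
    for i, (x, label) in enumerate(zip(X, y)):
        other = X[y != label]
        radii[i] = 0.5 * np.min(np.linalg.norm(other - x, axis=1))
    return radii

def robust_augment(X, y, radii, n_per_point=5, seed=0):
    """Augment each point with perturbations sampled uniformly in
    direction within its local robustness ball, keeping its label
    (hypothetical augmentation scheme for illustration)."""
    rng = np.random.default_rng(seed)
    X_aug, y_aug = [X], [y]
    for x, label, r in zip(X, y, radii):
        d = rng.normal(size=(n_per_point, X.shape[1]))
        d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit directions
        scale = r * rng.uniform(size=(n_per_point, 1))  # stay within radius r
        X_aug.append(x + scale * d)
        y_aug.append(np.full(n_per_point, label))
    return np.vstack(X_aug), np.concatenate(y_aug)

Because the radius shrinks near points of the opposite class, the augmented labels never cross the (deterministic) class boundary, which is the intuition behind the paper's consistency claim for 1-nearest-neighbor classification.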

BibTeX Entry

@InProceedings{chowdhury_et_al:LIPIcs.FORC.2022.5,
  author =	{Chowdhury, Sadia and Urner, Ruth},
  title =	{{Robustness Should Not Be at Odds with Accuracy}},
  booktitle =	{3rd Symposium on Foundations of Responsible Computing (FORC 2022)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-226-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{218},
  editor =	{Celis, L. Elisa},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2022/16528},
  URN =		{urn:nbn:de:0030-drops-165280},
  doi =		{10.4230/LIPIcs.FORC.2022.5},
  annote =	{Keywords: Statistical Learning Theory, Bayes optimal classifier, adversarial perturbations, adaptive robust loss}
}

Keywords: Statistical Learning Theory, Bayes optimal classifier, adversarial perturbations, adaptive robust loss
Collection: 3rd Symposium on Foundations of Responsible Computing (FORC 2022)
Issue Date: 2022
Date of publication: 15.07.2022

