License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.FORC.2020.8
URN: urn:nbn:de:0030-drops-120247
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2020/12024/


Dwork, Cynthia; Ilvento, Christina; Rothblum, Guy N.; Sur, Pragya

Abstracting Fairness: Oracles, Metrics, and Interpretability

pdf-format: LIPIcs-FORC-2020-8.pdf (0.5 MB)


Abstract

It is well understood that classification algorithms, for example, for deciding on loan applications, cannot be evaluated for fairness without taking context into account. We examine what can be learned from a fairness oracle equipped with an underlying understanding of "true" fairness. The oracle takes as input a (context, classifier) pair satisfying an arbitrary fairness definition, and accepts or rejects the pair according to whether the classifier satisfies the underlying fairness truth. Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle. Since every "truly fair" classifier induces a coarse metric, in which those receiving the same decision are at distance zero from one another and those receiving different decisions are at distance one, this extraction process provides the basis for ensuring a rough form of metric fairness, also known as individual fairness.
Our principal technical result is a higher-fidelity extractor under a mild technical constraint on the weak oracle's conception of fairness. Our framework permits the scenario in which many classifiers, with differing outcomes, may all be considered fair.
Our results have implications for interpretability, a highly desired but poorly defined property of classification systems that endeavors to permit a human arbiter to reject classifiers deemed to be "unfair" or illegitimately derived.
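
The following is a minimal, illustrative sketch (not taken from the paper) of the coarse metric the abstract describes: every "truly fair" classifier induces a metric in which individuals receiving the same decision are at distance zero and those receiving different decisions are at distance one. The names induced_coarse_metric, metric_from_accepted_pair, and fairness_oracle are hypothetical, chosen here for illustration; the paper's actual extraction procedure, which learns the underlying fairness truth from oracle queries, is not reproduced.

    from typing import Callable, Hashable, Optional

    # A classifier maps an individual (any hashable representation) to a decision.
    Classifier = Callable[[Hashable], int]
    Metric = Callable[[Hashable, Hashable], int]

    def induced_coarse_metric(classifier: Classifier) -> Metric:
        """Coarse metric induced by a classifier: distance 0 between individuals
        receiving the same decision, distance 1 between individuals receiving
        different decisions."""
        def d(x: Hashable, y: Hashable) -> int:
            return 0 if classifier(x) == classifier(y) else 1
        return d

    def metric_from_accepted_pair(context, classifier: Classifier,
                                  fairness_oracle) -> Optional[Metric]:
        """Hypothetical use of an oracle that accepts or rejects a
        (context, classifier) pair: an accepted classifier yields a coarse
        metric that can serve as a rough basis for individual (metric) fairness."""
        if fairness_oracle(context, classifier):
            return induced_coarse_metric(classifier)
        return None

    if __name__ == "__main__":
        # Toy example: classify by a single numeric score threshold.
        clf = lambda score: int(score >= 0.5)
        d = induced_coarse_metric(clf)
        print(d(0.9, 0.7), d(0.9, 0.2))  # prints: 0 1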

BibTeX - Entry

@InProceedings{dwork_et_al:LIPIcs:2020:12024,
  author =	{Cynthia Dwork and Christina Ilvento and Guy N. Rothblum and Pragya Sur},
  title =	{{Abstracting Fairness: Oracles, Metrics, and Interpretability}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Aaron Roth},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2020/12024},
  URN =		{urn:nbn:de:0030-drops-120247},
  doi =		{10.4230/LIPIcs.FORC.2020.8},
  annote =	{Keywords: Algorithmic fairness, fairness definitions, causality-based fairness, interpretability, individual fairness, metric fairness}
}

Keywords: Algorithmic fairness, fairness definitions, causality-based fairness, interpretability, individual fairness, metric fairness
Collection: 1st Symposium on Foundations of Responsible Computing (FORC 2020)
Issue Date: 2020
Date of publication: 18.05.2020

