License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.FORC.2020.9
URN: urn:nbn:de:0030-drops-120255
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2020/12025/
Braverman, Mark; Garg, Sumegha
The Role of Randomness and Noise in Strategic Classification
Abstract
We investigate the problem of designing optimal classifiers in the "strategic classification" setting, where classification is part of a game in which players can modify their features, at some cost, to attain a more favorable classification outcome. The problem has previously been considered from a learning-theoretic perspective and from an algorithmic fairness perspective.
Our main contributions include
- Showing that if the objective is to maximize the efficiency of the classification process (defined as the accuracy of the outcome minus the sunk cost incurred by qualified players manipulating their features to gain a better outcome), then using randomized classifiers (that is, ones where the probability that a given feature vector is accepted by the classifier is strictly between 0 and 1) is necessary; see the illustrative sketch after this list.
- Showing that in many natural cases, the optimal solution (in terms of efficiency) has the structure where players never change their feature vectors (the randomized classifier is structured so that the gain in the probability of being classified as a "1" does not justify the expense of changing one's features).
- Observing that randomized classification is not a stable best response from the classifier's viewpoint, and that the classifier does not benefit from randomized classifiers without creating instability in the system.
- Showing that in some cases, a noisier signal leads to better equilibrium outcomes, improving both accuracy and fairness when more than one subpopulation with different feature-adjustment costs is involved. This is particularly interesting from a policy perspective, since it is hard to force institutions to stick to a particular randomized classification strategy (especially in the context of a market with multiple classifiers), but it is possible to alter the information environment to make the feature signals inherently noisier.
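To make the efficiency objective concrete, the following small Python sketch is offered as an illustration only: the one-dimensional feature model, the two player types, and all numeric values are assumptions made here, not the paper's model or results. It computes each player's best response to a classifier and the resulting efficiency (accuracy minus the sunk cost paid by the qualified player), contrasting a deterministic threshold, which makes the qualified player move and pay a sunk cost, with a flat randomized acceptance rule under which no one manipulates, in the spirit of the second bullet above.

# Illustrative sketch only: a toy one-dimensional strategic-classification game.
# The model, the two player types, and all numbers below are assumptions made
# for illustration; they are not taken from the paper.

QUALIFIED_FEATURE = 1.0    # natural feature of the qualified player (true label 1)
UNQUALIFIED_FEATURE = 0.0  # natural feature of the unqualified player (true label 0)
BENEFIT = 1.0              # value of being classified as a "1"
COST_PER_UNIT = 0.8        # cost of shifting one's feature by one unit

def best_response(x0, accept_prob):
    """Feature the player reports: maximizes acceptance value minus moving cost."""
    candidates = [i / 100 for i in range(201)]  # grid search over features in [0, 2]
    return max(candidates,
               key=lambda x: accept_prob(x) * BENEFIT - COST_PER_UNIT * abs(x - x0))

def efficiency(accept_prob):
    """Accuracy over the two (equally likely) types minus the qualified player's sunk cost."""
    xq = best_response(QUALIFIED_FEATURE, accept_prob)
    xu = best_response(UNQUALIFIED_FEATURE, accept_prob)
    accuracy = 0.5 * accept_prob(xq) + 0.5 * (1.0 - accept_prob(xu))
    sunk_cost = 0.5 * COST_PER_UNIT * abs(xq - QUALIFIED_FEATURE)
    return accuracy - sunk_cost

def deterministic(x):
    # Hard threshold at 1.5: the qualified player moves there (paying a sunk cost),
    # while mimicking is too expensive for the unqualified player (1.2 > 1.0 benefit).
    return 1.0 if x >= 1.5 else 0.0

def randomized(x):
    # Acceptance probability rises more slowly (0.75 per unit) than the cost of
    # moving (0.8 per unit), so neither player changes their feature.
    return min(max(0.75 * x, 0.0), 1.0)

print("deterministic threshold efficiency:", efficiency(deterministic))  # 0.8 in this toy setup
print("randomized classifier efficiency:  ", efficiency(randomized))     # 0.875 in this toy setup

In this particular toy instance the randomized rule happens to score higher because it sacrifices a little accuracy on the qualified player but avoids any sunk manipulation cost; the paper's general results about when randomization is necessary are not captured by this example.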
BibTeX - Entry
@InProceedings{braverman_et_al:LIPIcs:2020:12025,
author = {Mark Braverman and Sumegha Garg},
title = {{The Role of Randomness and Noise in Strategic Classification}},
booktitle = {1st Symposium on Foundations of Responsible Computing (FORC 2020)},
pages = {9:1--9:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-142-9},
ISSN = {1868-8969},
year = {2020},
volume = {156},
editor = {Aaron Roth},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2020/12025},
URN = {urn:nbn:de:0030-drops-120255},
doi = {10.4230/LIPIcs.FORC.2020.9},
annote = {Keywords: Strategic classification, noisy features, randomized classification, fairness}
}
Keywords: Strategic classification, noisy features, randomized classification, fairness
Collection: 1st Symposium on Foundations of Responsible Computing (FORC 2020)
Issue Date: 2020
Date of publication: 18.05.2020