License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITCS.2023.60
URN: urn:nbn:de:0030-drops-175635
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2023/17563/
Gopalan, Parikshit ;
Hu, Lunjia ;
Kim, Michael P. ;
Reingold, Omer ;
Wieder, Udi
Loss Minimization Through the Lens Of Outcome Indistinguishability
Abstract
We present a new perspective on loss minimization and the recent notion of Omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and a hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection, compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability. For a set of statistical tests - based on a collection of losses and a hypothesis class - a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature's true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses.
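To make the decomposition concrete, here is a minimal sketch (ours, not the paper's) of the two audits that Loss OI reduces to: a calibration check over the predictor's level sets, and a multiaccuracy check against a class of test functions derived from the losses and hypotheses. The names audit_calibration and audit_multiaccuracy, the tests argument, and the binning granularity and tolerances are all hypothetical choices for illustration.

    import numpy as np

    def audit_calibration(p, y, n_bins=10, tol=0.05):
        # Approximate calibration: bin predictions into level sets and
        # compare the mean prediction to the mean outcome on each bin.
        bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
        for b in range(n_bins):
            mask = bins == b
            if not mask.any():
                continue
            if abs(p[mask].mean() - y[mask].mean()) > tol:
                return False  # calibration violated on this level set
        return True

    def audit_multiaccuracy(p, y, x, tests, tol=0.05):
        # Multiaccuracy: the residual y - p(x) must be nearly uncorrelated
        # with every test function c in the (hypothetical) derived class.
        residual = y - p
        for c in tests:
            if abs(np.mean(c(x) * residual)) > tol:
                return False, c  # c is a distinguishing test (violation witness)
        return True, None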
This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
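Continuing the sketch above, one natural reading of a calibrated multiaccuracy learner is a boosting-style loop that alternates a multiaccuracy correction against a violating test with a recalibration pass over the predictor's level sets. This is a hedged reconstruction under the assumptions of the previous block, not the paper's own pseudocode; calibrated_multiaccuracy, eta, and rounds are hypothetical names and parameters.

    def calibrated_multiaccuracy(x, y, tests, eta=0.1, rounds=100, tol=0.05):
        # Start from the constant predictor and alternate two phases.
        p = np.full(len(y), y.mean())
        for _ in range(rounds):
            ok, c = audit_multiaccuracy(p, y, x, tests, tol)
            if ok:
                break
            # Phase 1: multiaccuracy step, shifting predictions along the
            # violating test c in the direction that reduces the correlation.
            corr = np.mean(c(x) * (y - p))
            p = np.clip(p + eta * np.sign(corr) * c(x), 0.0, 1.0)
            # Phase 2: recalibration step, replacing each prediction with the
            # empirical outcome mean on its (discretized) level set.
            levels = np.round(p * 10) / 10
            for v in np.unique(levels):
                mask = levels == v
                p[mask] = y[mask].mean()
        return p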
BibTeX - Entry
@InProceedings{gopalan_et_al:LIPIcs.ITCS.2023.60,
author = {Gopalan, Parikshit and Hu, Lunjia and Kim, Michael P. and Reingold, Omer and Wieder, Udi},
title = {{Loss Minimization Through the Lens Of Outcome Indistinguishability}},
booktitle = {14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
pages = {60:1--60:20},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-263-1},
ISSN = {1868-8969},
year = {2023},
volume = {251},
editor = {Tauman Kalai, Yael},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2023/17563},
URN = {urn:nbn:de:0030-drops-175635},
doi = {10.4230/LIPIcs.ITCS.2023.60},
annote = {Keywords: Loss Minimization, Indistinguishability}
}
Keywords: Loss Minimization, Indistinguishability
Collection: 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)
Issue Date: 2023
Date of publication: 01.02.2023