License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.OPODIS.2021.4
URN: urn:nbn:de:0030-drops-157793
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/15779/


Maurer, Alexandre

Arbitrarily Accurate Aggregation Scheme for Byzantine SGD

pdf-format: LIPIcs-OPODIS-2021-4.pdf (0.7 MB)


Abstract

A very common optimization technique in Machine Learning is Stochastic Gradient Descent (SGD). SGD can easily be distributed: several workers try to estimate the gradient of a loss function, and a central parameter server gathers these estimates. When all workers behave correctly, the more workers we have, the more accurate the gradient estimate is. We call this the Arbitrary Aggregation Accuracy (AAA) property.
However, in practice, some workers may be Byzantine (i.e., have an arbitrary behavior). Interestingly, when a fixed fraction of workers is assumed to be Byzantine (e.g. 20%), no existing aggregation scheme has the AAA property. In this paper, we propose the first aggregation scheme that has this property despite a fixed fraction of Byzantine workers (less than 50%). We theoretically prove this property, and then illustrate it with simulations.
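Illustration of the setting (not the paper's scheme)

The abstract describes the distributed SGD setting: workers send gradient estimates to a parameter server, and a fraction of them may be Byzantine. The short Python sketch below only illustrates this setting under assumed parameters; it contrasts plain averaging with a coordinate-wise median, a standard robust baseline, and is not the aggregation scheme proposed in the paper. All names, the attack model, and the numeric values are illustrative assumptions.

    # Minimal sketch of Byzantine gradient aggregation (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def worker_gradients(true_grad, n_workers, n_byzantine, noise_std=1.0):
        """Honest workers return noisy estimates of the true gradient;
        Byzantine workers return arbitrary vectors (here: a large constant)."""
        n_honest = n_workers - n_byzantine
        honest = true_grad + noise_std * rng.standard_normal((n_honest, true_grad.size))
        byzantine = np.full((n_byzantine, true_grad.size), 100.0)  # arbitrary adversarial value
        return np.vstack([honest, byzantine])

    def aggregate_mean(grads):
        # Naive averaging: a single Byzantine worker can shift the result arbitrarily.
        return grads.mean(axis=0)

    def aggregate_median(grads):
        # Coordinate-wise median: a classic robust baseline when Byzantine workers
        # are a minority (again, this is not the scheme from the paper).
        return np.median(grads, axis=0)

    true_grad = np.array([1.0, -2.0, 0.5])
    grads = worker_gradients(true_grad, n_workers=50, n_byzantine=10)

    print("mean   :", aggregate_mean(grads))    # pulled far from the true gradient
    print("median :", aggregate_median(grads))  # stays close to the true gradient

With 20% Byzantine workers, the plain mean is dominated by the adversarial vectors, while the median stays near the true gradient; the paper's contribution is an aggregation rule whose accuracy additionally keeps improving as the number of workers grows (the AAA property), which simple robust baselines do not guarantee.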

BibTeX - Entry

@InProceedings{maurer:LIPIcs.OPODIS.2021.4,
  author =	{Maurer, Alexandre},
  title =	{{Arbitrarily Accurate Aggregation Scheme for Byzantine SGD}},
  booktitle =	{25th International Conference on Principles of Distributed Systems (OPODIS 2021)},
  pages =	{4:1--4:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-219-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{217},
  editor =	{Bramas, Quentin and Gramoli, Vincent and Milani, Alessia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2022/15779},
  URN =		{urn:nbn:de:0030-drops-157793},
  doi =		{10.4230/LIPIcs.OPODIS.2021.4},
  annote =	{Keywords: distributed machine learning, Byzantine failures, stochastic gradient descent}
}

Keywords: distributed machine learning, Byzantine failures, stochastic gradient descent
Collection: 25th International Conference on Principles of Distributed Systems (OPODIS 2021)
Issue Date: 2022
Date of publication: 28.02.2022
Supplementary Material: Software (Source Code): https://tinyurl.com/sim-aaa-paper

