License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.OPODIS.2020.8
URN: urn:nbn:de:0030-drops-134931
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2021/13493/


Boussetta, Amine ; El-Mhamdi, El-Mahdi ; Guerraoui, Rachid ; Maurer, Alexandre ; Rouault, Sébastien

AKSEL: Fast Byzantine SGD



Abstract

Modern machine learning architectures distinguish servers from workers. Typically, a d-dimensional model is hosted by a server and trained by n workers using a distributed stochastic gradient descent (SGD) optimization scheme. At each SGD step, the goal is to estimate the gradient of a cost function. The simplest way to do so is to average the gradients estimated by the workers. However, averaging is not resilient to even a single Byzantine worker. Many alternative gradient aggregation rules (GARs) have recently been proposed to tolerate a maximum number f of Byzantine workers. These GARs differ in (1) their time complexity, (2) the maximal number of Byzantine workers despite which convergence can still be ensured (breakdown point), and (3) their accuracy, which can be captured by (3.1) their angular error, namely the angle with the true gradient, and (3.2) their ability to aggregate full gradients. In particular, many GARs do not output full gradients, since they operate on each dimension separately; the resulting coordinate-wise blended gradient yields low accuracy in practical situations where the number s of workers that are actually Byzantine in an execution is small (s ≪ f).
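To make the fragility of plain averaging concrete, here is a minimal numerical sketch (not taken from the paper; the dimensions, worker count, and attack vector are illustrative assumptions): a single Byzantine worker sending an arbitrarily large vector dominates the mean of the n submitted gradients.

# Minimal sketch (illustrative assumptions only): one Byzantine worker breaks averaging.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 10                                                  # model dimension, number of workers
true_grad = np.ones(d)

honest = true_grad + 0.1 * rng.standard_normal((n - 1, d))    # n-1 honest, noisy gradient estimates
byzantine = -1e6 * np.ones((1, d))                            # one arbitrarily bad worker

grads = np.vstack([honest, byzantine])
print(grads.mean(axis=0))                                     # the average is dominated by the attacker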
We propose Aksel, a new scalable median-based GAR with optimal time complexity (O(nd)), optimal breakdown point (n > 2f), and the lowest upper bound on the expected angular error (O(√d)) among full gradient approaches. We also study the actual angular error of Aksel when the gradient distribution is normal and show that it only grows in O(√(d log n)), which is the first upper bound logarithmic in the number of workers n ever proven under an optimal breakdown point. We also report on an empirical evaluation of Aksel on various classification tasks, comparing it to alternative GARs against state-of-the-art attacks. Aksel is the only GAR reaching top accuracy when there are actually no or few Byzantine workers, while maintaining a good defense even in the extreme case (s = f). For simplicity of presentation, we consider a scheme with a single server. However, as we explain in the paper, Aksel can also easily be adapted to multi-server architectures that tolerate Byzantine behavior from a fraction of the servers.
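The precise Aksel rule and its analysis are given in the paper; as a rough sketch of the median-based, full-gradient idea only (the selection of the majority of gradients closest to the coordinate-wise median, and the helper name below, are assumptions for illustration, not necessarily the authors' exact rule), an aggregation of this flavor can be written as:

# Rough sketch of a median-based, full-gradient aggregation (NOT the exact Aksel rule).
import numpy as np

def median_filtered_average(grads):
    """grads: (n, d) array of worker gradients; returns a (d,) aggregated gradient."""
    n = grads.shape[0]
    med = np.median(grads, axis=0)                  # coordinate-wise median, O(nd)
    dists = np.linalg.norm(grads - med, axis=1)     # each worker's distance to the median
    keep = np.argsort(dists)[: (n + 1) // 2]        # keep the majority of workers closest to it
    return grads[keep].mean(axis=0)                 # average whole (full) gradients only

Because a rule of this kind keeps and averages whole worker gradients rather than blending coordinates from different workers, it remains a full gradient approach, and its dominant cost is the O(nd) median and distance computation, in line with the complexity stated in the abstract.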

BibTeX - Entry

@InProceedings{boussetta_et_al:LIPIcs:2021:13493,
  author =	{Amine Boussetta and El-Mahdi El-Mhamdi and Rachid Guerraoui and Alexandre Maurer and S{\'e}bastien Rouault},
  title =	{{AKSEL: Fast Byzantine SGD}},
  booktitle =	{24th International Conference on Principles of Distributed Systems (OPODIS 2020)},
  pages =	{8:1--8:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-176-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{184},
  editor =	{Quentin Bramas and Rotem Oshman and Paolo Romano},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2021/13493},
  URN =		{urn:nbn:de:0030-drops-134931},
  doi =		{10.4230/LIPIcs.OPODIS.2020.8},
  annote =	{Keywords: Machine learning, Stochastic gradient descent, Byzantine failures}
}

Keywords: Machine learning, Stochastic gradient descent, Byzantine failures
Collection: 24th International Conference on Principles of Distributed Systems (OPODIS 2020)
Issue Date: 2021
Date of publication: 25.01.2021

