License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/OASIcs.SOSA.2018.4
URN: urn:nbn:de:0030-drops-83045
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2018/8304/


Chakrabarty, Deeparnab; Khanna, Sanjeev

Better and Simpler Error Analysis of the Sinkhorn-Knopp Algorithm for Matrix Scaling



Abstract

Given a non-negative real matrix A, the matrix scaling problem is to determine whether it is possible to scale the rows and columns so that each row and each column sums to its specified target value.
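In symbols, a standard formalization of this problem (not verbatim from the paper) reads as follows:

\[
  \text{Given } A \in \mathbb{R}_{\ge 0}^{n \times m},\; r \in \mathbb{R}_{>0}^{n},\; c \in \mathbb{R}_{>0}^{m}
  \text{ with } \textstyle\sum_i r_i = \sum_j c_j,
\]
\[
  \text{find positive diagonal matrices } X, Y \text{ such that } B = XAY \text{ satisfies }
  \textstyle\sum_{j} B_{ij} = r_i \;\forall i \quad\text{and}\quad \textstyle\sum_{i} B_{ij} = c_j \;\forall j.
\]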

The matrix scaling problem arises in many algorithmic applications, perhaps most notably as a preconditioning step in solving systems of linear equations. One of the most natural, and by now classical, approaches to matrix scaling is the Sinkhorn-Knopp algorithm (also known as the RAS method), in which one alternately scales either all rows or all columns to meet the target values. In addition to being extremely simple and natural, another appeal of this procedure is that it easily lends itself to parallelization. A central question is to understand the rate of convergence of the Sinkhorn-Knopp algorithm.
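For concreteness, here is a minimal Python sketch of the alternating row/column scaling loop. It is illustrative only and not taken from the paper: it assumes a strictly positive matrix and targets of equal total mass, and the ell_1 stopping rule, tolerance, and function name are assumptions made here.

import numpy as np

def sinkhorn_knopp(A, r, c, eps=1e-6, max_iters=10000):
    # Sketch of the Sinkhorn-Knopp (RAS) iteration: alternately rescale all
    # rows and then all columns to hit the target sums. Assumes A > 0 and
    # sum(r) == sum(c); eps and the ell_1 stopping rule are assumptions.
    B = np.array(A, dtype=float)
    for _ in range(max_iters):
        B *= (r / B.sum(axis=1))[:, None]   # scale row i to sum to r[i]
        B *= (c / B.sum(axis=0))[None, :]   # scale column j to sum to c[j]
        # Right after a column step the column sums are exact, so only the
        # row sums can deviate; stop once their ell_1 deviation is below eps.
        if np.abs(B.sum(axis=1) - r).sum() < eps:
            break
    return B

# Example: scale a positive 3x3 matrix to be doubly stochastic.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
ones = np.ones(3)
B = sinkhorn_knopp(A, ones, ones)
print(B.sum(axis=1), B.sum(axis=0))   # both close to [1, 1, 1]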

Specifically, given a suitable error metric to measure deviations from the target values, and an error bound epsilon, how quickly does the Sinkhorn-Knopp algorithm converge to an error below epsilon? While several non-trivial convergence results are known about the Sinkhorn-Knopp algorithm, perhaps somewhat surprisingly, even for natural error metrics such as the ell_1-error or the ell_2-error, this question is not entirely understood.

In this paper, we present an elementary convergence analysis for the Sinkhorn-Knopp algorithm that improves upon the previous best bound. In a nutshell, our approach is to show (i) a simple bound on the number of iterations needed so that the KL-divergence between the current row-sums and the target row-sums drops below a specified threshold delta, and (ii) that for a suitable choice of delta, whenever the KL-divergence is below delta, the ell_1-error or the ell_2-error is below epsilon. The well-known Pinsker's inequality immediately allows us to translate a bound on the KL-divergence into a bound on the ell_1-error. To bound the ell_2-error in terms of the KL-divergence, we establish a new inequality, referred to as the (KL vs ell_1/ell_2) inequality in the paper. This new inequality is a strengthening of Pinsker's inequality that we believe is of independent interest. Our analysis of the ell_2-error significantly improves upon the best previous convergence bound for the ell_2-error.
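For reference, Pinsker's inequality in its standard form (stated here for probability distributions p and q; the normalization of the row-sum vectors used in the paper may differ) gives the translation underlying step (ii) for the ell_1-error:

\[
  \|p - q\|_1 \;\le\; \sqrt{2\,\mathrm{KL}(p \,\|\, q)},
  \qquad\text{so}\qquad
  \mathrm{KL}(p \,\|\, q) \le \delta = \tfrac{\varepsilon^2}{2}
  \;\;\Longrightarrow\;\;
  \|p - q\|_1 \le \varepsilon .
\]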

The idea of studying Sinkhorn-Knopp convergence via KL-divergence is not new and has indeed been previously explored. Our contribution is an elementary, self-contained presentation of this approach and an interesting new inequality that yields a significantly stronger convergence guarantee for the extensively studied ell_2-error.

BibTeX Entry

@InProceedings{chakrabarty_et_al:OASIcs:2018:8304,
  author =	{Deeparnab Chakrabarty and Sanjeev Khanna},
  title =	{{Better and Simpler Error Analysis of the Sinkhorn-Knopp Algorithm for Matrix Scaling}},
  booktitle =	{1st Symposium on Simplicity in Algorithms (SOSA 2018)},
  pages =	{4:1--4:11},
  series =	{OpenAccess Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-064-4},
  ISSN =	{2190-6807},
  year =	{2018},
  volume =	{61},
  editor =	{Raimund Seidel},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2018/8304},
  URN =		{urn:nbn:de:0030-drops-83045},
  doi =		{10.4230/OASIcs.SOSA.2018.4},
  annote =	{Keywords: Matrix Scaling, Entropy Minimization, KL Divergence Inequalities}
}

Keywords: Matrix Scaling, Entropy Minimization, KL Divergence Inequalities
Collection: 1st Symposium on Simplicity in Algorithms (SOSA 2018)
Issue Date: 2018
Date of publication: 05.01.2018

