License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/DagSemProc.08492.3
URN: urn:nbn:de:0030-drops-18828
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2009/1882/


Elad, Michael ; Yavneh, Irad

A Weighted Average of Sparse Representations is Better than the Sparsest One Alone

pdf-format:
08492.EladMichael.Paper.1882.pdf (0.4 MB)


Abstract

Cleaning of noise from signals is a classical and long-studied problem in signal
processing. Algorithms for this task necessarily rely on a priori knowledge about the signal characteristics, along with information about the noise properties. For signals that admit sparse representations over a known dictionary, a commonly used denoising technique is to seek the sparsest representation that synthesizes a signal close enough to the corrupted one. As this problem is computationally intractable in general, approximation methods, such as greedy pursuit algorithms, are often employed.
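As a rough illustration of the greedy pursuit approach mentioned above, the following is a minimal sketch of Orthogonal Matching Pursuit (OMP) denoising in Python. The function name, the stopping rule (residual energy at the expected noise energy), and the parameter choices are illustrative assumptions, not taken from the paper; the dictionary D is assumed to have unit-norm columns and the noise to be i.i.d. Gaussian with known standard deviation sigma.

import numpy as np

def omp_denoise(D, y, sigma, max_atoms=None):
    """Greedily build a sparse representation of y over D, stopping once the
    residual drops to the noise level; return the denoised signal D @ x."""
    n, m = D.shape
    max_atoms = max_atoms or n
    support, residual = [], y.copy()
    x = np.zeros(m)
    # Stop when the residual energy reaches the expected noise energy.
    while np.linalg.norm(residual) > sigma * np.sqrt(n) and len(support) < max_atoms:
        # Greedy step: pick the atom most correlated with the residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:
            break
        support.append(k)
        # Re-fit the coefficients on the current support by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D @ x
    return D @ x, x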
This line of reasoning suggests that detecting the sparsest representation is the key to successful denoising. Does this mean that other competitive, slightly inferior sparse representations are meaningless? Suppose we are given a group of competing sparse representations, each claiming to explain the signal differently. Can these be fused somehow to yield a better result? Surprisingly, the answer is positive: merging these representations can form a more accurate, yet dense, estimate of the original signal, even when the latter is known to be sparse.
In this paper we demonstrate this behavior, propose a practical way to generate
such a collection of representations by randomizing the Orthogonal Matching Pursuit (OMP) algorithm, and provide a clear analytical justification for the superiority of the associated Randomized OMP (RandOMP) algorithm. We show that while the Maximum A-Posteriori Probability (MAP) estimator aims to find and use the sparsest representation, the Minimum Mean-Squared-Error (MMSE) estimator leads to a fusion of representations to form its result. Thus, by working with an appropriate mixture of candidate representations, we surpass the MAP estimate and tend towards the MMSE one, thereby obtaining a far more accurate estimation, especially at medium and low SNR.
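The sketch below illustrates the RandOMP idea under simplifying assumptions: OMP's deterministic argmax selection is replaced by a random draw, and the resulting denoised signals are fused by a plain average. The atom-selection probability used here (proportional to the exponent of the squared correlation with the residual, scaled by a free temperature parameter c) is a simplified stand-in for the exact selection weights derived in the paper.

import numpy as np

def randomp_denoise(D, y, sigma, runs=50, c=1.0, seed=0):
    """Average many randomized-OMP runs; the average is a dense estimate
    that tends towards the MMSE solution rather than the MAP one."""
    rng = np.random.default_rng(seed)
    n, m = D.shape
    estimates = []
    for _ in range(runs):
        support, residual = [], y.copy()
        x = np.zeros(m)
        while np.linalg.norm(residual) > sigma * np.sqrt(n) and len(support) < n:
            corr = D.T @ residual
            logits = c * corr**2
            if support:
                logits[support] = -np.inf  # never re-pick a chosen atom
            p = np.exp(logits - np.max(logits))
            p /= p.sum()
            # Randomized selection replaces OMP's deterministic argmax.
            k = int(rng.choice(m, p=p))
            support.append(k)
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            x[:] = 0.0
            x[support] = coeffs
            residual = y - D @ x
        estimates.append(D @ x)
    # Fuse the competing sparse representations by plain averaging;
    # the result is dense even though each individual run is sparse.
    return np.mean(estimates, axis=0)

Note that each individual run produces a sparse representation, while the averaged output is dense; this mirrors the paper's observation that the MMSE estimate is a fusion of representations rather than any single sparse one.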

BibTeX - Entry

@InProceedings{elad_et_al:DagSemProc.08492.3,
  author =	{Elad, Michael and Yavneh, Irad},
  title =	{{A Weighted Average of Sparse Representations is Better than the Sparsest One Alone}},
  booktitle =	{Structured Decompositions and Efficient Algorithms},
  pages =	{1--35},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{8492},
  editor =	{Stephan Dahlke and Ingrid Daubechies and Michael Elad and Gitta Kutyniok and Gerd Teschke},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2009/1882},
  URN =		{urn:nbn:de:0030-drops-18828},
  doi =		{10.4230/DagSemProc.08492.3},
  annote =	{Keywords: Sparse representations, MMSE, MAP, matching pursuit}
}

Keywords: Sparse representations, MMSE, MAP, matching pursuit
Collection: 08492 - Structured Decompositions and Efficient Algorithms
Issue Date: 2009
Date of publication: 24.02.2009

