License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.STACS.2016.3
URN: urn:nbn:de:0030-drops-57044
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2016/5704/
Vassilevska Williams, Virginia
Fine-Grained Algorithms and Complexity (Invited Talk)
Abstract
A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in t(n) time but not in t(n)^{1-epsilon} time for any epsilon > 0. The main challenge is to determine where in this hierarchy various natural and important problems lie. Over the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of the classical algorithms devised for them in the 1950s and 1960s.
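For concreteness, one standard formalization of this claim is the deterministic time hierarchy theorem (the abstract does not name a specific theorem, so this choice of reference point is an assumption): for every time-constructible function t,

    \mathrm{DTIME}\bigl( o( t(n) / \log t(n) ) \bigr) \subsetneq \mathrm{DTIME}\bigl( t(n) \bigr).

In particular, for any fixed \varepsilon > 0 and, say, polynomially bounded t, we have t(n)^{1-\varepsilon} \log t(n) = o(t(n)), so some problem is solvable in t(n) time but not in t(n)^{1-\varepsilon} time.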
Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been the NP-hardness reduction, basing hardness on P != NP. However, when we care about exact running times (as opposed to merely polynomial vs. non-polynomial), NP-hardness is not applicable, especially if the problem can already be solved in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. The goal of these reductions is as follows. Suppose problem A is solvable in a(n) time and problem B in b(n) time, and no a(n)^{1-epsilon}-time or b(n)^{1-epsilon}-time algorithms are known for A and B, respectively, for any epsilon > 0. If A is fine-grained reducible to B (with respect to a(n) and b(n)), then a b(n)^{1-epsilon}-time algorithm for B (for any epsilon > 0) implies an a(n)^{1-epsilon'}-time algorithm for A (for some epsilon' > 0). Now, mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require t(n)^{1-o(1)} time for some t(n), and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and sometimes even to equivalence classes.
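The notion of fine-grained reducibility used above can be made precise as follows (this formalization follows the standard definition in the fine-grained complexity literature; the abstract itself leaves it informal). Problem (A, a) fine-grained reduces to (B, b) if for every \varepsilon > 0 there is a \delta > 0 and an algorithm for A that, on inputs of size n, runs in at most a(n)^{1-\delta} time while making calls to an oracle for B on instances of sizes n_1, \ldots, n_k satisfying

    \sum_{i=1}^{k} b(n_i)^{1-\varepsilon} \le a(n)^{1-\delta}.

Replacing each oracle call by a b(m)^{1-\varepsilon}-time algorithm for B then yields an a(n)^{1-\delta}-time algorithm for A, which is exactly the implication stated above (with \varepsilon' = \delta).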
In this talk I will give an overview of the current progress in this area of study, and will highlight some new exciting developments.
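As a concrete illustration of steps (1) and (2), here is a minimal sketch (an illustrative example in Python, not taken from the talk) of the classical split-and-list reduction from CNF-SAT, conjectured under the Strong Exponential Time Hypothesis (SETH) to require 2^{(1-o(1))n} time, to the Orthogonal Vectors (OV) problem. Any O(N^{2-epsilon})-time OV algorithm plugged into this reduction would give a 2^{(1-epsilon/2)n} poly(m)-time SAT algorithm and thus refute SETH.

from itertools import product

def half_vectors(clauses, half_vars):
    # One 0/1 vector per assignment to this half of the variables:
    # coordinate j is 0 iff the partial assignment already satisfies clause j.
    vecs = []
    for bits in product([False, True], repeat=len(half_vars)):
        assignment = dict(zip(half_vars, bits))
        vec = tuple(
            0 if any((lit > 0) == assignment[abs(lit)]
                     for lit in clause if abs(lit) in assignment)
            else 1
            for clause in clauses
        )
        vecs.append(vec)
    return vecs

def satisfiable_via_ov(clauses, n):
    # clauses: list of clauses, each a list of nonzero ints in DIMACS style
    # (+v = variable v, -v = negated variable v), variables numbered 1..n.
    mid = n // 2
    left = half_vectors(clauses, list(range(1, mid + 1)))
    right = half_vectors(clauses, list(range(mid + 1, n + 1)))
    # An orthogonal pair means every clause is satisfied by one of the two
    # half-assignments. The brute-force check below takes ~(2^{n/2})^2 = 2^n
    # steps; any truly subquadratic OV algorithm substituted here would
    # beat exhaustive search over all 2^n assignments.
    return any(all(x * y == 0 for x, y in zip(u, v))
               for u in left for v in right)

# Example: (x1 OR NOT x2) AND (x2 OR x3) over variables 1..3.
print(satisfiable_via_ov([[1, -2], [2, 3]], n=3))  # True

The function and variable names here are hypothetical; the point is only that the reduction itself runs in 2^{n/2} poly(m) time, so the OV instance, of size N = 2^{n/2}, dominates the cost.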
BibTeX - Entry
@InProceedings{vassilevskawilliams:LIPIcs:2016:5704,
author = {Virginia Vassilevska Williams},
title = {{Fine-Grained Algorithms and Complexity (Invited Talk)}},
booktitle = {33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
pages = {3:1--3:1},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-001-9},
ISSN = {1868-8969},
year = {2016},
volume = {47},
editor = {Nicolas Ollinger and Heribert Vollmer},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2016/5704},
URN = {urn:nbn:de:0030-drops-57044},
doi = {10.4230/LIPIcs.STACS.2016.3},
annote = {Keywords: algorithms, complexity, polynomial time problems}
}
Keywords: algorithms, complexity, polynomial time problems
Collection: 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)
Issue Date: 2016
Date of publication: 16.02.2016