License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following:
DOI: 10.4230/LIPIcs.ITCS.2020.45
URN: urn:nbn:de:0030-drops-117308
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2020/11730/
Authors: Haotian Jiang, Jian Li, Daogao Liu, and Sahil Singla
Algorithms and Adaptivity Gaps for Stochastic k-TSP
Abstract
Given a metric (V,d) and a root ∈ V, the classic k-TSP problem is to find a tour originating at the root of minimum length that visits at least k nodes in V. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of k-TSP.
In Stoch-Reward k-TSP, originally defined by Ene-Nagarajan-Saket [Ene et al., 2018], each vertex v in the given metric (V,d) contains a stochastic reward R_v. The goal is to adaptively find a tour of minimum expected length that collects at least reward k; here "adaptively" means that each decision may depend on the outcomes observed so far. Ene et al. gave an O(log k)-approximation adaptive algorithm for this problem and left open whether an O(1)-approximation algorithm exists. We completely resolve their open question and, moreover, give an O(1)-approximation non-adaptive algorithm for Stoch-Reward k-TSP.
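To make the non-adaptive setting concrete, here is a minimal Python sketch (not the paper's algorithm) that estimates by Monte Carlo simulation the expected distance travelled when a fixed visiting order is followed until reward k is collected. The metric `d`, the order `tour`, and the sampler `sample_reward` are illustrative placeholders.

```python
import random

def expected_tour_length(tour, d, root, sample_reward, k, trials=10000):
    """Monte-Carlo estimate of the expected distance travelled by a
    non-adaptive policy: visit vertices in the fixed order `tour`,
    observe each random reward on arrival, and stop as soon as the
    collected reward reaches k.

    tour          -- fixed sequence of vertices (the non-adaptive policy)
    d             -- d(u, v): metric distance between vertices
    root          -- starting vertex
    sample_reward -- sample_reward(v): draws one realization of R_v
    """
    total = 0.0
    for _ in range(trials):
        collected, length, prev = 0.0, 0.0, root
        for v in tour:
            length += d(prev, v)
            prev = v
            collected += sample_reward(v)
            if collected >= k:  # target reached; the walk stops here
                break
        total += length
    return total / trials

# Tiny illustrative instance: three points on a line, Bernoulli(1/2) rewards.
points = {0: 0.0, 1: 1.0, 2: 3.0}
d = lambda u, v: abs(points[u] - points[v])
sample = lambda v: 1.0 if random.random() < 0.5 else 0.0
print(expected_tour_length([1, 2], d, root=0, sample_reward=sample, k=1))
```

On this toy instance the walk stops at vertex 1 with probability 1/2 (length 1) and otherwise continues to vertex 2 (length 3), so the estimate should be close to 2.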
We also introduce and obtain similar results for the Stoch-Cost k-TSP problem. In this problem, each vertex v has a stochastic cost C_v, and the goal is to visit and select at least k vertices while minimizing the expected sum of the tour length and the costs of the selected vertices. Besides being a natural stochastic generalization of k-TSP, this problem is also interesting because it generalizes the Price of Information framework [Singla, 2018] from deterministic probing costs to metric probing costs.
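In symbols, writing pi for the (possibly adaptive) tour, len(pi) for its length, and S for the set of selected vertices, the objective can be stated as follows (the notation here is ours, not taken from the paper):

```latex
% Objective of Stoch-Cost k-TSP: over adaptive tours \pi that select
% a set S of at least k visited vertices, minimize the expected sum
% of tour length and the costs of the selected vertices.
\min_{\pi,\; S:\,|S| \ge k} \; \mathbb{E}\!\left[\mathrm{len}(\pi) + \sum_{v \in S} C_v\right]
```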
Our techniques are based on two crucial ideas: "repetitions" and "critical scaling". In general, replacing a random variable with its expectation leads to very poor results. We show that for our problems, if we truncate the random variables at an ideal threshold, their expected values form a good surrogate. Here, we rely on running several repetitions of our algorithm with the same threshold, and then argue concentration using Freedman's and Jogdeo-Samuels' inequalities. Unfortunately, the ideal threshold depends on how far we are from the target k, which a non-adaptive algorithm does not know. To overcome this barrier, we truncate the random variables at multiple scales and identify a "critical" scale.
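The following is a schematic Python illustration of the critical-scaling idea only, not the paper's algorithm: truncate each reward at geometrically growing thresholds and return the smallest scale at which the truncated expectations already add up to the target. The helper `expected_truncated` is a hypothetical placeholder; in the actual algorithm the scale search interacts with the tour, and the repetitions at a fixed threshold are what make the truncated expectations concentrate.

```python
def critical_scale(expected_truncated, vertices, k, max_scale=60):
    """Schematic critical scaling: truncate rewards at thresholds
    theta = 1, 2, 4, ... and return the smallest ("critical") scale at
    which the truncated expectations E[min(R_v, theta)], summed over
    the candidate vertices, reach the target k.

    expected_truncated -- expected_truncated(v, theta) = E[min(R_v, theta)]
                          (hypothetical helper for this illustration)
    """
    theta = 1.0
    for _ in range(max_scale):
        if sum(expected_truncated(v, theta) for v in vertices) >= k:
            return theta  # truncated expectations suffice at this scale
        theta *= 2.0      # otherwise move to the next, coarser scale
    return theta

# Example: Bernoulli(1/2) rewards give E[min(R_v, theta)] = 0.5 * min(theta, 1).
print(critical_scale(lambda v, t: 0.5 * min(t, 1.0), vertices=range(10), k=3))
```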
BibTeX Entry
@InProceedings{jiang_et_al:LIPIcs:2020:11730,
author = {Haotian Jiang and Jian Li and Daogao Liu and Sahil Singla},
title = {{Algorithms and Adaptivity Gaps for Stochastic k-TSP}},
booktitle = {11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
pages = {45:1--45:25},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-134-4},
ISSN = {1868-8969},
year = {2020},
volume = {151},
editor = {Thomas Vidick},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2020/11730},
URN = {urn:nbn:de:0030-drops-117308},
doi = {10.4230/LIPIcs.ITCS.2020.45},
annote = {Keywords: approximation algorithms, stochastic optimization, travelling salesman problem}
}
Keywords: approximation algorithms, stochastic optimization, travelling salesman problem
Collection: 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)
Issue Date: 2020
Date of publication: 06.01.2020