License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ICALP.2020.19
URN: urn:nbn:de:0030-drops-124269
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2020/12426/


Bringmann, Karl ; Fischer, Nick ; Hermelin, Danny ; Shabtay, Dvir ; Wellnitz, Philip

Faster Minimization of Tardy Processing Time on a Single Machine

pdf-format:
LIPIcs-ICALP-2020-19.pdf (0.5 MB)


Abstract

This paper is concerned with the 1||∑ p_jU_j problem, the problem of minimizing the total processing time of tardy jobs on a single machine. This is not only a fundamental scheduling problem, but also an important problem from a theoretical point of view, as it generalizes the Subset Sum problem and is closely related to the 0/1-Knapsack problem. The problem is well known to be NP-hard, but only in the weak sense, meaning it admits pseudo-polynomial time algorithms. The fastest known pseudo-polynomial time algorithm for the problem is the famous algorithm of Lawler and Moore, which runs in O(P ⋅ n) time, where P is the total processing time of all n jobs in the input. This algorithm was developed in the late 1960s and has not been improved since.
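The O(P ⋅ n) dynamic program of Lawler and Moore can be sketched as follows (a schematic reconstruction for illustration, not code from the paper): after sorting the jobs in earliest-due-date order, it tracks which total processing times of on-time jobs are achievable, in the style of a 0/1-knapsack DP over the range [0, P].

```python
def min_tardy_processing_time(jobs):
    """Sketch of a pseudo-polynomial DP in the spirit of Lawler and
    Moore's O(P * n) algorithm for 1||sum p_j U_j.
    jobs: list of (p_j, d_j) pairs (processing time, due date)."""
    jobs = sorted(jobs, key=lambda job: job[1])  # earliest-due-date order
    P = sum(p for p, _ in jobs)
    # reachable[t] is True iff some subset of the jobs seen so far can be
    # scheduled on time with total processing time exactly t
    reachable = [False] * (P + 1)
    reachable[0] = True
    for p, d in jobs:
        # job can be added to the on-time set only if it still meets its
        # due date: t + p <= d; iterate downward so each job is used once
        for t in range(min(d, P) - p, -1, -1):
            if reachable[t]:
                reachable[t + p] = True
        # jobs not added are tardy; they need no state update
    best_early = max(t for t in range(P + 1) if reachable[t])
    return P - best_early  # total processing time of tardy jobs
```

Each of the n jobs triggers one pass over a table of size at most P + 1, giving the O(P ⋅ n) running time.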
In this paper we develop two new algorithms for 1||∑ p_jU_j, each improving on Lawler and Moore’s algorithm in a different scenario:
- Our first algorithm runs in Õ(P^{7/4}) time, and outperforms Lawler and Moore’s algorithm in instances where n = ω̃(P^{3/4}).
- Our second algorithm runs in Õ(min{P ⋅ D_#, P + D}) time, where D_# is the number of different due dates in the instance, and D is the sum of all different due dates. This algorithm improves on Lawler and Moore’s algorithm when n = ω̃(D_#) or n = ω̃(D/P). Further, it extends the known Õ(P) algorithm for the single-due-date special case of 1||∑ p_jU_j in a natural way.
Both algorithms obtain their speedups from basic primitive operations on sets of integers and vectors of integers. The second algorithm relies on fast polynomial multiplication as its main engine, while for the first algorithm we define a new "skewed" version of (max,min)-convolution which is interesting in its own right.
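The connection between sumsets and polynomial multiplication, which underlies the second algorithm, can be illustrated as follows (a sketch: the actual speedup comes from FFT-based multiplication in near-linear time, while this toy version multiplies the characteristic polynomials naively to make the correspondence explicit):

```python
def sumset(A, B, limit):
    """Sumset A + B = {a + b : a in A, b in B}, restricted to [0, limit].
    Encoding a set S as the polynomial f_S(x) = sum_{s in S} x^s, the
    product f_A * f_B has a nonzero coefficient at x^s iff s in A + B,
    so a fast multiplication algorithm yields a fast sumset algorithm."""
    fa = [0] * (limit + 1)  # characteristic polynomial of A
    for a in A:
        if 0 <= a <= limit:
            fa[a] = 1
    fb = [0] * (limit + 1)  # characteristic polynomial of B
    for b in B:
        if 0 <= b <= limit:
            fb[b] = 1
    # naive polynomial multiplication, truncated at degree `limit`
    prod = [0] * (limit + 1)
    for i, ca in enumerate(fa):
        if ca:
            for j in range(limit - i + 1):
                if fb[j]:
                    prod[i + j] = 1
    return {s for s, c in enumerate(prod) if c}
```

Replacing the two nested loops with an FFT-based product brings the cost down from quadratic to Õ(limit), which is the primitive exploited in the Õ(min{P ⋅ D_#, P + D}) algorithm.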

BibTeX - Entry

@InProceedings{bringmann_et_al:LIPIcs:2020:12426,
  author =	{Karl Bringmann and Nick Fischer and Danny Hermelin and Dvir Shabtay and Philip Wellnitz},
  title =	{{Faster Minimization of Tardy Processing Time on a Single Machine}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Artur Czumaj and Anuj Dawar and Emanuela Merelli},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2020/12426},
  URN =		{urn:nbn:de:0030-drops-124269},
  doi =		{10.4230/LIPIcs.ICALP.2020.19},
  annote =	{Keywords: Weighted number of tardy jobs, sumsets, convolutions}
}

Keywords: Weighted number of tardy jobs, sumsets, convolutions
Collection: 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)
Issue Date: 2020
Date of publication: 29.06.2020

