License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.CP.2022.18
URN: urn:nbn:de:0030-drops-166479
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/16647/


Dang, Nguyen ; Akgün, Özgür ; Espasa, Joan ; Miguel, Ian ; Nightingale, Peter

A Framework for Generating Informative Benchmark Instances



Abstract

Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data defining instances that are graded (solvable at a certain difficulty level for a solver) or discriminating (able to distinguish between two solving approaches). In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances, purposely generated for effective and informative benchmarking. We use five problems from the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver across the whole instance space; for example, by finding subsets of instances where a solver's performance varies significantly from its average.
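To make the two instance categories concrete, the sketch below shows one plausible way to label an instance as "graded" or "discriminating" from observed solver runtimes. The time limit, the non-triviality threshold, the slowdown factor, and the function names are illustrative assumptions for this sketch, not the criteria defined by the paper's framework.

```python
# Illustrative sketch only: hypothetical criteria for labelling a benchmark
# instance from solver runtimes. Thresholds and names are assumptions, not
# taken from the AutoIG framework described in the paper.

def is_graded(runtime_s: float, time_limit_s: float = 300.0,
              min_time_s: float = 10.0) -> bool:
    """An instance is treated as 'graded' if the solver finishes within the
    time limit but is not trivially easy (assumed lower bound)."""
    return min_time_s <= runtime_s <= time_limit_s

def is_discriminating(runtime_a_s: float, runtime_b_s: float,
                      factor: float = 10.0,
                      time_limit_s: float = 300.0) -> bool:
    """An instance is treated as 'discriminating' between solvers A and B if
    A solves it within the limit while B is much slower or times out
    (assumed slowdown factor)."""
    a_solves = runtime_a_s <= time_limit_s
    b_struggles = runtime_b_s > min(time_limit_s, factor * runtime_a_s)
    return a_solves and b_struggles

# Example usage with made-up runtimes (seconds):
print(is_graded(42.0))                # True: solved, and not trivially easy
print(is_discriminating(5.0, 290.0))  # True: B is more than 10x slower than A
```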

BibTeX - Entry

@InProceedings{dang_et_al:LIPIcs.CP.2022.18,
  author =	{Dang, Nguyen and Akg\"{u}n, \"{O}zg\"{u}r and Espasa, Joan and Miguel, Ian and Nightingale, Peter},
  title =	{{A Framework for Generating Informative Benchmark Instances}},
  booktitle =	{28th International Conference on Principles and Practice of Constraint Programming (CP 2022)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-240-2},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{235},
  editor =	{Solnon, Christine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2022/16647},
  URN =		{urn:nbn:de:0030-drops-166479},
  doi =		{10.4230/LIPIcs.CP.2022.18},
  annote =	{Keywords: Instance generation, Benchmarking, Constraint Programming}
}

Keywords: Instance generation, Benchmarking, Constraint Programming
Collection: 28th International Conference on Principles and Practice of Constraint Programming (CP 2022)
Issue Date: 2022
Date of publication: 23.07.2022
Supplementary Material: Software (Source Code): https://github.com/stacs-cp/AutoIG

