License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/OASIcs.PARMA-DITAM.2021.4
URN: urn:nbn:de:0030-drops-136403
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2021/13640/


Ferikoglou, Aggelos ; Masouros, Dimosthenis ; Tzenetopoulos, Achilleas ; Xydis, Sotirios ; Soudris, Dimitrios

Resource Aware GPU Scheduling in Kubernetes Infrastructure

pdf-format:
OASIcs-PARMA-DITAM-2021-4.pdf (3 MB)


Abstract

Nowadays, an ever-increasing number of artificial intelligence inference workloads are pushed to and executed on the cloud. To serve and manage these computational demands effectively, data center operators have provisioned their infrastructures with accelerators. For GPUs in particular, efficient management support is lacking, as state-of-the-art schedulers and orchestrators treat GPUs as typical compute resources, ignoring their unique characteristics and application properties. This phenomenon, combined with the GPU over-provisioning problem, leads to severe resource under-utilization. Although prior work has addressed this problem by colocating applications on a single accelerator device, its resource-agnostic nature fails to mitigate resource under-utilization and quality-of-service violations, especially for latency-critical applications.
In this paper, we design a resource-aware GPU scheduling framework able to efficiently colocate applications on the same GPU accelerator card. We integrate our solution with Kubernetes, one of the most widely used cloud orchestration frameworks. We show that, for a variety of representative ML workloads, our scheduler achieves a 58.8% lower 99th percentile of end-to-end job execution time, while delivering 52.5% higher GPU memory usage, 105.9% higher average GPU utilization, and 44.4% lower average energy consumption compared to state-of-the-art schedulers.
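
For illustration only (the full design is in the paper itself), the following minimal Go sketch shows the kind of resource-aware placement decision described above: choosing, among hypothetical GPU cards with known free memory, the tightest-fitting card for a new workload so that colocated jobs pack densely instead of spreading across under-utilized devices. All names and numbers are assumptions, not the authors' implementation.

package main

import (
	"errors"
	"fmt"
)

// gpuState is a hypothetical view of one GPU card's free memory (MiB),
// as a resource-aware scheduler might track it per node.
type gpuState struct {
	Name      string
	FreeMemMB int
}

// pickGPU returns the GPU with the least free memory that still fits the
// requested footprint (best-fit), so colocated workloads pack tightly.
func pickGPU(gpus []gpuState, requestMB int) (string, error) {
	best := -1
	for i, g := range gpus {
		if g.FreeMemMB < requestMB {
			continue // would exceed this card's free memory
		}
		if best == -1 || g.FreeMemMB < gpus[best].FreeMemMB {
			best = i
		}
	}
	if best == -1 {
		return "", errors.New("no GPU can fit the requested memory")
	}
	return gpus[best].Name, nil
}

func main() {
	cluster := []gpuState{
		{Name: "node1/gpu0", FreeMemMB: 9000},
		{Name: "node1/gpu1", FreeMemMB: 3000},
		{Name: "node2/gpu0", FreeMemMB: 6000},
	}
	// A new inference pod with an estimated 2500 MiB GPU memory footprint.
	gpu, err := pickGPU(cluster, 2500)
	if err != nil {
		fmt.Println("unschedulable:", err)
		return
	}
	fmt.Println("colocate pod on", gpu) // -> node1/gpu1 (tightest fit)
}

In a Kubernetes setting, such a decision would typically be surfaced to the default scheduler (for example via a scheduler extender or device-plugin metadata); the snippet above only sketches the placement heuristic itself.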

BibTeX - Entry

@InProceedings{ferikoglou_et_al:OASIcs.PARMA-DITAM.2021.4,
  author =	{Ferikoglou, Aggelos and Masouros, Dimosthenis and Tzenetopoulos, Achilleas and Xydis, Sotirios and Soudris, Dimitrios},
  title =	{{Resource Aware GPU Scheduling in Kubernetes Infrastructure}},
  booktitle =	{12th Workshop on Parallel Programming and Run-Time Management Techniques for Many-core Architectures and 10th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2021)},
  pages =	{4:1--4:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-181-8},
  ISSN =	{2190-6807},
  year =	{2021},
  volume =	{88},
  editor =	{Bispo, Jo\~{a}o and Cherubin, Stefano and Flich, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2021/13640},
  URN =		{urn:nbn:de:0030-drops-136403},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2021.4},
  annote =	{Keywords: cloud computing, GPU scheduling, kubernetes, heterogeneity}
}

Keywords: cloud computing, GPU scheduling, kubernetes, heterogeneity
Collection: 12th Workshop on Parallel Programming and Run-Time Management Techniques for Many-core Architectures and 10th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2021)
Issue Date: 2021
Date of publication: 02.03.2021

