License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/OASIcs.WCET.2023.8
URN: urn:nbn:de:0030-drops-184373
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2023/18437/
Kolev, Rumen Rumenov;
Helpa, Christopher
Analyzing the Stability of Relative Performance Differences Between Cloud and Embedded Environments
Abstract
There has been a shift towards the software-defined vehicle in the automotive industry in recent years. To enable the correct behaviour of critical as well as non-critical software functions, such as those found in Autonomous Driving/Driver Assistance subsystems, extensive software testing needs to be performed. Using embedded hardware for these tests is either very expensive or takes a prohibitively long time in relation to the fast development cycles in the industry. To reduce development bottlenecks, test frameworks executed in cloud environments that leverage the scalability of the cloud are an essential part of the development process. However, relying on more performant cloud hardware for the majority of tests means that performance problems only become apparent in later development phases, when the software is deployed to the real target. If the performance relation between executing in the cloud and on the embedded target can be approximated with sufficient precision, the expressiveness of the executed tests can be improved. Moreover, since a fully integrated system consists of a large number of software components that, at any given time, exhibit an unknown mix of best-/average-/worst-case behaviour, it is critical to know whether the performance relation differs depending on the inputs. In this paper, we examine the relative performance differences between a physical ARM-based chipset and a cloud-based ARM virtual machine, using a generic benchmark and two algorithms representative of typical automotive workloads, each modified to generate best-/average-/worst-case behaviour in a reproducible and controlled way. We determine that the performance difference factor is between 1.8 and 3.6 for synthetic benchmarks and around 2.0-2.8 for more representative benchmarks. These results indicate that it may be possible to relate cloud to embedded performance with acceptable precision, especially when workload characterization is taken into account.
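The performance difference factor discussed in the abstract can be illustrated with a minimal sketch (not taken from the paper): given paired execution-time measurements of the same benchmark on the embedded target and in the cloud, the factor is the ratio of embedded to cloud execution time per input class. All numbers below are hypothetical placeholders, not results reported by the authors.

# Minimal sketch: estimating a cloud-to-embedded performance factor
# from paired execution-time measurements. All values are hypothetical.
from statistics import median

# Hypothetical per-benchmark execution times in milliseconds,
# measured under best-/average-/worst-case inputs.
embedded_ms = {"best": 42.0, "average": 61.5, "worst": 98.0}
cloud_ms    = {"best": 18.5, "average": 26.0, "worst": 36.0}

# The factor is embedded time divided by cloud time; if it is stable
# across input classes, cloud measurements can be scaled to approximate
# timing behaviour on the embedded target.
factors = {case: embedded_ms[case] / cloud_ms[case] for case in embedded_ms}

for case, factor in factors.items():
    print(f"{case:>7}: factor {factor:.2f}")

print(f" median: factor {median(factors.values()):.2f}")

Whether such a factor is stable enough across best-/average-/worst-case inputs is exactly the question the paper investigates.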
BibTeX - Entry
@InProceedings{kolev_et_al:OASIcs.WCET.2023.8,
author = {Kolev, Rumen Rumenov and Helpa, Christopher},
title = {{Analyzing the Stability of Relative Performance Differences Between Cloud and Embedded Environments}},
booktitle = {21st International Workshop on Worst-Case Execution Time Analysis (WCET 2023)},
pages = {8:1--8:12},
series = {Open Access Series in Informatics (OASIcs)},
ISBN = {978-3-95977-293-8},
ISSN = {2190-6807},
year = {2023},
volume = {114},
editor = {W\"{a}gemann, Peter},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2023/18437},
URN = {urn:nbn:de:0030-drops-184373},
doi = {10.4230/OASIcs.WCET.2023.8},
annote = {Keywords: Performance Benchmarking, Performance Factor Stability, Software Development, Cloud Computing, WCET}
}
Keywords: Performance Benchmarking, Performance Factor Stability, Software Development, Cloud Computing, WCET
Collection: 21st International Workshop on Worst-Case Execution Time Analysis (WCET 2023)
Issue Date: 2023
Date of publication: 26.07.2023