Add documentation for benchmark and execution
- Lorenz Boguhn authored
A **Benchmark** is a [*standard tool for the competitive evaluation and comparison of competing systems or components according to specific characteristics, such as performance, dependability, or security*](https://doi.org/10.1145/2668930.2688819). In Theodolite, we have [specification-based benchmarks](https://doi.org/10.1145/2668930.2688819), or at least something very close to that. That is, our benchmarks are architectural descriptions, in our case [of typical use cases of stream processing in microservices](https://doi.org/10.1016/j.bdr.2021.100209) (e.g., our UC1). Hence, there is no single piece of software that represents a benchmark; there are only implementations of a benchmark, e.g., an implementation of UC1 with Kafka Streams. For simplicity, we call these *benchmark implementations* simply *benchmarks*.
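To make this a bit more concrete, a minimal sketch of how such a benchmark (implementation) could be declared is shown below. Only `kafkaConfig` appears in the diff context of this merge request; all other field names and values (`appResource`, `loadGenResource`, `resourceTypes`, `loadTypes`, the topic settings) are illustrative assumptions, not taken from the changed documentation:

```yaml
# Hypothetical sketch of a benchmark (implementation) definition.
# Field names other than kafkaConfig are assumptions for illustration.
name: "uc1-kstreams"                 # UC1 implemented with Kafka Streams
appResource:                         # Kubernetes manifests of the system under test
  - "uc1-kstreams-deployment.yaml"
loadGenResource:                     # Kubernetes manifests of the load generator
  - "uc1-load-generator-deployment.yaml"
resourceTypes:                       # dimensions in which resources can be scaled
  - typeName: "Instances"
loadTypes:                           # dimensions in which load can be scaled
  - typeName: "NumSensors"
kafkaConfig:                         # Kafka topics required by this benchmark
  bootstrapServer: "my-cluster-kafka-bootstrap:9092"
  topics:
    - name: "input"
      numPartitions: 40
      replicationFactor: 1
```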
*(Diff hunks not shown. The changes touch the documentation sections on `kafkaConfig`, the property definitions, `configurationOverrides`, and the property meanings.)*
According to [our benchmarking method](https://doi.org/10.1016/j.bdr.2021.100209), executing a benchmark requires performing multiple **Experiments**. I think what is actually done within an experiment is another level of detail. (But just for the sake of completeness: in an experiment, the benchmark implementation is deployed, load is generated according to the benchmark specification, the configured SLOs are monitored continuously, etc.)
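For illustration, a rough sketch of an execution definition that would trigger such a series of experiments is given below. Apart from `configurationOverrides`, which appears in the diff context of this merge request, all field names and values (`benchmark`, `load`, `resources`, `slos`, `execution`, and their contents) are assumptions, not taken from the changed documentation:

```yaml
# Hypothetical sketch of an execution definition.
# Field names other than configurationOverrides are assumptions for illustration.
name: "uc1-kstreams-example-execution"
benchmark: "uc1-kstreams"            # which benchmark (implementation) to execute
load:
  loadType: "NumSensors"
  loadValues: [25000, 50000, 75000]  # each load/resource pair yields one experiment
resources:
  resourceType: "Instances"
  resourceValues: [1, 2, 3]
slos:                                # SLOs monitored continuously during each experiment
  - sloType: "lag trend"
    threshold: 2000
execution:
  duration: 300                      # seconds per experiment
  repetitions: 1
configurationOverrides: []           # optional patches applied to the deployed resources
```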