Add docs with definitions of Benchmark, Execution, Experiments, etc.
This documentation should go into "Basic Idea behind Theodolite", #141.
To get started:
- A Benchmark is a standard tool for the competitive evaluation and comparison of competing systems or components according to specific characteristics, such as performance, dependability, or security. In Theodolite, we have specification-based benchmarks, or at least something very close to that. That is, our benchmarks are architectural descriptions of typical use cases of stream processing in microservices (e.g., our UC1). Hence, a benchmark is not itself a piece of software; we only have implementations of benchmarks, e.g., an implementation of UC1 with Kafka Streams. For simplicity, we call these benchmark implementations simply benchmarks. This is debatable, but from my point of view this simplification is okay.
- A benchmark can be executed for different SUTs, by different users, and multiple times. We call such an execution of a benchmark simply an Execution. (Benchmark execution would be more precise, but more verbose.)
- According to our benchmarking method, the execution of a benchmark requires performing multiple Experiments. I think what is actually done within/during an experiment is another level of detail. (But just for the sake of completeness: in an experiment, the benchmark implementation is deployed, load is generated according to the benchmark specification, some SLOs are monitored continuously, etc.) I'm not entirely happy with the term execution yet, but don't have a better idea. We sometimes referred to it as a lag experiment, but this is too specific, as we also would like to support other SLOs. (A small sketch of how these terms relate to each other follows below.)
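
Maybe it helps to make the relations between these terms explicit. The following is a purely hypothetical Kotlin sketch (none of these classes exist in Theodolite, and the names and fields are made up for illustration); it only shows how Benchmark, benchmark implementation, Execution, and Experiment relate to each other as defined above.

```kotlin
// Hypothetical sketch only: these classes are NOT part of Theodolite's code base.
// They merely illustrate the relations between the terms defined above.

// A Benchmark is a specification (an architectural description of a use case),
// not a runnable piece of software.
data class Benchmark(
    val name: String,        // e.g. "UC1"
    val description: String
)

// A benchmark implementation realizes a Benchmark with a concrete technology,
// e.g. UC1 implemented with Kafka Streams. Colloquially, we also call this a "benchmark".
data class BenchmarkImplementation(
    val benchmark: Benchmark,
    val technology: String   // e.g. "Kafka Streams"
)

// An Execution is one execution of a benchmark (implementation) for a particular SUT.
// The same benchmark can be executed for different SUTs, by different users, multiple times.
data class Execution(
    val implementation: BenchmarkImplementation,
    val sut: String,                 // the system under test
    val experiments: List<Experiment>
)

// Within an Execution, multiple Experiments are performed: the benchmark implementation
// is deployed, load is generated, and SLOs are monitored continuously.
data class Experiment(
    val load: Int,                    // generated load intensity
    val resources: Int,               // provisioned resource amount
    val monitoredSlos: List<String>   // e.g. "lag", but other SLOs should be possible too
)

fun main() {
    val uc1 = Benchmark("UC1", "A typical use case of stream processing in microservices")
    val uc1KafkaStreams = BenchmarkImplementation(uc1, "Kafka Streams")
    val execution = Execution(
        implementation = uc1KafkaStreams,
        sut = "Kafka Streams deployment on Kubernetes",
        experiments = listOf(
            Experiment(load = 100_000, resources = 3, monitoredSlos = listOf("lag")),
            Experiment(load = 200_000, resources = 3, monitoredSlos = listOf("lag"))
        )
    )
    println("Execution of ${execution.implementation.benchmark.name} comprises ${execution.experiments.size} experiments")
}
```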
I think Resources should be discussed along with Load and SLOs. I will add these definitions soon.
Content to be considered: