Commit 766f89ab authored by Sören Henning

Merge branch 'main' into feature/374-improve-hazelcastjet-structure

parents cbba407e 8a1011f1
1 merge request: !275 Refactor hazelcast jet benchmarks
Pipeline #10099 passed
Showing 1431 additions and 44 deletions
@@ -8,7 +8,7 @@ authors:
given-names: Wilhelm
orcid: "https://orcid.org/0000-0001-6625-4335"
title: Theodolite
version: "0.8.0"
version: "0.8.2"
repository-code: "https://github.com/cau-se/theodolite"
license: "Apache-2.0"
doi: "10.1016/j.bdr.2021.100209"
......
![Theodolite](docs/assets/logo/theodolite-horizontal-transparent.svg)
# Theodolite
> A theodolite is a precision optical instrument for measuring angles between designated visible points in the horizontal and vertical planes. -- <cite>[Wikipedia](https://en.wikipedia.org/wiki/Theodolite)</cite>
-Theodolite is a framework for benchmarking the horizontal and vertical scalability of stream processing engines. It consists of three modules:
+Theodolite is a framework for benchmarking the horizontal and vertical scalability of cloud-native applications.
-## Theodolite Benchmarking Tool
+## Quickstart
-Theodolite aims to benchmark the scalability of stream processing engines for real use cases. Microservices that apply stream processing techniques are usually deployed in elastic cloud environments. Hence, Theodolite's cloud-native benchmarking framework deploys its components in a cloud environment, orchestrated by Kubernetes. It is recommended to install Theodolite with the package manager Helm. The Theodolite Helm chart, along with instructions on how to install it, can be found in the [`helm`](helm) directory.
+Theodolite runs scalability benchmarks in Kubernetes. Follow our [quickstart guide](https://www.theodolite.rocks/quickstart.html) to get started.
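In essence, getting started amounts to installing the Helm chart. A minimal sketch (the chart repository at theodolite.rocks and default values are assumed; see the quickstart guide for the authoritative steps):

```sh
# Add the Theodolite chart repository and install the chart with default values
helm repo add theodolite https://www.theodolite.rocks
helm repo update
helm install theodolite theodolite/theodolite
```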
-## Theodolite Analysis Tools
+## Documentation
-Theodolite's benchmarking method maps load intensities to the resource amounts that are required for processing them. A plot showing how resource demand evolves with increasing load allows drawing conclusions about the scalability of a stream processing engine or its deployment. Theodolite provides Jupyter notebooks for creating such plots based on benchmarking results from the execution framework. More information can be found in the [Theodolite analysis tool](analysis).
+Documentation on Theodolite itself as well as on its benchmarking method can be found on the [Theodolite website](https://www.theodolite.rocks).
-## Theodolite Benchmarks
+## Project Structure
-Theodolite comes with 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding load generator is provided. Currently, this repository provides benchmark implementations for Apache Kafka Streams and Apache Flink. The benchmark sources can be found in [Theodolite benchmarks](theodolite-benchmarks).
+* The core of Theodolite is its Kubernetes Operator, implemented in Kotlin. The source code can be found in [`theodolite`](theodolite).
+* Theodolite's Helm chart and templates are maintained in [`helm`](helm).
+* We provide Jupyter notebooks for analyzing and visualizing the results of benchmark executions in [`analysis`](analysis).
+* Theodolite comes with 4 application benchmarks, which are based on typical use cases for stream processing within microservices. Implementations of these benchmarks with several state-of-the-art stream processing frameworks, as well as corresponding load generators, can be found in [`theodolite-benchmarks`](theodolite-benchmarks). This includes both the source code of the implementations and benchmark definitions for Theodolite in [`theodolite-benchmarks/definitions`](theodolite-benchmarks/definitions).
+* The source code of Theodolite's SLO checkers is located in [`slo-checker`](slo-checker).
+* The documentation, which is hosted on [theodolite.rocks](https://www.theodolite.rocks), is located in [`docs`](docs).
## How to Cite
......
@@ -5,10 +5,10 @@
"codeRepository": "https://github.com/cau-se/theodolite",
"dateCreated": "2020-03-13",
"datePublished": "2020-07-27",
"dateModified": "2022-07-18",
"dateModified": "2022-11-20",
"downloadUrl": "https://github.com/cau-se/theodolite/releases",
"name": "Theodolite",
"version": "0.8.0",
"version": "0.8.2",
"description": "Theodolite is a framework for benchmarking the horizontal and vertical scalability of cloud-native applications.",
"developmentStatus": "active",
"relatedLink": [
......
@@ -14,7 +14,7 @@ GEM
execjs
coffee-script-source (1.11.1)
colorator (1.1.0)
-commonmarker (0.23.4)
+commonmarker (0.23.6)
concurrent-ruby (1.1.10)
dnsruby (1.61.9)
simpleidn (~> 0.1)
@@ -239,7 +239,7 @@ GEM
jekyll-seo-tag (~> 2.1)
minitest (5.15.0)
multipart-post (2.1.1)
-nokogiri (1.13.6-x86_64-linux)
+nokogiri (1.13.9-x86_64-linux)
racc (~> 1.4)
octokit (4.22.0)
faraday (>= 0.9)
......
apiVersion: v1
entries:
  theodolite:
  - apiVersion: v2
    appVersion: 0.8.2
    created: "2022-11-20T11:37:04.711009053+01:00"
    dependencies:
    - condition: grafana.enabled
      name: grafana
      repository: https://grafana.github.io/helm-charts
      version: 6.17.*
    - condition: kube-prometheus-stack.enabled
      name: kube-prometheus-stack
      repository: https://prometheus-community.github.io/helm-charts
      version: 41.7.*
    - condition: cp-helm-charts.enabled
      name: cp-helm-charts
      repository: https://soerenhenning.github.io/cp-helm-charts
      version: 0.6.0
    - condition: strimzi.enabled
      name: strimzi-kafka-operator
      repository: https://strimzi.io/charts/
      version: 0.29.*
    description: Theodolite is a framework for benchmarking the horizontal and vertical
      scalability of cloud-native applications.
    digest: b6fc354d08b661dd75beb4e54efd0bb65b488247dcb528fd0c5e365f8f011808
    home: https://www.theodolite.rocks
    icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
    maintainers:
    - email: soeren.henning@email.uni-kiel.de
      name: Sören Henning
      url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
    name: theodolite
    sources:
    - https://github.com/cau-se/theodolite
    type: application
    urls:
    - https://github.com/cau-se/theodolite/releases/download/v0.8.2/theodolite-0.8.2.tgz
    version: 0.8.2
  - apiVersion: v2
    appVersion: 0.8.1
    created: "2022-11-16T09:45:09.130711943+01:00"
    dependencies:
    - condition: grafana.enabled
      name: grafana
      repository: https://grafana.github.io/helm-charts
      version: 6.17.5
    - condition: kube-prometheus-stack.enabled
      name: kube-prometheus-stack
      repository: https://prometheus-community.github.io/helm-charts
      version: 20.0.1
    - condition: cp-helm-charts.enabled
      name: cp-helm-charts
      repository: https://soerenhenning.github.io/cp-helm-charts
      version: 0.6.0
    - condition: strimzi.enabled
      name: strimzi-kafka-operator
      repository: https://strimzi.io/charts/
      version: 0.29.0
    description: Theodolite is a framework for benchmarking the horizontal and vertical
      scalability of cloud-native applications.
    digest: 02a1c6a5a8d0295fb9bf2d704cb04e0a17624b83b2a03cd59c1d61b74d8fe4ab
    home: https://www.theodolite.rocks
    icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
    maintainers:
    - email: soeren.henning@email.uni-kiel.de
      name: Sören Henning
      url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
    name: theodolite
    sources:
    - https://github.com/cau-se/theodolite
    type: application
    urls:
    - https://github.com/cau-se/theodolite/releases/download/v0.8.1/theodolite-0.8.1.tgz
    version: 0.8.1
  - apiVersion: v2
    appVersion: 0.8.0
    created: "2022-07-18T17:48:21.205921939+02:00"
@@ -387,4 +459,4 @@ entries:
    urls:
    - https://github.com/cau-se/theodolite/releases/download/v0.4.0/theodolite-0.4.0.tgz
    version: 0.4.0
-generated: "2022-07-18T17:48:21.163757427+02:00"
+generated: "2022-11-20T11:37:04.66991317+01:00"
@@ -38,6 +38,16 @@ To store the results of benchmark executions in a [PersistentVolume](https://kub
You can also use an existing PersistentVolumeClaim by setting `operator.resultsVolume.persistent.existingClaim`.
If persistence is not enabled, all results will be gone upon pod termination.
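For example, persistence could be enabled via Helm values along the following lines (a sketch; the `enabled` flag and the claim name are assumptions derived from the option names above):

```yaml
operator:
  resultsVolume:
    persistent:
      enabled: true                  # assumed flag for requesting a PersistentVolumeClaim
      existingClaim: my-results-pvc  # optional: reuse an existing claim (name is illustrative)
```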
### Exposing Grafana
By default, Theodolite exposes a Grafana instance as a NodePort service at port `31199`. This can be configured by setting `grafana.service.nodePort`.
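For instance, to pick a different port (a sketch; the port value is illustrative):

```sh
# Change the NodePort Grafana is exposed on, keeping all other values
helm upgrade theodolite theodolite/theodolite --reuse-values --set grafana.service.nodePort=31999
```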
### Additional Kubernetes cluster metrics
As long as you have sufficient permissions on your cluster, you can integrate additional Kubernetes metrics into Prometheus. This involves enabling some exporters, adding Grafana dashboards, and granting additional permissions. We provide a [values file for enabling extended metrics](https://github.com/cau-se/theodolite/blob/main/helm/preconfigs/extended-metrics.yaml).
See the [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) documentation for more details on configuring the individual exporters.
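Applied at install time, this could look as follows (a sketch; assumes the command is run from the chart's `helm/` directory of a repository checkout):

```sh
# Install Theodolite with the provided extended-metrics preconfiguration
helm install theodolite . -f preconfigs/extended-metrics.yaml
```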
### Random scheduler
Installation of the random scheduler can be enabled via `randomScheduler.enabled`. Please note that the random scheduler is required neither in operator mode nor in standalone mode. However, it has to be installed if benchmark executions are to use random scheduling.
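A corresponding install command might look like this (a sketch using the flag named above):

```sh
# Install Theodolite with the random scheduler enabled
helm install theodolite theodolite/theodolite --set randomScheduler.enabled=true
```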
......
@@ -9,8 +9,9 @@ nav_order: 9
Below you can find a list of publications that are directly related to Theodolite:
* S. Henning and W. Hasselbring. “[A Configurable Method for Benchmarking Scalability of Cloud-Native Applications](https://doi.org/10.1007/s10664-022-10162-1)”. In: *Empirical Software Engineering* 27. 2022. DOI: [10.1007/s10664-022-10162-1](https://doi.org/10.1007/s10664-022-10162-1).
-* T. Pfandzelter, S. Henning, T. Schirmer, W. Hasselbring, and D. Bermbach. “[Streaming vs. Functions: A Cost Perspective on Cloud Event Processing](https://arxiv.org/pdf/2204.11509.pdf)”. In: *IEEE International Conference on Cloud Engineering*. 2022. In press.
-* S. Henning and W. Hasselbring. “Demo Paper: Benchmarking Scalability of Cloud-Native Applications with Theodolite”. In: *IEEE International Conference on Cloud Engineering*. 2022. In press.
+* T. Pfandzelter, S. Henning, T. Schirmer, W. Hasselbring, and D. Bermbach. “[Streaming vs. Functions: A Cost Perspective on Cloud Event Processing](https://arxiv.org/pdf/2204.11509.pdf)”. In: *IEEE International Conference on Cloud Engineering*. 2022. DOI: [10.1109/IC2E55432.2022.00015](https://doi.org/10.1109/IC2E55432.2022.00015).
+* S. Henning and W. Hasselbring. “[Demo Paper: Benchmarking Scalability of Cloud-Native Applications with Theodolite](https://oceanrep.geomar.de/id/eprint/57336/)”. In: *IEEE International Conference on Cloud Engineering*. 2022. DOI: [10.1109/IC2E55432.2022.00037](https://doi.org/10.1109/IC2E55432.2022.00037).
* S. Henning, B. Wetzel, and W. Hasselbring. “[Cloud-Native Scalability Benchmarking with Theodolite Applied to the TeaStore Benchmark](https://oceanrep.geomar.de/id/eprint/57338/)”. In: *Symposium on Software Performance*. 2022.
* S. Henning, B. Wetzel, and W. Hasselbring. “[Reproducible Benchmarking of Cloud-Native Applications With the Kubernetes Operator Pattern](http://ceur-ws.org/Vol-3043/short5.pdf)”. In: *Symposium on Software Performance*. 2021.
* S. Henning and W. Hasselbring. “[How to Measure Scalability of Distributed Stream Processing Engines?](https://research.spec.org/icpe_proceedings/2021/companion/p85.pdf)” In: *Companion of the ACM/SPEC International Conference on Performance Engineering*. 2021. DOI: [10.1145/3447545.3451190](https://doi.org/10.1145/3447545.3451190).
* S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
......
---
title: Benchmark UC1
parent: Streaming Benchmarks
has_children: false
nav_order: 1
---
# Benchmark UC1: Database Storage
A simple but common use case in event-driven architectures is that events or messages should be stored permanently, for example, in a NoSQL database.
## Dataflow Architecture
![Theodolite Benchmark UC1: Database Storage](../../assets/images/arch-uc1.svg){: .d-block .mx-auto }
The first step is to read data records from a messaging system. Then, these records are converted into another data format in order to match the format required by the database, which often differs from the input format. Finally, the converted records are written to an external database.
By default, this benchmark does not use a real database, but instead writes all incoming records to system out. Otherwise, due to the simple, stateless stream processing topology, the benchmark would primarily test the database’s write capabilities. However, for some stream processing engines, the implementation can also be configured to use a real database.
## Further Reading
S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
---
title: Benchmark UC2
parent: Streaming Benchmarks
has_children: false
nav_order: 2
---
# Benchmark UC2: Downsampling
Another common use case for stream processing architectures is reducing the number of events, messages, or measurements by aggregating multiple records within consecutive, non-overlapping time windows. Typical aggregations compute the average, minimum, or maximum of measurements within a time window or count the occurrences of the same event. Such reduced amounts of data are required, for example, to save computing resources or to provide a better user experience (e.g., for data visualizations).
When using aggregation windows of fixed size that succeed each other without gaps (called [tumbling windows](https://kafka.apache.org/30/documentation/streams/developer-guide/dsl-api.html#tumbling-time-windows) in many stream processing engines), the (potentially varying) message frequency is reduced to a constant value.
This is also referred to as downsampling. Downsampling allows for applying many machine learning methods that require data of a fixed frequency.
## Dataflow Architecture
![Theodolite Benchmark UC2: Downsampling](../../assets/images/arch-uc2.svg){: .d-block .mx-auto }
The dataflow architecture first reads measurement data from an input stream and then assigns each measurement to a time window of fixed, statically configurable size. Afterwards, an aggregation operator computes the summary statistics sum, count, minimum, maximum, average, and population variance for each time window. Finally, the aggregation result containing all summary statistics is written to an output stream.
## Further Reading
S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
---
title: Benchmark UC3
parent: Streaming Benchmarks
has_children: false
nav_order: 3
---
# Benchmark UC3: Time Attribute-Based Aggregation
A second type of temporal aggregation is aggregating messages that have the same time attribute. Such a time attribute is, for example, the hour of day, day of week, or day of the year. This type of aggregation can be used to compute, for example, an average course over the day, the week, or the year. It allows one to demonstrate or discover seasonal patterns in the data.
## Dataflow Architecture
![Theodolite Benchmark UC3: Time Attribute-Based Aggregation](../../assets/images/arch-uc3.svg){: .d-block .mx-auto }
The first step is to read measurement data from the input stream. Then, a new key is set for each message, which consists of the original key (i.e., the identifier of a sensor) and the selected time attribute (e.g., day of week) extracted from the record’s timestamp. In the next step, the message is duplicated for each sliding window it is contained in. Then, all measurements of the same sensor and the same time attribute are aggregated for each sliding time window by computing the summary statistics sum, count, minimum, maximum, average, and population variance. The aggregation results per identifier, time attribute, and window are written to an output stream.
## Further Reading
S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
---
title: Benchmark UC4
parent: Streaming Benchmarks
has_children: false
nav_order: 4
---
# Benchmark UC4: Hierarchical Aggregation
For analyzing sensor data, often not only the individual measurements of sensors are of interest, but also aggregated data for groups of sensors. When monitoring energy consumption in industrial facilities, for example, comparing the total consumption of machine types often provides better insights than comparing the consumption of all individual machines. Additionally, it may be necessary to combine groups further into larger groups and adjust these group hierarchies at runtime.
## Dataflow Architecture
![Theodolite Benchmark UC4: Hierarchical Aggregation](../../assets/images/arch-uc4.svg){: .d-block .mx-auto }
The dataflow architecture requires two input data streams: a stream of sensor measurements and a stream tracking changes to the hierarchies of sensor groups. In consecutive steps, both streams are joined, measurements are duplicated for each relevant group, assigned to time windows, and the measurements of all sensors in a group are aggregated per window. Finally, the aggregation results are exposed via a new data stream. Additionally, the output stream is fed back as an input stream in order to compute aggregations for groups containing subgroups. To also support unknown record frequencies, this dataflow architecture can be configured to use sliding windows instead of tumbling windows (see [further reading](#further-reading)).
## Further Reading
* S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
* S. Henning and W. Hasselbring. “[Scalable and reliable multi-dimensional sensor data aggregation in data-streaming architectures](https://doi.org/10.1007/s41688-020-00041-3)”. In: *Data-Enabled Discovery and Applications* 4.1. 2020. DOI: [10.1007/s41688-020-00041-3](https://doi.org/10.1007/s41688-020-00041-3).
\ No newline at end of file
---
-title: Available Benchmarks
+title: Streaming Benchmarks
has_children: true
nav_order: 7
---
-# Theodolite Benchmarks
+# Theodolite's Stream Processing Benchmarks
-Theodolite comes with 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding [load generator](load-generator) is provided. Currently, Theodolite provides benchmark implementations for Apache Kafka Streams and Apache Flink.
+Theodolite comes with 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding [load generator](load-generator) is provided. Currently, Theodolite provides benchmark implementations for Apache Kafka Streams, Apache Flink, Hazelcast Jet and Apache Beam (with Samza and Flink).
-Theodolite's benchmarks (labeled UC1--UC4) represent some sort of event-driven microservice performing Industrial Internet of Things data analytics. Specifically, they are derived from a microservice-based research software for analyzing industrial power consumption data streams (the [Titan Control Center](https://github.com/cau-se/titan-ccp)).
+Theodolite's benchmarks are based on typical use cases for stream processing within microservices. Specifically, all benchmarks represent some sort of microservice doing Industrial Internet of Things data analytics.
| Stream processing engine | [UC1](benchmark-uc1) | [UC2](benchmark-uc2) | [UC3](benchmark-uc3) | [UC4](benchmark-uc4) |
|:--------------------------|:---:|:---:|:---:|:---:|
| Apache Kafka Streams | ✓ | ✓ | ✓ | ✓ |
| Apache Flink | ✓ | ✓ | ✓ | ✓ |
| Hazelcast Jet | ✓ | ✓ | ✓ | ✓ |
| Apache Beam (Samza/Flink) | ✓ | ✓ | ✓ | ✓ |
-## UC1: Database Storage
+## Installation
-A simple but common use case in event-driven architectures is that events or messages should be stored permanently, for example, in a NoSQL database.
+When [installing Theodolite](../installation) with Helm and the default configuration, our stream processing benchmarks are automatically installed as well.
+This can be verified by running `kubectl get benchmarks`, which should yield something like:
+```
+NAME               AGE     STATUS
+uc1-beam-flink     2d20h   Ready
+uc1-beam-samza     2d20h   Ready
+uc1-flink          2d20h   Ready
+uc1-hazelcastjet   2d16h   Ready
+uc1-kstreams       2d20h   Ready
+uc2-beam-flink     2d20h   Ready
+uc2-beam-samza     2d20h   Ready
+uc2-flink          2d20h   Ready
+uc2-hazelcastjet   2d16h   Ready
+uc2-kstreams       2d20h   Ready
+uc3-beam-flink     2d20h   Ready
+uc3-beam-samza     2d20h   Ready
+uc3-flink          2d20h   Ready
+uc3-hazelcastjet   2d16h   Ready
+uc3-kstreams       2d20h   Ready
+uc4-beam-flink     2d20h   Ready
+uc4-beam-samza     2d20h   Ready
+uc4-flink          2d20h   Ready
+uc4-hazelcastjet   2d16h   Ready
+uc4-kstreams       2d20h   Ready
+```
-## UC2: Downsampling
+Alternatively, all benchmarks can also be found on [GitHub](https://github.com/cau-se/theodolite/tree/main/theodolite-benchmarks/definitions) and installed manually with `kubectl apply -f <benchmark-yaml-file>`. Additionally, you need to package the benchmarks' Kubernetes resources into a ConfigMap by running:
-Another common use case for stream processing architectures is reducing the number of events, messages, or measurements by aggregating multiple records within consecutive, non-overlapping time windows. Typical aggregations compute the average, minimum, or maximum of measurements within a time window or count the occurrences of the same event. Such reduced amounts of data are required, for example, to save computing resources or to provide a better user experience (e.g., for data visualizations).
-When using aggregation windows of fixed size that succeed each other without gaps (called [tumbling windows](https://kafka.apache.org/30/documentation/streams/developer-guide/dsl-api.html#tumbling-time-windows) in many stream processing engines), the (potentially varying) message frequency is reduced to a constant value. This is also referred to as downsampling. Downsampling allows for applying many machine learning methods that require data of a fixed frequency.
+```sh
+kubectl create configmap <configmap-name-required-by-benchmark> --from-file <directory-with-benchmark-resources>
+```
+See the [install-configmaps.sh](https://github.com/cau-se/theodolite/blob/main/theodolite-benchmarks/definitions/install-configmaps.sh) script for examples.
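For instance, installing the UC1 Kafka Streams benchmark manually could look roughly like this (file, directory, and ConfigMap names are illustrative; check the benchmark definition for the ConfigMap name it actually expects):

```sh
# Apply a benchmark definition and package its Kubernetes resources into a ConfigMap
kubectl apply -f uc1-kstreams-benchmark.yaml
kubectl create configmap benchmark-resources-uc1-kstreams --from-file uc1-kstreams/resources
```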
-## UC3: Time Attribute-Based Aggregation
+## Running Benchmarks
-A second type of temporal aggregation is aggregating messages that have the same time attribute. Such a time attribute is, for example, the hour of day, day of week, or day of the year. This type of aggregation can be used to compute, for example, an average course over the day, the week, or the year. It allows one to demonstrate or discover seasonal patterns in the data.
+To run a benchmark, you need to create and apply an `Execution` YAML file as described in the [running benchmarks documentation](../running-benchmarks).
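For orientation, a minimal `Execution` might look roughly like the following sketch. This is not a definitive definition: field names follow Theodolite's `Execution` CRD as documented on the website, while the benchmark name, load and resource values, and strategy configuration are illustrative assumptions.

```yaml
apiVersion: theodolite.rocks/v1beta1
kind: execution
metadata:
  name: theodolite-example-execution    # illustrative name
spec:
  benchmark: "uc1-kstreams"             # one of the installed benchmarks
  load:
    loadType: "NumSensors"              # load type offered by the benchmark
    loadValues: [25000, 50000, 75000]   # illustrative load intensities
  resources:
    resourceType: "Instances"           # resource type offered by the benchmark
    resourceValues: [1, 2, 4]           # illustrative resource amounts
  slos: []                              # keep the benchmark's default SLOs (sketch)
  execution:
    strategy:
      name: "RestrictionSearch"         # assumed strategy configuration
      restrictions: ["LowerBound"]
      searchStrategy: "LinearSearch"
    duration: 300                       # seconds per experiment
    repetitions: 1
  configOverrides: []
```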
-## UC4: Hierarchical Aggregation
+## Control the Number of Load Generator Instances
-For analyzing sensor data, often not only the individual measurements of sensors are of interest, but also aggregated data for groups of sensors. When monitoring energy consumption in industrial facilities, for example, comparing the total consumption of machine types often provides better insights than comparing the consumption of all individual machines. Additionally, it may be necessary to combine groups further into larger groups and adjust these group hierarchies at runtime.
+Depending on the load to be generated, the Theodolite benchmarks create multiple load generator instances.
+By default, a single instance generates up to 150&thinsp;000 messages per second.
+If higher loads are to be generated, more instances are deployed accordingly. For example, a load of 400&thinsp;000 messages per second results in three load generator instances.
+However, the actual load that can be generated by a single load generator instance depends on the cluster configuration and might be lower.
+To change the maximum number of messages per instance, run the following commands.
+Set the `MAX_RECORDS_PER_INSTANCE` variable to the number of messages a single instance can generate in your cluster (use our Grafana dashboard to figure out that value).
+```sh
+export MAX_RECORDS_PER_INSTANCE=150000 # Change to your desired value
+kubectl patch benchmarks uc1-beam-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc1-beam-samza --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc1-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc1-hazelcastjet --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc1-kstreams --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc2-beam-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc2-beam-samza --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc2-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc2-hazelcastjet --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc2-kstreams --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc3-beam-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc3-beam-samza --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc3-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc3-hazelcastjet --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc3-kstreams --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc4-beam-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc4-beam-samza --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc4-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc4-hazelcastjet --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+kubectl patch benchmarks uc4-kstreams --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
+```
\ No newline at end of file
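The 20 commands above differ only in the benchmark name, so they can equally be expressed as a loop (a sketch equivalent to the commands above):

```sh
# Patch all 20 installed benchmarks in one go
for uc in uc1 uc2 uc3 uc4; do
  for impl in beam-flink beam-samza flink hazelcastjet kstreams; do
    kubectl patch benchmarks "$uc-$impl" --type json \
      --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
  done
done
```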
---
title: Load Generators
-parent: Available Benchmarks
+parent: Streaming Benchmarks
has_children: false
-nav_order: 1
+nav_order: 5
---
# Load Generator Framework
......
@@ -31,7 +31,7 @@ Theodolite produces Kubernetes events, which you can view by running:
kubectl describe execution <your-execution-name>
```
-## Looking the Operator Logs
+## Looking at the Operator Logs
If you cannot figure out why your benchmark execution fails, you might want to have a look at the operator logs:
......
@@ -14,19 +14,15 @@ type: application
 dependencies:
 - name: grafana
-  version: 6.17.5
+  version: 6.17.*
   repository: https://grafana.github.io/helm-charts
   condition: grafana.enabled
 - name: kube-prometheus-stack
-  version: 20.0.1
+  version: 41.7.*
   repository: https://prometheus-community.github.io/helm-charts
   condition: kube-prometheus-stack.enabled
-- name: cp-helm-charts
-  version: 0.6.0
-  repository: https://soerenhenning.github.io/cp-helm-charts
-  condition: cp-helm-charts.enabled
 - name: strimzi-kafka-operator
-  version: 0.28.0
+  version: 0.29.*
   repository: https://strimzi.io/charts/
   condition: strimzi.enabled
......
@@ -9,7 +9,7 @@ helm dependencies update .
helm install theodolite .
```
-**Hint for Windows users:** The Theodolite Helm chart makes use of some symbolic links. These are not properly created when the repository is checked out on Windows. Several solutions are presented in this [Stack Overflow post](https://stackoverflow.com/q/5917249/4121056). A simpler workaround is to manually delete the symbolic links and replace them with the files and folders they point to. The relevant symbolic links are `benchmark-definitions` and the files inside `crd`.
+**Hint for Windows users:** The Theodolite Helm chart makes use of some symbolic links. These are not properly created when the repository is checked out on Windows. Several solutions are presented in this [Stack Overflow post](https://stackoverflow.com/q/5917249/4121056). A simpler workaround is to manually delete the symbolic links and replace them with the files and folders they point to. The relevant symbolic links are `benchmark-definitions/examples`, `benchmark-definitions/theodolite-benchmarks`, and the files inside `crd`.
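Another option, an assumption based on general Git behavior on Windows rather than the Theodolite docs, is to let Git create real symbolic links, which requires Developer Mode or administrator rights:

```sh
# Clone with symlink support enabled (requires Windows Developer Mode or admin rights)
git clone -c core.symlinks=true https://github.com/cau-se/theodolite.git
```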
## Customize Installation
......