---
title: Release Process
has_children: false
parent: Contributing
nav_order: 1
---
......
---
title: "Example: TeaStore"
has_children: false
parent: "Creating Benchmarks"
nav_order: 1
---
# Example: A Benchmark for the TeaStore
The [TeaStore](https://github.com/DescartesResearch/TeaStore) is a microservice reference application.
It resembles a web shop for tea, allowing customers, for example, to browse the shop catalog, receive product recommendations, or place orders.
The TeaStore consists of six microservices and a MariaDB database.
The entire application can easily be deployed on Kubernetes using the provided resource files.
In this example, we will create a Theodolite benchmark for the TeaStore.
We use [Open Service Mesh (OSM)](https://openservicemesh.io/) to inject sidecar proxies into the TeaStore microservices, which allow us to gather latency and other metrics.
## Prerequisites
To get started, you need:
* A running Kubernetes cluster (for testing purposes, you might want to use [Minikube](https://minikube.sigs.k8s.io/), [kind](https://kind.sigs.k8s.io/) or [k3d](https://k3d.io/))
* [Helm installed](https://helm.sh/) on your local machine
## Cluster Preparation
Before running a benchmark, we need to install Theodolite and OSM on our cluster.
### Install Theodolite
In general, Theodolite can be installed using Helm as described in the [installation guide](installation).
However, we need to make sure that no OSM sidecars are injected into the pods of Theodolite, Prometheus, etc.
As we do not use Kafka in this example, we can omit the Strimzi installation.
If no further configuration is required, run the following command to install Theodolite:
```sh
helm install theodolite theodolite/theodolite -f https://raw.githubusercontent.com/cau-se/theodolite/main/helm/preconfigs/osm-ready.yaml -f https://raw.githubusercontent.com/cau-se/theodolite/main/helm/preconfigs/kafka-less.yaml
```
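If the Theodolite Helm repository has not been added to your local Helm installation yet, register it first (repository URL as used throughout the Theodolite documentation):

```shell
# Add the Theodolite Helm repository (skip if already added)
helm repo add theodolite https://www.theodolite.rocks
helm repo update
```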
### Install Open Service Mesh
To install OSM, we use the Helm chart provided by the OSM project. Run the following commands to install and configure OSM:
```sh
export NAMESPACE=default # Kubernetes namespace to be monitored
kubectl create ns osm-system
helm install osm osm --repo https://openservicemesh.github.io/osm --namespace osm-system --version 0.9.2 # A newer version would probably work as well
sleep 60s # Installation may take some time, so we wait a bit
kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"enableEgress":true}}}' --type=merge
kubectl label namespace $NAMESPACE openservicemesh.io/monitored-by=osm --overwrite
kubectl annotate namespace $NAMESPACE openservicemesh.io/metrics=enabled --overwrite
kubectl annotate namespace $NAMESPACE openservicemesh.io/sidecar-injection=enabled --overwrite
```
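Before proceeding, it is worth verifying that OSM came up correctly. A quick check of the controller pods and the labels on the monitored namespace (names as created by the commands above):

```shell
# The OSM controller components should be in state Running
kubectl get pods -n osm-system
# The monitored namespace should carry the OSM label and annotations
kubectl get namespace $NAMESPACE --show-labels
```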
## Create a Benchmark for the TeaStore
According to our Theodolite benchmarking method, we need to define a system under test (SUT) and a load generator for the benchmark.
Quite obviously, the TeaStore enriched by OSM's Envoy sidecar proxies is the SUT.
As load generator, we use JMeter along with the [browse profile](https://github.com/DescartesResearch/TeaStore/blob/master/examples/jmeter/teastore_browse_nogui.jmx) provided in the TeaStore repository.
The desired benchmarking setup is shown in the following diagram:
![TeaStore Benchmark](../assets/images/example-teastore-deployment.svg)
Although the TeaStore comes with Kubernetes resources (Deployments and Services), we need to make some modifications, such as splitting them into one resource per file, adding resource limits, and configuring readiness probes. (Note that these modifications are not specific to Theodolite, but are generally considered good practice.)
We created a fork of the TeaStore repository containing all required modifications. Clone it by running:
```sh
git clone -b add-theodolite-example git@github.com:SoerenHenning/TeaStore.git teastore
```
We now have to create ConfigMaps bundling these resources and a Benchmark resource describing the benchmark.
### Create ConfigMaps containing all components
To create a ConfigMap containing all TeaStore resources, simply run:
```sh
kubectl create configmap teastore-deployment --from-file=teastore/examples/kubernetes/teastore-clusterip-split/
```
Likewise, we have to create a ConfigMap for the JMeter profile and a ConfigMap containing a [JMeter deployment](https://github.com/SoerenHenning/TeaStore/blob/add-theodolite-example/examples/theodolite/jmeter.yaml):
```sh
kubectl create configmap teastore-jmeter-browse --from-file=teastore/examples/jmeter/teastore_browse_nogui.jmx
kubectl create configmap teastore-jmeter-deployment --from-file=teastore/examples/theodolite/jmeter.yaml
```
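A quick check that all three ConfigMaps now exist in the cluster:

```shell
# All three ConfigMaps should be listed without errors
kubectl get configmaps teastore-deployment teastore-jmeter-browse teastore-jmeter-deployment
```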
### Create the Benchmark file
Once all the required resources are bundled in ConfigMaps, we can define a Benchmark resource.
The following resource defines a simple benchmark providing one load type, one resource type and one SLO.
```yaml
apiVersion: theodolite.rocks/v1beta1
kind: benchmark
metadata:
  name: teastore
spec:
  waitForResourcesEnabled: true
  sut:
    resources:
      - configMap:
          name: teastore-deployment
  loadGenerator:
    resources:
      - configMap:
          name: teastore-jmeter-deployment
  resourceTypes:
    - typeName: "Instances"
      patchers:
        - type: "ReplicaPatcher"
          resource: "teastore-auth-deployment.yaml"
        - type: "ReplicaPatcher"
          resource: "teastore-image-deployment.yaml"
        - type: "ReplicaPatcher"
          resource: "teastore-persistence-deployment.yaml"
        - type: "ReplicaPatcher"
          resource: "teastore-recommender-deployment.yaml"
        - type: "ReplicaPatcher"
          resource: "teastore-webui-deployment.yaml"
  loadTypes:
    - typeName: NumUsers
      patchers:
        - type: "EnvVarPatcher"
          resource: "jmeter.yaml"
          properties:
            container: jmeter
            variableName: NUM_USERS
  slos:
    - sloType: "generic"
      name: "uiLatency"
      prometheusUrl: "http://prometheus-operated:9090"
      offset: 0
      properties:
        externalSloUrl: "http://localhost:8082"
        promQLQuery: "histogram_quantile(0.95,sum(irate(osm_request_duration_ms_bucket{destination_name='teastore_webui'}[1m])) by (le, destination_name))"
        warmup: 600 # in seconds
        queryAggregation: max
        repetitionAggregation: median
        operator: lte
        threshold: 200
```
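Assuming the resource above is saved as `benchmark.yaml`, a server-side dry run lets the cluster validate it against the Benchmark CRD without actually creating anything:

```shell
# Validate the Benchmark resource without persisting it
kubectl apply --dry-run=server -f benchmark.yaml
```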
#### SUT and Load Generator
We simply use the ConfigMaps created previously as SUT and load generator.
#### Resource Types
We scale the number of replicas of the TeaStore's WebUI, Auth, Image, Persistence, and Recommender services equally to cope with increasing load.
Hence, our resource type *Instances* defines *ReplicaPatchers*, which modify the number of replicas of all these services.
See our [extended version of this benchmark](https://github.com/SoerenHenning/TeaStore/blob/add-theodolite-example/examples/theodolite/benchmark.yaml), which also supports two other resource types.
#### Load Types
We focus on increasing the load on the TeaStore by increasing the number of concurrent users. Each user is simulated by JMeter and performs a series of UI interactions in an endless loop. Our load type is called *NumUsers* and modifies the `NUM_USERS` environment variable of the JMeter Deployment with an *EnvVarPatcher*.
#### SLOs
The SLO states that the 95th percentile of the response time of the TeaStore's WebUI service must not exceed 200 ms.
If multiple repetitions are performed, the median of the response times is used. Measurements from the first 600 seconds are discarded as warmup.
## Run the Benchmark
To run the benchmark, we first have to define an Execution resource, which we then apply to our cluster along with the Benchmark resource.
### Create an Execution for our Benchmark
A simple Execution resource for our benchmark could look like this:
```yaml
apiVersion: theodolite.rocks/v1beta1
kind: execution
metadata:
  name: teastore-example
spec:
  benchmark: teastore
  load:
    loadType: "NumUsers"
    loadValues: [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
  resources:
    resourceType: "Instances"
    resourceValues: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
  slos:
    - name: "uiLatency"
  execution:
    strategy:
      name: "RestrictionSearch"
      restrictions:
        - "LowerBound"
      searchStrategy: "LinearSearch"
    duration: 1200 # in seconds
    repetitions: 1
  configOverrides: []
```
It is named `teastore-example` and defines that we want to execute the benchmark `teastore`.
We evaluate load intensities from 5 to 50 users and provision 1 to 20 instances per scaled service.
We apply the [*lower bound restriction search*](concepts/search-strategies#lower-bound-restriction-search), run each experiment for 1200 seconds and perform only one repetition.
### Start the Benchmark
To let Prometheus scrape OSM metrics, we need to create a PodMonitor.
Download the [PodMonitor from GitHub](https://github.com/SoerenHenning/TeaStore/blob/add-theodolite-example/examples/theodolite/pod-monitors.yaml) (or use the repository already cloned in the previous step) and apply it. (Of course, this could also be made part of the benchmark.)
```sh
kubectl apply -f pod-monitors.yaml
```
Next, we need to deploy our benchmark:
```sh
kubectl apply -f benchmark.yaml
```
To start the benchmark execution, we deploy the Execution resource defined previously:
```sh
kubectl apply -f execution-users.yaml
```
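The state of a running execution can be observed via the Execution resource itself; the operator logs are helpful for debugging. (The deployment name below assumes the default Helm release name `theodolite` and the operator deployment it creates; adjust it to your installation.)

```shell
# Shows the status of all executions (e.g., Pending, Running, Finished)
kubectl get executions
# Follow the operator logs; deployment name is an assumption based on the default release name
kubectl logs -f deployment/theodolite-operator
```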
As described in [Running Benchmarks](running-benchmarks), we now have to wait, observe the benchmark execution, and finally access the results.
## Further Reading
We published a short paper about this example:
* S. Henning, B. Wetzel, and W. Hasselbring. “[Cloud-Native Scalability Benchmarking with Theodolite Applied to the TeaStore Benchmark](https://oceanrep.geomar.de/id/eprint/57338/)”. In: *Symposium on Software Performance*. 2022.
You might also want to have a look at the corresponding slides presented at the [Symposium on Software Performance 2022](https://www.performance-symposium.org/fileadmin/user_upload/palladio-conference/2022/presentations/Henning-Cloud-Native-Scalability-Benchmarking-with-Theodolite.pdf).
apiVersion: v1
entries:
cp-helm-charts:
- apiVersion: v1
appVersion: "1.0"
created: "2023-04-14T16:35:13.695149306+02:00"
dependencies:
- condition: cp-kafka.enabled
name: cp-kafka
repository: file://./charts/cp-kafka
version: 0.1.0
- condition: cp-zookeeper.enabled
name: cp-zookeeper
repository: file://./charts/cp-zookeeper
version: 0.1.0
- condition: cp-schema-registry.enabled
name: cp-schema-registry
repository: file://./charts/cp-schema-registry
version: 0.1.0
- condition: cp-kafka-rest.enabled
name: cp-kafka-rest
repository: file://./charts/cp-kafka-rest
version: 0.1.0
- condition: cp-kafka-connect.enabled
name: cp-kafka-connect
repository: file://./charts/cp-kafka-connect
version: 0.1.0
- condition: cp-ksql-server.enabled
name: cp-ksql-server
repository: file://./charts/cp-ksql-server
version: 0.1.0
- condition: cp-control-center.enabled
name: cp-control-center
repository: file://./charts/cp-control-center
version: 0.1.0
description: A Helm chart for Confluent Platform Community Edition
digest: 45c1beba96b77f120f0d05e77be21b8d30431a9f2b63f05087defb54c5f3c60b
name: cp-helm-charts
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.6/charts/cp-helm-charts-0.6.0.tgz
version: 0.6.0
grafana:
- apiVersion: v2
appVersion: 8.2.5
created: "2023-04-14T16:35:13.696909309+02:00"
description: The leading tool for querying and visualizing time series and metrics.
digest: 56aec8d05f41792656f6a90a3e6ff1516b0e024a64fc2e39040128af9e3459c0
home: https://grafana.net
icon: https://raw.githubusercontent.com/grafana/grafana/master/public/img/logo_transparent_400x.png
kubeVersion: ^1.8.0-0
maintainers:
- email: zanhsieh@gmail.com
name: zanhsieh
- email: rluckie@cisco.com
name: rtluckie
- email: maor.friedman@redhat.com
name: maorfr
- email: miroslav.hadzhiev@gmail.com
name: Xtigyro
- email: mail@torstenwalter.de
name: torstenwalter
name: grafana
sources:
- https://github.com/grafana/grafana
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.6/charts/grafana-6.17.10.tgz
version: 6.17.10
kube-prometheus-stack:
- annotations:
artifacthub.io/links: |
- name: Chart Source
url: https://github.com/prometheus-community/helm-charts
- name: Upstream Project
url: https://github.com/prometheus-operator/kube-prometheus
artifacthub.io/operator: "true"
apiVersion: v2
appVersion: 0.60.1
created: "2023-04-14T16:35:13.720986306+02:00"
dependencies:
- condition: kubeStateMetrics.enabled
name: kube-state-metrics
repository: https://prometheus-community.github.io/helm-charts
version: 4.22.*
- condition: nodeExporter.enabled
name: prometheus-node-exporter
repository: https://prometheus-community.github.io/helm-charts
version: 4.4.*
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.43.*
description: kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards,
and Prometheus rules combined with documentation and scripts to provide easy
to operate end-to-end Kubernetes cluster monitoring with Prometheus using the
Prometheus Operator.
digest: 8360468fa9ec4eb2152f5b629b488571dc92be404682deb493207c5b2b552f07
home: https://github.com/prometheus-operator/kube-prometheus
icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
keywords:
- operator
- prometheus
- kube-prometheus
kubeVersion: '>=1.16.0-0'
maintainers:
- email: andrew@quadcorps.co.uk
name: andrewgkew
- email: gianrubio@gmail.com
name: gianrubio
- email: github.gkarthiks@gmail.com
name: gkarthiks
- email: kube-prometheus-stack@sisti.pt
name: GMartinez-Sisti
- email: scott@r6by.com
name: scottrigby
- email: miroslav.hadzhiev@gmail.com
name: Xtigyro
name: kube-prometheus-stack
sources:
- https://github.com/prometheus-community/helm-charts
- https://github.com/prometheus-operator/kube-prometheus
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.6/charts/kube-prometheus-stack-41.7.4.tgz
version: 41.7.4
strimzi-kafka-operator:
- apiVersion: v2
appVersion: 0.29.0
created: "2023-04-14T16:35:13.725738879+02:00"
description: 'Strimzi: Apache Kafka running on Kubernetes'
digest: 87bb22f4b674a91cea51b61edf7c0b3b92c706ac7427534c8f40278c8c712a59
home: https://strimzi.io/
icon: https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/main/documentation/logo/strimzi_logo.png
keywords:
- kafka
- queue
- stream
- event
- messaging
- datastore
- topic
maintainers:
- name: Frawless
- name: ppatierno
- name: samuel-hawker
- name: scholzj
- name: tombentley
- name: sknot-rh
name: strimzi-kafka-operator
sources:
- https://github.com/strimzi/strimzi-kafka-operator
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.6/charts/strimzi-kafka-operator-helm-3-chart-0.29.0.tgz
version: 0.29.0
theodolite:
- apiVersion: v2
appVersion: 0.9.0
created: "2023-07-19T09:58:16.207401357+02:00"
dependencies:
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.17.*
- condition: kube-prometheus-stack.enabled
name: kube-prometheus-stack
repository: https://prometheus-community.github.io/helm-charts
version: 41.7.*
- condition: strimzi.enabled
name: strimzi-kafka-operator
repository: https://strimzi.io/charts/
version: 0.29.*
description: Theodolite is a framework for benchmarking the horizontal and vertical
scalability of cloud-native applications.
digest: d23f73c4b9c838d45be659cd27e6003b16ae22da52706d4d7111709389ffc9c2
home: https://www.theodolite.rocks
icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
maintainers:
- email: soeren.henning@jku.at
name: Sören Henning
url: https://www.jku.at/lit-cyber-physical-systems-lab/ueber-uns/team/dr-ing-soeren-henning/
name: theodolite
sources:
- https://github.com/cau-se/theodolite
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.9.0/theodolite-0.9.0.tgz
version: 0.9.0
- apiVersion: v2
appVersion: 0.8.6
created: "2023-04-14T16:35:13.691461495+02:00"
dependencies:
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.17.*
- condition: kube-prometheus-stack.enabled
name: kube-prometheus-stack
repository: https://prometheus-community.github.io/helm-charts
version: 41.7.*
- condition: cp-helm-charts.enabled
name: cp-helm-charts
repository: https://soerenhenning.github.io/cp-helm-charts
version: 0.6.0
- condition: strimzi.enabled
name: strimzi-kafka-operator
repository: https://strimzi.io/charts/
version: 0.29.*
description: Theodolite is a framework for benchmarking the horizontal and vertical
scalability of cloud-native applications.
digest: cb50c8f901462b8592232ca3af4a0316c82b89d6c66b83b018b9fdff0f9620e0
home: https://www.theodolite.rocks
icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
maintainers:
- email: soeren.henning@email.uni-kiel.de
name: Sören Henning
url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
name: theodolite
sources:
- https://github.com/cau-se/theodolite
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.6/theodolite-0.8.6.tgz
version: 0.8.6
- apiVersion: v2
appVersion: 0.8.5
created: "2023-02-09T12:34:28.130334604+01:00"
dependencies:
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.17.*
- condition: kube-prometheus-stack.enabled
name: kube-prometheus-stack
repository: https://prometheus-community.github.io/helm-charts
version: 41.7.*
- condition: cp-helm-charts.enabled
name: cp-helm-charts
repository: https://soerenhenning.github.io/cp-helm-charts
version: 0.6.0
- condition: strimzi.enabled
name: strimzi-kafka-operator
repository: https://strimzi.io/charts/
version: 0.29.*
description: Theodolite is a framework for benchmarking the horizontal and vertical
scalability of cloud-native applications.
digest: 6b80c894c6db461a65262553d5dbe8880c4d22edebd150b9a97819a7e3355509
home: https://www.theodolite.rocks
icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
maintainers:
- email: soeren.henning@email.uni-kiel.de
name: Sören Henning
url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
name: theodolite
sources:
- https://github.com/cau-se/theodolite
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.5/theodolite-0.8.5.tgz
version: 0.8.5
- apiVersion: v2
appVersion: 0.8.4
created: "2023-02-01T14:02:42.124711907+01:00"
dependencies:
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.17.*
- condition: kube-prometheus-stack.enabled
name: kube-prometheus-stack
repository: https://prometheus-community.github.io/helm-charts
version: 41.7.*
- condition: cp-helm-charts.enabled
name: cp-helm-charts
repository: https://soerenhenning.github.io/cp-helm-charts
version: 0.6.0
- condition: strimzi.enabled
name: strimzi-kafka-operator
repository: https://strimzi.io/charts/
version: 0.29.*
description: Theodolite is a framework for benchmarking the horizontal and vertical
scalability of cloud-native applications.
digest: 3f2815329938fcb018186d6db251e87ba05243adbbb29582370db6499ed69bcb
home: https://www.theodolite.rocks
icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
maintainers:
- email: soeren.henning@email.uni-kiel.de
name: Sören Henning
url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
name: theodolite
sources:
- https://github.com/cau-se/theodolite
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.4/theodolite-0.8.4.tgz
version: 0.8.4
- apiVersion: v2
appVersion: 0.8.3
created: "2023-01-31T18:28:08.273346921+01:00"
dependencies:
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.17.*
- condition: kube-prometheus-stack.enabled
name: kube-prometheus-stack
repository: https://prometheus-community.github.io/helm-charts
version: 41.7.*
- condition: cp-helm-charts.enabled
name: cp-helm-charts
repository: https://soerenhenning.github.io/cp-helm-charts
version: 0.6.0
- condition: strimzi.enabled
name: strimzi-kafka-operator
repository: https://strimzi.io/charts/
version: 0.29.*
description: Theodolite is a framework for benchmarking the horizontal and vertical
scalability of cloud-native applications.
digest: f0b3ce50db9dec094993073cd8aebf548929e529d399710dda0235d1ea185546
home: https://www.theodolite.rocks
icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
maintainers:
- email: soeren.henning@email.uni-kiel.de
name: Sören Henning
url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
name: theodolite
sources:
- https://github.com/cau-se/theodolite
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.3/theodolite-0.8.3.tgz
version: 0.8.3
- apiVersion: v2
appVersion: 0.8.2
created: "2022-11-20T11:37:04.711009053+01:00"
dependencies:
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.17.*
- condition: kube-prometheus-stack.enabled
name: kube-prometheus-stack
repository: https://prometheus-community.github.io/helm-charts
version: 41.7.*
- condition: cp-helm-charts.enabled
name: cp-helm-charts
repository: https://soerenhenning.github.io/cp-helm-charts
version: 0.6.0
- condition: strimzi.enabled
name: strimzi-kafka-operator
repository: https://strimzi.io/charts/
version: 0.29.*
description: Theodolite is a framework for benchmarking the horizontal and vertical
scalability of cloud-native applications.
digest: b6fc354d08b661dd75beb4e54efd0bb65b488247dcb528fd0c5e365f8f011808
home: https://www.theodolite.rocks
icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
maintainers:
- email: soeren.henning@email.uni-kiel.de
name: Sören Henning
url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
name: theodolite
sources:
- https://github.com/cau-se/theodolite
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.2/theodolite-0.8.2.tgz
version: 0.8.2
- apiVersion: v2
appVersion: 0.8.1
created: "2022-11-16T09:45:09.130711943+01:00"
dependencies:
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.17.5
- condition: kube-prometheus-stack.enabled
name: kube-prometheus-stack
repository: https://prometheus-community.github.io/helm-charts
version: 20.0.1
- condition: cp-helm-charts.enabled
name: cp-helm-charts
repository: https://soerenhenning.github.io/cp-helm-charts
version: 0.6.0
- condition: strimzi.enabled
name: strimzi-kafka-operator
repository: https://strimzi.io/charts/
version: 0.29.0
description: Theodolite is a framework for benchmarking the horizontal and vertical
scalability of cloud-native applications.
digest: 02a1c6a5a8d0295fb9bf2d704cb04e0a17624b83b2a03cd59c1d61b74d8fe4ab
home: https://www.theodolite.rocks
icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
maintainers:
- email: soeren.henning@email.uni-kiel.de
name: Sören Henning
url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
name: theodolite
sources:
- https://github.com/cau-se/theodolite
type: application
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.8.1/theodolite-0.8.1.tgz
version: 0.8.1
- apiVersion: v2
appVersion: 0.8.0
created: "2022-07-18T17:48:21.205921939+02:00"
......@@ -387,4 +787,4 @@ entries:
urls:
- https://github.com/cau-se/theodolite/releases/download/v0.4.0/theodolite-0.4.0.tgz
version: 0.4.0
generated: "2023-07-19T09:58:16.182895416+02:00"
......@@ -15,9 +15,6 @@ helm repo update
helm install theodolite theodolite/theodolite
```
This installs Theodolite in operator mode. Operator mode is the easiest to use, but requires some permissions during installation. If those cannot be granted, Theodolite can also be installed in standalone mode.
## Installation Options
As usual, the installation via Helm can be configured by passing a values YAML file:
......@@ -38,6 +35,16 @@ To store the results of benchmark executions in a [PersistentVolume](https://kub
You can also use an existing PersistentVolumeClaim by setting `operator.resultsVolume.persistent.existingClaim`.
If persistence is not enabled, all results will be gone upon pod termination.
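A minimal sketch of enabling persistence at installation time (the value name `operator.resultsVolume.persistent.enabled` is an assumption derived from the `existingClaim` setting mentioned above; check the chart's values file):

```shell
helm install theodolite theodolite/theodolite \
  --set operator.resultsVolume.persistent.enabled=true
```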
### Exposing Grafana
By default, Theodolite exposes a Grafana instance as a NodePort service on port `31199`. This can be configured by setting `grafana.service.nodePort`.
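For example, to choose a different port at installation time using the setting described above:

```shell
helm install theodolite theodolite/theodolite --set grafana.service.nodePort=31200
```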
### Additional Kubernetes cluster metrics
As long as you have sufficient permissions on your cluster, you can integrate additional Kubernetes metrics into Prometheus. This involves enabling some exporters, additional Grafana dashboards, and additional permissions. We provide a [values file for enabling extended metrics](https://github.com/cau-se/theodolite/blob/main/helm/preconfigs/extended-metrics.yaml).
See the [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) chart for more details on configuring the individual exporters.
### Random scheduler
Installation of the random scheduler can be enabled via `randomScheduler.enabled`. Please note that the random scheduler is neither required in operator mode nor in standalone mode. However, it has to be installed if benchmark executions should use random scheduling.
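A minimal sketch of enabling the random scheduler at installation time via the setting named above:

```shell
helm install theodolite theodolite/theodolite --set randomScheduler.enabled=true
```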
......
......@@ -6,26 +6,41 @@ nav_order: 8
# Project Info
Theodolite is open-source research software, actively maintained at Kiel University's [Software Engineering Group](https://www.se.informatik.uni-kiel.de) and Johannes Kepler University Linz' [LIT CPS Lab](https://www.jku.at/en/lit-cyber-physical-systems-lab/).
## Getting Help
To get support with using Theodolite, feel free to directly contact [Sören Henning](https://www.jku.at/en/lit-cyber-physical-systems-lab/about-us/our-team/dr-ing-soeren-henning/).
You might also want to raise an issue on [GitHub](http://github.com/cau-se/theodolite).
## Project Management
Theodolite's internal development including issue boards, merge requests and extensive CI pipelines is tracked in our [internal GitLab](https://git.se.informatik.uni-kiel.de/she/theodolite).
While all internal development is publicly accessible, contributing requires an account to be set up.
To ease contribution, we provide a public mirror on GitHub, [cau-se/theodolite](http://github.com/cau-se/theodolite), where we are also happy to welcome issues and pull requests.
Releases are also published via GitHub. See the following table for an overview:
| Project management | Public GitHub | Internal GitLab |
|:---|:---|:---|
| Source code | [GitHub](https://github.com/cau-se/theodolite) | [GitLab](https://git.se.informatik.uni-kiel.de/she/theodolite) |
| Issue Tracking | [GitHub Issues](https://github.com/cau-se/theodolite/issues) | [GitLab Issues](https://git.se.informatik.uni-kiel.de/she/theodolite/-/issues) |
| Pull/Merge requests | [GitHub Pull requests](https://github.com/cau-se/theodolite/pulls) | [GitLab Merge requests](https://git.se.informatik.uni-kiel.de/she/theodolite/-/merge_requests) |
| Roadmap | | [GitLab Milestones](https://git.se.informatik.uni-kiel.de/she/theodolite/-/milestones) |
| CI/CD pipelines | | [GitLab CI/CD](https://git.se.informatik.uni-kiel.de/she/theodolite/-/pipelines) |
| Releases | [GitHub Releases](https://github.com/cau-se/theodolite/releases) | [GitLab Releases](https://git.se.informatik.uni-kiel.de/she/theodolite/-/releases) |
| Container images | [GitHub Packages](https://github.com/orgs/cau-se/packages?repo_name=theodolite) | |
## Contributors
* [Sören Henning](https://www.jku.at/en/lit-cyber-physical-systems-lab/about-us/our-team/dr-ing-soeren-henning/) (Maintainer)
* [Marcel Becker](https://www.linkedin.com/in/marcel-becker-11b39b246)
* [Jan Bensien](https://oceanrep.geomar.de/id/eprint/52342/)
* [Nico Biernat](https://github.com/NicoBiernat)
* [Lorenz Boguhn](https://github.com/lorenzboguhn)
* [Simon Ehrenstein](https://github.com/sehrenstein)
* [Willi Hasselbring](https://www.se.informatik.uni-kiel.de/en/team/prof.-dr.-wilhelm-willi-hasselbring)
* [Christopher Konkel](https://github.com/JustAnotherChristoph)
* [Luca Mertens](https://www.linkedin.com/in/luca-mertens-35a932201)
* [Tobias Pfandzelter](https://pfandzelter.com/)
* [Julia Rossow](https://www.linkedin.com/in/julia-rossow/)
* [Björn Vonheiden](https://github.com/bvonheid)
......
......@@ -8,10 +8,13 @@ nav_order: 9
Below you can find a list of publications that are directly related to Theodolite:
* S. Henning. “[Scalability Benchmarking of Cloud-Native Applications Applied to Event-Driven Microservices](https://doi.org/10.21941/kcss/2023/2)”. In: *Kiel Computer Science Series 2023/2*. 2023. Dissertation, Faculty of Engineering, Kiel University. DOI: [10.21941/kcss/2023/2](https://doi.org/10.21941/kcss/2023/2).
* S. Henning and W. Hasselbring. “[Benchmarking Scalability of Cloud-Native Applications](https://dl.gi.de/bitstream/handle/20.500.12116/40081/paper16.pdf)”. In: *Software Engineering*. 2023.
* S. Henning and W. Hasselbring. “[A Configurable Method for Benchmarking Scalability of Cloud-Native Applications](https://doi.org/10.1007/s10664-022-10162-1)”. In: *Empirical Software Engineering* 27. 2022. DOI: [10.1007/s10664-022-10162-1](https://doi.org/10.1007/s10664-022-10162-1).
* T. Pfandzelter, S. Henning, T. Schirmer, W. Hasselbring, and D. Bermbach. “[Streaming vs. Functions: A Cost Perspective on Cloud Event Processing](https://arxiv.org/pdf/2204.11509.pdf)”. In: *IEEE International Conference on Cloud Engineering*. 2022. DOI: [10.1109/IC2E55432.2022.00015](https://doi.org/10.1109/IC2E55432.2022.00015).
* S. Henning and W. Hasselbring. “[Demo Paper: Benchmarking Scalability of Cloud-Native Applications with Theodolite](https://oceanrep.geomar.de/id/eprint/57336/)”. In: *IEEE International Conference on Cloud Engineering*. 2022. DOI: [10.1109/IC2E55432.2022.00037](https://doi.org/10.1109/IC2E55432.2022.00037).
* S. Henning, B. Wetzel, and W. Hasselbring. “[Cloud-Native Scalability Benchmarking with Theodolite Applied to the TeaStore Benchmark](https://oceanrep.geomar.de/id/eprint/57338/)”. In: *Softwaretechnik-Trends* 43 (1) (Symposium on Software Performance). 2022.
* S. Henning, B. Wetzel, and W. Hasselbring. “[Reproducible Benchmarking of Cloud-Native Applications With the Kubernetes Operator Pattern](http://ceur-ws.org/Vol-3043/short5.pdf)”. In: *Symposium on Software Performance*. 2021.
* S. Henning and W. Hasselbring. “[How to Measure Scalability of Distributed Stream Processing Engines?](https://research.spec.org/icpe_proceedings/2021/companion/p85.pdf)” In: *Companion of the ACM/SPEC International Conference on Performance Engineering*. 2021. DOI: [10.1145/3447545.3451190](https://doi.org/10.1145/3447545.3451190).
* S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
* S. Henning and W. Hasselbring. “[Toward Efficient Scalability Benchmarking of Event-Driven Microservice Architectures at Large Scale](https://fb-swt.gi.de/fileadmin/FB/SWT/Softwaretechnik-Trends/Verzeichnis/Band_40_Heft_3/SSP2020_Henning.pdf)”. In: *Softwaretechnik-Trends* 40 (3) (Symposium on Software Performance). 2020.
All you need to get started is access to a Kubernetes cluster plus kubectl and Helm installed on your machine:

```sh
helm install theodolite theodolite/theodolite -f https://raw.githubusercontent.com/cau-se/theodolite/main/helm/preconfigs/minimal.yaml
```
After installation, it may take some time until all components are ready. You can check the status of the installation by running:
```sh
kubectl get pods
```
In particular, the Kafka Schema Registry may restart a couple of times.
1. Get the Theodolite examples from the [Theodolite repository](https://github.com/cau-se/theodolite) and `cd` into its example directory.
---
title: Benchmark UC1
parent: Streaming Benchmarks
has_children: false
nav_order: 1
---
# Benchmark UC1: Database Storage
A simple, but common use case in event-driven architectures is that events or messages should be stored permanently, for example, in a NoSQL database.
## Dataflow Architecture
![Theodolite Benchmark UC1: Database Storage](../../assets/images/arch-uc1.svg){: .d-block .mx-auto }
The first step is to read data records from a messaging system. Then, these records are converted into another data format in order to match the often different formats required by the database. Finally, the converted records are written to an external database.
By default, this benchmark does not use a real database, but instead writes all incoming records to standard output. Otherwise, due to its simple, stateless stream processing topology, the benchmark would primarily test the database’s write capabilities. However, the implementations for some stream processing engines can also be configured to use a real database.
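The three steps can be sketched in plain Java, without a stream processing engine. The record shape loosely mirrors the benchmark's power measurement records, but the type and field names here are illustrative, and reading from an in-memory list stands in for consuming from a messaging system:

```java
import java.util.List;

public class Uc1Sketch {

    // Illustrative stand-in for the benchmark's measurement records.
    record ActivePowerRecord(String identifier, long timestamp, double valueInW) {}

    // Step 2: convert a record into the (database-specific) target format,
    // here a simple JSON string.
    static String toJson(ActivePowerRecord r) {
        return "{\"identifier\": \"" + r.identifier()
                + "\", \"timestamp\": " + r.timestamp()
                + ", \"valueInW\": " + r.valueInW() + "}";
    }

    public static void main(String[] args) {
        // Step 1: read records (here from an in-memory list instead of a messaging system).
        List<ActivePowerRecord> input = List.of(
                new ActivePowerRecord("sensor-1", 1000L, 42.0),
                new ActivePowerRecord("sensor-2", 1000L, 13.5));
        // Step 3: write to the sink, i.e., System.out in the benchmark's default setup.
        input.stream().map(Uc1Sketch::toJson).forEach(System.out::println);
    }
}
```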
## Further Reading
S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
---
title: Benchmark UC2
parent: Streaming Benchmarks
has_children: false
nav_order: 2
---
# Benchmark UC2: Downsampling
Another common use case for stream processing architectures is reducing the amount of events, messages, or measurements by aggregating multiple records within consecutive, non-overlapping time windows. Typical aggregations compute the average, minimum, or maximum of measurements within a time window or count the occurrences of identical events. Such reduced amounts of data are required, for example, to save computing resources or to provide a better user experience (e.g., for data visualizations).
When using aggregation windows of fixed size that succeed each other without gaps (called [tumbling windows](https://kafka.apache.org/30/documentation/streams/developer-guide/dsl-api.html#tumbling-time-windows) in many stream processing engines), the (potentially varying) message frequency is reduced to a constant value.
This is also referred to as downsampling. Downsampling allows for applying many machine learning methods that require data of a fixed frequency.
## Dataflow Architecture
![Theodolite Benchmark UC2: Downsampling](../../assets/images/arch-uc2.svg){: .d-block .mx-auto }
The dataflow architecture first reads measurement data from an input stream and then assigns each measurement to a time window of fixed, but statically configurable size. Afterwards, an aggregation operator computes the summary statistics sum, count, minimum, maximum, average and population variance for a time window. Finally, the aggregation result containing all summary statistics is written to an output stream.
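The window assignment and aggregation can be sketched in plain Java. This is not a stream processing engine: names are illustrative and the windows are computed in one batch pass rather than as a continuously updated stream aggregation:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class Uc2Sketch {

    // Illustrative stand-ins for the benchmark's data types.
    record Measurement(long timestampMs, double valueInW) {}
    record Stats(long count, double sum, double min, double max, double mean, double populationVariance) {}

    // Assign each measurement to a tumbling window of fixed size and compute
    // the summary statistics per window.
    static Map<Long, Stats> downsample(List<Measurement> input, long windowSizeMs) {
        Map<Long, List<Measurement>> windows = input.stream().collect(
                Collectors.groupingBy(m -> m.timestampMs() / windowSizeMs, TreeMap::new, Collectors.toList()));
        Map<Long, Stats> result = new TreeMap<>();
        windows.forEach((window, ms) -> {
            long count = ms.size();
            double sum = ms.stream().mapToDouble(Measurement::valueInW).sum();
            double min = ms.stream().mapToDouble(Measurement::valueInW).min().orElseThrow();
            double max = ms.stream().mapToDouble(Measurement::valueInW).max().orElseThrow();
            double mean = sum / count;
            double variance = ms.stream()
                    .mapToDouble(m -> (m.valueInW() - mean) * (m.valueInW() - mean)).sum() / count;
            result.put(window, new Stats(count, sum, min, max, mean, variance));
        });
        return result;
    }

    public static void main(String[] args) {
        List<Measurement> input = List.of(
                new Measurement(0, 10.0), new Measurement(500, 20.0), // window 0
                new Measurement(1000, 30.0));                         // window 1
        downsample(input, 1000).forEach((w, s) -> System.out.println("window " + w + ": " + s));
    }
}
```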
## Further Reading
S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
---
title: Benchmark UC3
parent: Streaming Benchmarks
has_children: false
nav_order: 3
---
# Benchmark UC3: Time Attribute-Based Aggregation
A second type of temporal aggregation is aggregating messages that have the same time attribute. Such a time attribute is, for example, the hour of day, day of week, or day in the year. This type of aggregation can be used to compute, for example, an average course over the day, the week, or the year. It makes it possible to demonstrate or discover seasonal patterns in the data.
## Dataflow Architecture
![Theodolite Benchmark UC3: Time Attribute-Based Aggregation](../../assets/images/arch-uc3.svg){: .d-block .mx-auto }
The first step is to read measurement data from the input stream. Then, a new key is set for each message, which consists of the original key (i.e., the identifier of a sensor) and the selected time attribute (e.g., day of week) extracted from the record’s timestamp. In the next step, the message is duplicated for each sliding window it is contained in. Then, all measurements of the same sensor and the same time attribute are aggregated for each sliding time window by computing the summary statistics sum, count, minimum, maximum, average and population variance. The aggregation results per identifier, time attribute, and window are written to an output stream.
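The key derivation and the aggregation per derived key can be sketched in plain Java. This is a simplification: names are illustrative, and the sliding time windows of the real dataflow are omitted, so each key is aggregated over all of its measurements:

```java
import java.time.DayOfWeek;
import java.time.Instant;
import java.time.ZoneOffset;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class Uc3Sketch {

    // Illustrative stand-in for the benchmark's measurement records.
    record Measurement(String identifier, long timestampMs, double valueInW) {}

    // Derive the new key from the original key (the sensor identifier) and the
    // time attribute (here: day of week) extracted from the record's timestamp.
    static String key(Measurement m) {
        DayOfWeek day = Instant.ofEpochMilli(m.timestampMs()).atZone(ZoneOffset.UTC).getDayOfWeek();
        return m.identifier() + ":" + day;
    }

    // Aggregate all measurements per derived key (simplified: the real
    // benchmark additionally scopes this aggregation to sliding time windows
    // and computes the full set of summary statistics, not only the average).
    static Map<String, Double> averagePerKey(List<Measurement> input) {
        return input.stream().collect(Collectors.groupingBy(
                Uc3Sketch::key, TreeMap::new, Collectors.averagingDouble(Measurement::valueInW)));
    }

    public static void main(String[] args) {
        List<Measurement> input = List.of(
                new Measurement("sensor-1", 0L, 10.0),            // 1970-01-01, a Thursday
                new Measurement("sensor-1", 3_600_000L, 20.0),    // same Thursday, one hour later
                new Measurement("sensor-1", 345_600_000L, 50.0)); // the following Monday
        averagePerKey(input).forEach((k, avg) -> System.out.println(k + " -> " + avg));
    }
}
```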
## Further Reading
S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
---
title: Benchmark UC4
parent: Streaming Benchmarks
has_children: false
nav_order: 4
---
# Benchmark UC4: Hierarchical Aggregation
For analyzing sensor data, often not only the individual measurements of sensors are of interest, but also aggregated data for
groups of sensors. When monitoring energy consumption in industrial facilities, for example, comparing the total consumption
of machine types often provides better insights than comparing the consumption of all individual machines. Additionally, it may
be necessary to combine groups further into larger groups and adjust these group hierarchies at runtime.
## Dataflow Architecture
![Theodolite Benchmark UC4: Hierarchical Aggregation](../../assets/images/arch-uc4.svg){: .d-block .mx-auto }
The dataflow architecture requires two input data streams: a stream of sensor measurements and a stream tracking changes to the hierarchies of sensor groups. In the consecutive steps, both streams are joined, measurements are duplicated for each relevant group, assigned to time windows, and the measurements for all sensors in a group per window are aggregated. Finally, the aggregation results are exposed via a new data stream. Additionally, the output stream is fed back as an input stream in order to compute aggregations for groups containing subgroups. To also support unknown record frequencies, this dataflow architecture can be configured to use sliding windows instead of tumbling windows (see [further reading](#further-reading)).
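The core idea, namely that every measurement contributes to its own group and, transitively, to all parent groups, can be sketched in plain Java. This is heavily simplified: the real benchmark consumes the hierarchy from a second input stream, duplicates measurements per group, and aggregates within time windows, while this sketch assumes a static hierarchy and a single pass:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class Uc4Sketch {

    // Aggregate sensor values along a hierarchy: `parent` maps each sensor or
    // group to its parent group (top-level groups are absent from the map).
    // Every value contributes to all groups above it, mirroring the feedback
    // loop that lets the benchmark aggregate groups of groups.
    static Map<String, Double> aggregate(Map<String, Double> sensorValues, Map<String, String> parent) {
        Map<String, Double> totals = new TreeMap<>();
        sensorValues.forEach((sensor, value) -> {
            String group = parent.get(sensor);
            while (group != null) {
                totals.merge(group, value, Double::sum);
                group = parent.get(group);
            }
        });
        return totals;
    }

    public static void main(String[] args) {
        Map<String, String> parent = new HashMap<>();
        parent.put("machine-1", "milling");
        parent.put("machine-2", "milling");
        parent.put("machine-3", "welding");
        parent.put("milling", "factory");
        parent.put("welding", "factory");
        Map<String, Double> values = Map.of("machine-1", 10.0, "machine-2", 20.0, "machine-3", 5.0);
        System.out.println(aggregate(values, parent));
        // prints {factory=35.0, milling=30.0, welding=5.0}
    }
}
```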
## Further Reading
* S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
* S. Henning and W. Hasselbring. “[Scalable and reliable multi-dimensional sensor data aggregation in data-streaming architectures](https://doi.org/10.1007/s41688-020-00041-3)”. In: *Data-Enabled Discovery and Applications* 4.1. 2020. DOI: [10.1007/s41688-020-00041-3](https://doi.org/10.1007/s41688-020-00041-3).
---
title: Streaming Benchmarks
has_children: true
nav_order: 7
---
# Theodolite's Stream Processing Benchmarks
Theodolite comes with 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding [load generator](load-generator) is provided. Currently, Theodolite provides benchmark implementations for Apache Kafka Streams, Apache Flink, Hazelcast Jet and Apache Beam (with Samza and Flink).
Theodolite's benchmarks (labeled UC1--UC4) represent some sort of event-driven microservice performing Industrial Internet of Things data analytics. Specifically, they are derived from a microservice-based research software for analyzing industrial power consumption data streams (the [Titan Control Center](https://github.com/cau-se/titan-ccp)).
| Stream processing engine | [UC1](benchmark-uc1) | [UC2](benchmark-uc2) | [UC3](benchmark-uc3) | [UC4](benchmark-uc4) |
|:--------------------------|:---:|:---:|:---:|:---:|
| Apache Kafka Streams | ✓ | ✓ | ✓ | ✓ |
| Apache Flink | ✓ | ✓ | ✓ | ✓ |
| Hazelcast Jet | ✓ | ✓ | ✓ | ✓ |
| Apache Beam (Samza/Flink) | ✓ | ✓ | ✓ | ✓ |
## Installation
When [installing Theodolite](../installation) with Helm and the default configuration, also our stream processing benchmarks are automatically installed.
This can be verified by running `kubectl get benchmarks`, which should yield something like:
```
NAME               AGE     STATUS
uc1-beam-flink     2d20h   Ready
uc1-beam-samza     2d20h   Ready
uc1-flink          2d20h   Ready
uc1-hazelcastjet   2d16h   Ready
uc1-kstreams       2d20h   Ready
uc2-beam-flink     2d20h   Ready
uc2-beam-samza     2d20h   Ready
uc2-flink          2d20h   Ready
uc2-hazelcastjet   2d16h   Ready
uc2-kstreams       2d20h   Ready
uc3-beam-flink     2d20h   Ready
uc3-beam-samza     2d20h   Ready
uc3-flink          2d20h   Ready
uc3-hazelcastjet   2d16h   Ready
uc3-kstreams       2d20h   Ready
uc4-beam-flink     2d20h   Ready
uc4-beam-samza     2d20h   Ready
uc4-flink          2d20h   Ready
uc4-hazelcastjet   2d16h   Ready
uc4-kstreams       2d20h   Ready
```
Alternatively, all benchmarks can also be found at [GitHub](https://github.com/cau-se/theodolite/tree/main/theodolite-benchmarks/definitions) and installed manually with `kubectl apply -f <benchmark-yaml-file>`. Additionally, you would need to package the benchmarks' Kubernetes resources into a ConfigMap by running:
```sh
kubectl create configmap <configmap-name-required-by-benchmark> --from-file <directory-with-benchmark-resources>
```
See the [install-configmaps.sh](https://github.com/cau-se/theodolite/blob/main/theodolite-benchmarks/definitions/install-configmaps.sh) script for examples.
## Running Benchmarks
To run a benchmark, you need to create and apply an `Execution` YAML file as described in the [running benchmarks documentation](../running-benchmarks).
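For orientation, a minimal `Execution` for the `uc1-kstreams` benchmark might look roughly as follows. This is a sketch only: the concrete values are illustrative and some field names may differ between Theodolite versions, so consult the running benchmarks documentation for the exact schema:

```yaml
apiVersion: theodolite.rocks/v1beta1
kind: execution
metadata:
  name: example-execution
spec:
  benchmark: "uc1-kstreams"
  load:
    loadType: "NumSensors"        # load dimension: number of simulated sensors
    loadValues: [25000, 50000, 100000]
  resources:
    resourceType: "Instances"     # resource dimension: number of SUT instances
    resourceValues: [1, 2, 4]
  execution:
    strategy: "LinearSearch"
    duration: 300                 # seconds per experiment
    repetitions: 1
    restrictions: ["LowerBound"]
  configOverrides: []
```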
Some preliminary results of our benchmarks can be found in our publication:
* S. Henning and W. Hasselbring. “[Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures](https://arxiv.org/abs/2009.00304)”. In: *Big Data Research* 25. 2021. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209).
## Control the Number of Load Generator Instances
Depending on the load to be generated, the Theodolite benchmarks create multiple load generator instances.
By default, a single instance will generate up to 150&thinsp;000 messages per second.
If higher loads are to be generated, accordingly more instances are deployed.
However, the actual load that can be generated by a single load generator instance depends on the cluster configuration and might be lower.
To change the maximum number of messages per instance, run the following commands.
Set the `MAX_RECORDS_PER_INSTANCE` variable to the number of messages a single instance can generate in your cluster (use our Grafana dashboard to figure out that value).
```sh
export MAX_RECORDS_PER_INSTANCE=150000 # Change to your desired value
kubectl patch benchmarks uc1-beam-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc1-beam-samza --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc1-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc1-hazelcastjet --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc1-kstreams --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc2-beam-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc2-beam-samza --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc2-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc2-hazelcastjet --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc2-kstreams --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc3-beam-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc3-beam-samza --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc3-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc3-hazelcastjet --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc3-kstreams --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc4-beam-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc4-beam-samza --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc4-flink --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc4-hazelcastjet --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
kubectl patch benchmarks uc4-kstreams --type json --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
```
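Since the patch is identical for all 20 benchmarks, the commands above can also be generated with a small loop. This sketch only prints the commands; remove the `echo` to actually apply the patches:

```sh
export MAX_RECORDS_PER_INSTANCE=150000 # Change to your desired value
for uc in uc1 uc2 uc3 uc4; do
  for impl in beam-flink beam-samza flink hazelcastjet kstreams; do
    echo kubectl patch benchmarks "$uc-$impl" --type json \
      --patch "[{op: replace, path: /spec/loadTypes/0/patchers/1/properties/loadGenMaxRecords, value: $MAX_RECORDS_PER_INSTANCE}]"
  done
done
```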
---
title: Load Generators
parent: Streaming Benchmarks
has_children: false
nav_order: 5
---
# Load Generator Framework
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
labels:
grafana_datasource: "1"
data:
datasource.yaml: |-
# config file version
apiVersion: 1
datasources:
# <string, required> name of the datasource. Required
- name: Prometheus
# <string, required> datasource type. Required
type: prometheus
# <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
access: proxy
# <bool> mark as default datasource. Max one per org
isDefault: true
# <int> org id. will default to orgId 1 if not specified
orgId: 1
# <string> url
url: http://prometheus-operated:9090 #http://localhost:9090
# <map> fields that will be converted to json and stored in json_data
jsonData:
timeInterval: "15s"
version: 1
# <bool> allow users to edit datasources from the UI.
editable: true
image:
repository: grafana/grafana
tag: 6.7.3
pullPolicy: IfNotPresent
# Administrator credentials when not using an existing secret (see below)
adminUser: admin
adminPassword: admin
## Sidecars that collect the configmaps with the specified label and store the included files into the respective folders
## Requires at least Grafana 5 to work and can't be used together with parameters dashboardProviders, datasources and dashboards
sidecar:
image:
repository: "kiwigrid/k8s-sidecar"
tag: "1.1.0"
imagePullPolicy: IfNotPresent
dashboards:
enabled: true
SCProvider: true
# label that the configmaps with dashboards are marked with
label: grafana_dashboard
# folder in the pod that should hold the collected dashboards (unless `defaultFolderName` is set)
folder: /tmp/dashboards
# The default folder name, it will create a subfolder under the `folder` and put dashboards in there instead
defaultFolderName: null
# If specified, the sidecar will search for dashboard config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
searchNamespace: null
# provider configuration that lets grafana manage the dashboards
provider:
# name of the provider, should be unique
name: sidecarProvider
# orgid as configured in grafana
orgid: 1
# folder in which the dashboards should be imported in grafana
folder: ''
# type of the provider
type: file
# disableDelete to activate a import-only behaviour
disableDelete: false
# allow updating provisioned dashboards from the UI
allowUiUpdates: true
datasources:
enabled: true
# label that the configmaps with datasources are marked with
label: grafana_datasource
# If specified, the sidecar will search for datasource config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
searchNamespace: null
service:
nodePort: 31199
type: NodePort
helm install kafka-lag-exporter https://github.com/lightbend/kafka-lag-exporter/releases/download/v0.6.0/kafka-lag-exporter-0.6.0.tgz \
--set clusters\[0\].name=my-confluent-cp-kafka \
--set clusters\[0\].bootstrapBrokers=my-confluent-cp-kafka:9092 \
--set pollIntervalSeconds=15 #5
# Helm could also create ServiceMonitor
image:
pullPolicy: IfNotPresent
clusters:
- name: "my-confluent-cp-kafka"
bootstrapBrokers: "my-confluent-cp-kafka:9092"
## The interval between refreshing metrics
pollIntervalSeconds: 15
prometheus:
serviceMonitor:
enabled: true
interval: "5s"
additionalLabels:
appScope: titan-ccp
# service monitor label selectors: https://github.com/helm/charts/blob/f5a751f174263971fafd21eee4e35416d6612a3d/stable/prometheus-operator/templates/prometheus/prometheus.yaml#L74
# additionalLabels:
# prometheus: k8s