diff --git a/analysis/README.md b/analysis/README.md index 3c96cf0b6e67a60ebbb4c610ca69fcbcb27876a0..8d37f01c011e74bf258e2d411bc72f32f0ddcfdc 100644 --- a/analysis/README.md +++ b/analysis/README.md @@ -9,7 +9,7 @@ benchmark execution results and plotting. The following notebooks are provided: For legacy reasons, we also provide the following notebooks, which, however, are not documented: * [scalability-graph.ipynb](scalability-graph.ipynb): Creates a scalability graph for a certain benchmark execution. -* [scalability-graph-final.ipynb](scalability-graph-final.ipynb): Combines the scalability graphs of multiple benchmarks executions (e.g. for comparing different configuration). +* [scalability-graph-plotter.ipynb](scalability-graph-plotter.ipynb): Combines the scalability graphs of multiple benchmark executions (e.g. for comparing different configurations). * [lag-trend-graph.ipynb](lag-trend-graph.ipynb): Visualizes the consumer lag evaluation over time along with the computed trend. ## Usage diff --git a/analysis/demand-metric-plot.ipynb b/analysis/demand-metric-plot.ipynb index 95f371510bbcc8af785739c50bce42e969ea2b80..985d1fc91caec847f1795234903d1cbb34e3ddba 100644 --- a/analysis/demand-metric-plot.ipynb +++ b/analysis/demand-metric-plot.ipynb @@ -34,7 +34,7 @@ }, { "source": [ - "We need to specify the directory, where the demand CSV files can be found, and a dictionary that maps a system description (e.g. its name) to the corresponding CSV file (prefix). " + "We need to specify the directory where the demand CSV files can be found, and a dictionary that maps a system description (e.g. its name) to the corresponding CSV file (prefix). To use Unicode narrow non-breaking spaces in the description, format it as `u\"1000\\u202FmCPU\"`." ], "cell_type": "markdown", "metadata": {} diff --git a/analysis/demand-metric.ipynb b/analysis/demand-metric.ipynb index 525bde211afcabeecf52f1e88f3c91c02a77a152..bcea129b7cb07465fa99f32b6f8b2b6115e8a0aa 100644 --- a/analysis/demand-metric.ipynb +++ b/analysis/demand-metric.ipynb @@ -4,7 +4,7 @@ "source": [ "# Theodolite Analysis - Demand Metric\n", "\n", - "This notebook allows applies Theodolite's *demand* metric to describe scalability of a SUT based on Theodolite measurement data.\n", + "This notebook applies Theodolite's *demand* metric to describe scalability of a SUT based on Theodolite measurement data.\n", "\n", "Theodolite's *demand* metric is a function, mapping load intensities to the minimum required resources (e.g., instances) that are required to process this load. With this notebook, the *demand* metric function is approximated by a map of tested load intensities to their minimum required resources.\n", "\n", diff --git a/analysis/scalability-graph-finish.ipynb b/analysis/scalability-graph-plotter.ipynb similarity index 100% rename from analysis/scalability-graph-finish.ipynb rename to analysis/scalability-graph-plotter.ipynb diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 0000000000000000000000000000000000000000..4fd13bdfc157efe8b3491695bb83972f96a82c5d --- /dev/null +++ b/docs/README.md @@ -0,0 +1,25 @@ +--- +title: Theodolite +nav_order: 1 +permalink: / +--- + +# Theodolite + +> A theodolite is a precision optical instrument for measuring angles between designated visible points in the horizontal and vertical planes. -- <cite>[Wikipedia](https://en.wikipedia.org/wiki/Theodolite)</cite> + +Theodolite is a framework for benchmarking the horizontal and vertical scalability of stream processing engines.
It consists of three modules: + +## Theodolite Benchmarks + +Theodolite contains 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding workload generator is provided. Currently, this repository provides benchmark implementations for Kafka Streams. + + +## Theodolite Execution Framework + +Theodolite aims to benchmark the scalability of stream processing engines for real use cases. Microservices that apply stream processing techniques are usually deployed in elastic cloud environments. Hence, Theodolite's cloud-native benchmarking framework is deployed as a set of components in a cloud environment, orchestrated by Kubernetes. More information on how to execute scalability benchmarks can be found in the [Theodolite execution framework](execution). + + +## Theodolite Analysis Tools + +Theodolite's benchmarking method creates a *scalability graph* that allows drawing conclusions about the scalability of a stream processing engine or its deployment. A scalability graph shows how resource demand evolves with an increasing workload. Theodolite provides Jupyter notebooks for creating such scalability graphs based on benchmarking results from the execution framework. More information can be found in the [Theodolite analysis tools](analysis). diff --git a/docs/_config.yml b/docs/_config.yml new file mode 100644 index 0000000000000000000000000000000000000000..b0f0a13c22083b21a7c90ceaed44b846ffe55550 --- /dev/null +++ b/docs/_config.yml @@ -0,0 +1,6 @@ +title: "Theodolite" +remote_theme: pmarsceill/just-the-docs +#color_scheme: "dark" +aux_links: + "Theodolite on GitHub": + - "//github.com/cau-se/theodolite" \ No newline at end of file diff --git a/docs/release-process.md b/docs/release-process.md index f267d611fb08931dd766c2f9952655f9dae62e32..c53ea4423eb1dbf521d13286448f33a9613b71ef 100644 --- a/docs/release-process.md +++ b/docs/release-process.md @@ -1,3 +1,9 @@ +--- +title: Release Process +has_children: false +nav_order: 2 +--- + # Release Process We assume that we are creating the release `v0.1.1`. Please make sure to adjust diff --git a/execution/README.md b/execution/README.md index 358ce270400d1e4e4947a8ef736feac74c314163..2ad12f24c1d252194c4e58ec8994548496a09d8c 100644 --- a/execution/README.md +++ b/execution/README.md @@ -121,7 +121,9 @@ can be installed via Helm.
We also provide a [default configuration](infrastruct To install it: ```sh -helm install kafka-lag-exporter https://github.com/lightbend/kafka-lag-exporter/releases/download/v0.6.3/kafka-lag-exporter-0.6.3.tgz -f infrastructure/kafka-lag-exporter/values.yaml +helm repo add kafka-lag-exporter https://lightbend.github.io/kafka-lag-exporter/repo/ +helm repo update +helm install kafka-lag-exporter kafka-lag-exporter/kafka-lag-exporter -f infrastructure/kafka-lag-exporter/values.yaml ``` ### Installing Theodolite diff --git a/execution/infrastructure/kafka/values.yaml b/execution/infrastructure/kafka/values.yaml index e65a5fc567d39c7389479d406fa9e6d7156b0f0a..9c708ca054bc017874522cebb4ad2157bdce85a7 100644 --- a/execution/infrastructure/kafka/values.yaml +++ b/execution/infrastructure/kafka/values.yaml @@ -55,7 +55,8 @@ cp-kafka: # "min.insync.replicas": 2 "auto.create.topics.enable": false "log.retention.ms": "10000" # 10s - #"log.retention.ms": "86400000" # 24h + # "log.retention.ms": "86400000" # 24h + # "group.initial.rebalance.delay.ms": "30000" # 30s "metrics.sample.window.ms": "5000" #5s ## ------------------------------------------------------ diff --git a/execution/infrastructure/prometheus/helm-values.yaml b/execution/infrastructure/prometheus/helm-values.yaml index bf503fe483e918ac7a6a7dc8722ea06cfd3aef6c..a356a455a14238c1aeb97cbe022a69715a5cbd97 100644 --- a/execution/infrastructure/prometheus/helm-values.yaml +++ b/execution/infrastructure/prometheus/helm-values.yaml @@ -36,6 +36,9 @@ nodeExporter: prometheusOperator: enabled: true + namespaces: + releaseNamespace: true + additional: [] prometheus: enabled: false diff --git a/execution/run_uc.py b/execution/run_uc.py index a0fcdbb6d57e5dc67d18e69b7d07fcdbfa809307..9bbb2876447438c1c3ac676091b11f6baa990622 100644 --- a/execution/run_uc.py +++ b/execution/run_uc.py @@ -94,11 +94,11 @@ def load_yaml_files(): :return: wg, app_svc, app_svc_monitor ,app_jmx, app_deploy """ print('Load kubernetes yaml files') - wg = load_yaml('uc-workload-generator/base/workloadGenerator.yaml') - app_svc = load_yaml('uc-application/base/aggregation-service.yaml') - app_svc_monitor = load_yaml('uc-application/base/service-monitor.yaml') - app_jmx = load_yaml('uc-application/base/jmx-configmap.yaml') - app_deploy = load_yaml('uc-application/base/aggregation-deployment.yaml') + wg = load_yaml('uc-workload-generator/workloadGenerator.yaml') + app_svc = load_yaml('uc-application/aggregation-service.yaml') + app_svc_monitor = load_yaml('uc-application/service-monitor.yaml') + app_jmx = load_yaml('uc-application/jmx-configmap.yaml') + app_deploy = load_yaml('uc-application/aggregation-deployment.yaml') print('Kubernetes yaml files loaded') return wg, app_svc, app_svc_monitor, app_jmx, app_deploy diff --git a/execution/run_uc1.sh b/execution/run_uc1.sh deleted file mode 100755 index 02c46d8832fc800c57453570b14a6bf02681326a..0000000000000000000000000000000000000000 --- a/execution/run_uc1.sh +++ /dev/null @@ -1,124 +0,0 @@ -#!/bin/bash - -EXP_ID=$1 -DIM_VALUE=$2 -INSTANCES=$3 -PARTITIONS=${4:-40} -CPU_LIMIT=${5:-1000m} -MEMORY_LIMIT=${6:-4Gi} -KAFKA_STREAMS_COMMIT_INTERVAL_MS=${7:-100} -EXECUTION_MINUTES=${8:-5} - -echo "EXP_ID: $EXP_ID" -echo "DIM_VALUE: $DIM_VALUE" -echo "INSTANCES: $INSTANCES" -echo "PARTITIONS: $PARTITIONS" -echo "CPU_LIMIT: $CPU_LIMIT" -echo "MEMORY_LIMIT: $MEMORY_LIMIT" -echo "KAFKA_STREAMS_COMMIT_INTERVAL_MS: $KAFKA_STREAMS_COMMIT_INTERVAL_MS" -echo "EXECUTION_MINUTES: $EXECUTION_MINUTES" - -# Create Topics -#PARTITIONS=40 -#kubectl run 
temp-kafka --rm --attach --restart=Never --image=solsson/kafka --command -- bash -c "./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic input --partitions $PARTITIONS --replication-factor 1; ./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic configuration --partitions 1 --replication-factor 1; ./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic output --partitions $PARTITIONS --replication-factor 1" -PARTITIONS=$PARTITIONS -kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic input --partitions $PARTITIONS --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic configuration --partitions 1 --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic output --partitions $PARTITIONS --replication-factor 1" - -# Start workload generator -NUM_SENSORS=$DIM_VALUE -WL_MAX_RECORDS=150000 -WL_INSTANCES=$(((NUM_SENSORS + (WL_MAX_RECORDS -1 ))/ WL_MAX_RECORDS)) - -cat <<EOF >uc-workload-generator/overlay/uc1-workload-generator/set_paramters.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-load-generator -spec: - replicas: $WL_INSTANCES - template: - spec: - containers: - - name: workload-generator - env: - - name: NUM_SENSORS - value: "$NUM_SENSORS" - - name: INSTANCES - value: "$WL_INSTANCES" -EOF -kubectl apply -k uc-workload-generator/overlay/uc1-workload-generator - -# Start application -REPLICAS=$INSTANCES -cat <<EOF >uc-application/overlay/uc1-application/set_paramters.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-aggregation -spec: - replicas: $REPLICAS - template: - spec: - containers: - - name: uc-application - env: - - name: COMMIT_INTERVAL_MS - value: "$KAFKA_STREAMS_COMMIT_INTERVAL_MS" - resources: - limits: - memory: $MEMORY_LIMIT - cpu: $CPU_LIMIT -EOF -kubectl apply -k uc-application/overlay/uc1-application - -# Execute for certain time -sleep $(($EXECUTION_MINUTES * 60)) - -# Run eval script -source ../.venv/bin/activate -python lag_analysis.py $EXP_ID uc1 $DIM_VALUE $INSTANCES $EXECUTION_MINUTES -deactivate - -# Stop workload generator and app -kubectl delete -k uc-workload-generator/overlay/uc1-workload-generator -kubectl delete -k uc-application/overlay/uc1-application - - -# Delete topics instead of Kafka -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input,output,configuration,titan-.*'" -# kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic '.*' -#sleep 30s # TODO check -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n '/^titan-.*/p;/^input$/p;/^output$/p;/^configuration$/p' -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n '/^titan-.*/p;/^input$/p;/^output$/p;/^configuration$/p' | wc -l -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" - -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input,output,configuration,titan-.*'" -echo "Finished execution, print topics:" -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' -while test $(kubectl exec 
kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(theodolite-.*|input|output|configuration)( - marked for deletion)?$/p' | wc -l) -gt 0 -do - kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input|output|configuration|theodolite-.*' --if-exists" - echo "Wait for topic deletion" - sleep 5s - #echo "Finished waiting, print topics:" - #kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' - # Sometimes a second deletion seems to be required -done -echo "Finish topic deletion, print topics:" -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' - -# delete zookeeper nodes used for workload generation -echo "Delete ZooKeeper configurations used for workload generation" -kubectl exec zookeeper-client -- bash -c "zookeeper-shell my-confluent-cp-zookeeper:2181 deleteall /workload-generation" -echo "Waiting for deletion" -while kubectl exec zookeeper-client -- bash -c "zookeeper-shell my-confluent-cp-zookeeper:2181 get /workload-generation" -do - echo "Wait for ZooKeeper state deletion." - sleep 5s -done -echo "Deletion finished" - -echo "Exiting script" - -KAFKA_LAG_EXPORTER_POD=$(kubectl get pod -l app.kubernetes.io/name=kafka-lag-exporter -o jsonpath="{.items[0].metadata.name}") -kubectl delete pod $KAFKA_LAG_EXPORTER_POD diff --git a/execution/run_uc2.sh b/execution/run_uc2.sh deleted file mode 100755 index 4544d3609ed807141455378b92ce3536ea2f92f6..0000000000000000000000000000000000000000 --- a/execution/run_uc2.sh +++ /dev/null @@ -1,129 +0,0 @@ -#!/bin/bash - -EXP_ID=$1 -DIM_VALUE=$2 -INSTANCES=$3 -PARTITIONS=${4:-40} -CPU_LIMIT=${5:-1000m} -MEMORY_LIMIT=${6:-4Gi} -KAFKA_STREAMS_COMMIT_INTERVAL_MS=${7:-100} -EXECUTION_MINUTES=${8:-5} - -echo "EXP_ID: $EXP_ID" -echo "DIM_VALUE: $DIM_VALUE" -echo "INSTANCES: $INSTANCES" -echo "PARTITIONS: $PARTITIONS" -echo "CPU_LIMIT: $CPU_LIMIT" -echo "MEMORY_LIMIT: $MEMORY_LIMIT" -echo "KAFKA_STREAMS_COMMIT_INTERVAL_MS: $KAFKA_STREAMS_COMMIT_INTERVAL_MS" -echo "EXECUTION_MINUTES: $EXECUTION_MINUTES" - -# Create Topics -#PARTITIONS=40 -#kubectl run temp-kafka --rm --attach --restart=Never --image=solsson/kafka --command -- bash -c "./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic input --partitions $PARTITIONS --replication-factor 1; ./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic configuration --partitions 1 --replication-factor 1; ./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic output --partitions $PARTITIONS --replication-factor 1" -PARTITIONS=$PARTITIONS -kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic input --partitions $PARTITIONS --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic aggregation-feedback --partitions $PARTITIONS --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic configuration --partitions 1 --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic output --partitions $PARTITIONS --replication-factor 1" - -# Start workload generator -NUM_NESTED_GROUPS=$DIM_VALUE 
-WL_MAX_RECORDS=150000 -APPROX_NUM_SENSORS=$((4**NUM_NESTED_GROUPS)) -WL_INSTANCES=$(((APPROX_NUM_SENSORS + (WL_MAX_RECORDS -1 ))/ WL_MAX_RECORDS)) - -cat <<EOF >uc-workload-generator/overlay/uc2-workload-generator/set_paramters.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-load-generator -spec: - replicas: $WL_INSTANCES - template: - spec: - containers: - - name: workload-generator - env: - - name: NUM_SENSORS - value: "4" - - name: HIERARCHY - value: "full" - - name: NUM_NESTED_GROUPS - value: "$NUM_NESTED_GROUPS" - - name: INSTANCES - value: "$WL_INSTANCES" -EOF -kubectl apply -k uc-workload-generator/overlay/uc2-workload-generator - -# Start application -REPLICAS=$INSTANCES -cat <<EOF >uc-application/overlay/uc2-application/set_paramters.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-aggregation -spec: - replicas: $REPLICAS - template: - spec: - containers: - - name: uc-application - env: - - name: COMMIT_INTERVAL_MS - value: "$KAFKA_STREAMS_COMMIT_INTERVAL_MS" - resources: - limits: - memory: $MEMORY_LIMIT - cpu: $CPU_LIMIT -EOF -kubectl apply -k uc-application/overlay/uc2-application - -# Execute for certain time -sleep $(($EXECUTION_MINUTES * 60)) - -# Run eval script -source ../.venv/bin/activate -python lag_analysis.py $EXP_ID uc2 $DIM_VALUE $INSTANCES $EXECUTION_MINUTES -deactivate - -# Stop workload generator and app -kubectl delete -k uc-workload-generator/overlay/uc2-workload-generator -kubectl delete -k uc-application/overlay/uc2-application - - -# Delete topics instead of Kafka -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input,output,configuration,titan-.*'" -# kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic '.*' -#sleep 30s # TODO check -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n '/^titan-.*/p;/^input$/p;/^output$/p;/^configuration$/p' -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n '/^titan-.*/p;/^input$/p;/^output$/p;/^configuration$/p' | wc -l -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" - -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input,output,configuration,titan-.*'" -echo "Finished execution, print topics:" -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' -while test $(kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(theodolite-.*|input|aggregation-feedback|output|configuration)( - marked for deletion)?$/p' | wc -l) -gt 0 -do - kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input|aggregation-feedback|output|configuration|theodolite-.*' --if-exists" - echo "Wait for topic deletion" - sleep 5s - #echo "Finished waiting, print topics:" - #kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' - # Sometimes a second deletion seems to be required -done -echo "Finish topic deletion, print topics:" -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper 
my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' - -# delete zookeeper nodes used for workload generation -echo "Delete ZooKeeper configurations used for workload generation" -kubectl exec zookeeper-client -- bash -c "zookeeper-shell my-confluent-cp-zookeeper:2181 deleteall /workload-generation" -echo "Waiting for deletion" -while kubectl exec zookeeper-client -- bash -c "zookeeper-shell my-confluent-cp-zookeeper:2181 get /workload-generation" -do - echo "Wait for ZooKeeper state deletion." - sleep 5s -done -echo "Deletion finished" - -echo "Exiting script" - -KAFKA_LAG_EXPORTER_POD=$(kubectl get pod -l app.kubernetes.io/name=kafka-lag-exporter -o jsonpath="{.items[0].metadata.name}") -kubectl delete pod $KAFKA_LAG_EXPORTER_POD diff --git a/execution/run_uc3.sh b/execution/run_uc3.sh deleted file mode 100755 index 4f2323f937f19d01a73482dea6aeaf5e922a0a3f..0000000000000000000000000000000000000000 --- a/execution/run_uc3.sh +++ /dev/null @@ -1,125 +0,0 @@ -#!/bin/bash - -EXP_ID=$1 -DIM_VALUE=$2 -INSTANCES=$3 -PARTITIONS=${4:-40} -CPU_LIMIT=${5:-1000m} -MEMORY_LIMIT=${6:-4Gi} -KAFKA_STREAMS_COMMIT_INTERVAL_MS=${7:-100} -EXECUTION_MINUTES=${8:-5} - -echo "EXP_ID: $EXP_ID" -echo "DIM_VALUE: $DIM_VALUE" -echo "INSTANCES: $INSTANCES" -echo "PARTITIONS: $PARTITIONS" -echo "CPU_LIMIT: $CPU_LIMIT" -echo "MEMORY_LIMIT: $MEMORY_LIMIT" -echo "KAFKA_STREAMS_COMMIT_INTERVAL_MS: $KAFKA_STREAMS_COMMIT_INTERVAL_MS" -echo "EXECUTION_MINUTES: $EXECUTION_MINUTES" - -# Create Topics -#PARTITIONS=40 -#kubectl run temp-kafka --rm --attach --restart=Never --image=solsson/kafka --command -- bash -c "./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic input --partitions $PARTITIONS --replication-factor 1; ./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic configuration --partitions 1 --replication-factor 1; ./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic output --partitions $PARTITIONS --replication-factor 1" -PARTITIONS=$PARTITIONS -kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic input --partitions $PARTITIONS --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic configuration --partitions 1 --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic output --partitions $PARTITIONS --replication-factor 1" - -# Start workload generator -NUM_SENSORS=$DIM_VALUE -WL_MAX_RECORDS=150000 -WL_INSTANCES=$(((NUM_SENSORS + (WL_MAX_RECORDS -1 ))/ WL_MAX_RECORDS)) - -cat <<EOF >uc-workload-generator/overlay/uc3-workload-generator/set_paramters.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-load-generator -spec: - replicas: $WL_INSTANCES - template: - spec: - containers: - - name: workload-generator - env: - - name: NUM_SENSORS - value: "$NUM_SENSORS" - - name: INSTANCES - value: "$WL_INSTANCES" -EOF -kubectl apply -k uc-workload-generator/overlay/uc3-workload-generator - - -# Start application -REPLICAS=$INSTANCES -cat <<EOF >uc-application/overlay/uc3-application/set_paramters.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-aggregation -spec: - replicas: $REPLICAS - template: - spec: - containers: - - name: uc-application - env: - - name: COMMIT_INTERVAL_MS - value: "$KAFKA_STREAMS_COMMIT_INTERVAL_MS" - resources: - limits: - memory: $MEMORY_LIMIT - cpu: $CPU_LIMIT -EOF 
-kubectl apply -k uc-application/overlay/uc3-application -kubectl scale deployment uc3-titan-ccp-aggregation --replicas=$REPLICAS - -# Execute for certain time -sleep $(($EXECUTION_MINUTES * 60)) - -# Run eval script -source ../.venv/bin/activate -python lag_analysis.py $EXP_ID uc3 $DIM_VALUE $INSTANCES $EXECUTION_MINUTES -deactivate - -# Stop workload generator and app -kubectl delete -k uc-workload-generator/overlay/uc3-workload-generator -kubectl delete -k uc-application/overlay/uc3-application - -# Delete topics instead of Kafka -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input,output,configuration,titan-.*'" -# kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic '.*' -#sleep 30s # TODO check -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n '/^titan-.*/p;/^input$/p;/^output$/p;/^configuration$/p' -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n '/^titan-.*/p;/^input$/p;/^output$/p;/^configuration$/p' | wc -l -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" - -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input,output,configuration,titan-.*'" -echo "Finished execution, print topics:" -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' -while test $(kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(theodolite-.*|input|output|configuration)( - marked for deletion)?$/p' | wc -l) -gt 0 -do - kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input|output|configuration|theodolite-.*' --if-exists" - echo "Wait for topic deletion" - sleep 5s - #echo "Finished waiting, print topics:" - #kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' - # Sometimes a second deletion seems to be required -done -echo "Finish topic deletion, print topics:" -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' - -# delete zookeeper nodes used for workload generation -echo "Delete ZooKeeper configurations used for workload generation" -kubectl exec zookeeper-client -- bash -c "zookeeper-shell my-confluent-cp-zookeeper:2181 deleteall /workload-generation" -echo "Waiting for deletion" -while kubectl exec zookeeper-client -- bash -c "zookeeper-shell my-confluent-cp-zookeeper:2181 get /workload-generation" -do - echo "Wait for ZooKeeper state deletion." 
- sleep 5s -done -echo "Deletion finished" - -echo "Exiting script" - -KAFKA_LAG_EXPORTER_POD=$(kubectl get pod -l app.kubernetes.io/name=kafka-lag-exporter -o jsonpath="{.items[0].metadata.name}") -kubectl delete pod $KAFKA_LAG_EXPORTER_POD diff --git a/execution/run_uc4.sh b/execution/run_uc4.sh deleted file mode 100755 index 08a38498839ef3c50a39c1ccfbd26914993ffbd3..0000000000000000000000000000000000000000 --- a/execution/run_uc4.sh +++ /dev/null @@ -1,124 +0,0 @@ -#!/bin/bash - -EXP_ID=$1 -DIM_VALUE=$2 -INSTANCES=$3 -PARTITIONS=${4:-40} -CPU_LIMIT=${5:-1000m} -MEMORY_LIMIT=${6:-4Gi} -KAFKA_STREAMS_COMMIT_INTERVAL_MS=${7:-100} -EXECUTION_MINUTES=${8:-5} - -echo "EXP_ID: $EXP_ID" -echo "DIM_VALUE: $DIM_VALUE" -echo "INSTANCES: $INSTANCES" -echo "PARTITIONS: $PARTITIONS" -echo "CPU_LIMIT: $CPU_LIMIT" -echo "MEMORY_LIMIT: $MEMORY_LIMIT" -echo "KAFKA_STREAMS_COMMIT_INTERVAL_MS: $KAFKA_STREAMS_COMMIT_INTERVAL_MS" -echo "EXECUTION_MINUTES: $EXECUTION_MINUTES" - -# Create Topics -#PARTITIONS=40 -#kubectl run temp-kafka --rm --attach --restart=Never --image=solsson/kafka --command -- bash -c "./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic input --partitions $PARTITIONS --replication-factor 1; ./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic configuration --partitions 1 --replication-factor 1; ./bin/kafka-topics.sh --zookeeper my-confluent-cp-zookeeper:2181 --create --topic output --partitions $PARTITIONS --replication-factor 1" -PARTITIONS=$PARTITIONS -kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic input --partitions $PARTITIONS --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic configuration --partitions 1 --replication-factor 1; kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --create --topic output --partitions $PARTITIONS --replication-factor 1" - -# Start workload generator -NUM_SENSORS=$DIM_VALUE -WL_MAX_RECORDS=150000 -WL_INSTANCES=$(((NUM_SENSORS + (WL_MAX_RECORDS -1 ))/ WL_MAX_RECORDS)) - -cat <<EOF >uuc-workload-generator/overlay/uc4-workload-generator/set_paramters.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-load-generator -spec: - replicas: $WL_INSTANCES - template: - spec: - containers: - - name: workload-generator - env: - - name: NUM_SENSORS - value: "$NUM_SENSORS" - - name: INSTANCES - value: "$WL_INSTANCES" -EOF -kubectl apply -k uc-workload-generator/overlay/uc4-workload-generator - -# Start application -REPLICAS=$INSTANCES -cat <<EOF >uc-application/overlay/uc4-application/set_paramters.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-aggregation -spec: - replicas: $REPLICAS - template: - spec: - containers: - - name: uc-application - env: - - name: COMMIT_INTERVAL_MS - value: "$KAFKA_STREAMS_COMMIT_INTERVAL_MS" - resources: - limits: - memory: $MEMORY_LIMIT - cpu: $CPU_LIMIT -EOF -kubectl apply -k uc-application/overlay/uc4-application -kubectl scale deployment uc4-titan-ccp-aggregation --replicas=$REPLICAS - -# Execute for certain time -sleep $(($EXECUTION_MINUTES * 60)) - -# Run eval script -source ../.venv/bin/activate -python lag_analysis.py $EXP_ID uc4 $DIM_VALUE $INSTANCES $EXECUTION_MINUTES -deactivate - -# Stop workload generator and app -kubectl delete -k uc-workload-generator/overlay/uc4-workload-generator -kubectl delete -k uc-application/overlay/uc4-application - -# Delete topics instead of Kafka -#kubectl exec kafka-client -- 
bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input,output,configuration,titan-.*'" -# kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic '.*' -#sleep 30s # TODO check -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n '/^titan-.*/p;/^input$/p;/^output$/p;/^configuration$/p' -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n '/^titan-.*/p;/^input$/p;/^output$/p;/^configuration$/p' | wc -l -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" - -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input,output,configuration,titan-.*'" -echo "Finished execution, print topics:" -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' -while test $(kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(theodolite-.*|input|output|configuration)( - marked for deletion)?$/p' | wc -l) -gt 0 -do - kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --delete --topic 'input|output|configuration|theodolite-.*' --if-exists" - echo "Wait for topic deletion" - sleep 5s - #echo "Finished waiting, print topics:" - #kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' - # Sometimes a second deletion seems to be required -done -echo "Finish topic deletion, print topics:" -#kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-zookeeper:2181 --list" | sed -n -E '/^(titan-.*|input|output|configuration)( - marked for deletion)?$/p' - -# delete zookeeper nodes used for workload generation -echo "Delete ZooKeeper configurations used for workload generation" -kubectl exec zookeeper-client -- bash -c "zookeeper-shell my-confluent-cp-zookeeper:2181 deleteall /workload-generation" -echo "Waiting for deletion" -while kubectl exec zookeeper-client -- bash -c "zookeeper-shell my-confluent-cp-zookeeper:2181 get /workload-generation" -do - echo "Wait for ZooKeeper state deletion." 
- sleep 5s -done -echo "Deletion finished" - -echo "Exiting script" - -KAFKA_LAG_EXPORTER_POD=$(kubectl get pod -l app.kubernetes.io/name=kafka-lag-exporter -o jsonpath="{.items[0].metadata.name}") -kubectl delete pod $KAFKA_LAG_EXPORTER_POD diff --git a/execution/uc-application/base/aggregation-deployment.yaml b/execution/uc-application/aggregation-deployment.yaml similarity index 100% rename from execution/uc-application/base/aggregation-deployment.yaml rename to execution/uc-application/aggregation-deployment.yaml diff --git a/execution/uc-application/base/aggregation-service.yaml b/execution/uc-application/aggregation-service.yaml similarity index 100% rename from execution/uc-application/base/aggregation-service.yaml rename to execution/uc-application/aggregation-service.yaml diff --git a/execution/uc-application/base/jmx-configmap.yaml b/execution/uc-application/jmx-configmap.yaml similarity index 100% rename from execution/uc-application/base/jmx-configmap.yaml rename to execution/uc-application/jmx-configmap.yaml diff --git a/execution/uc-application/base/kustomization.yaml b/execution/uc-application/kustomization.yaml similarity index 100% rename from execution/uc-application/base/kustomization.yaml rename to execution/uc-application/kustomization.yaml diff --git a/execution/uc-application/overlay/uc1-application/kustomization.yaml b/execution/uc-application/overlay/uc1-application/kustomization.yaml deleted file mode 100644 index 0d3820fe392e1d2224d78a8dd2415c4dce37c6e6..0000000000000000000000000000000000000000 --- a/execution/uc-application/overlay/uc1-application/kustomization.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namePrefix: uc1- - -images: - - name: uc-app - newName: theodolite/theodolite-uc1-kstreams-app - newTag: latest - -bases: -- ../../base - -patchesStrategicMerge: -- set_paramters.yaml # Patch setting the resource parameters diff --git a/execution/uc-application/overlay/uc1-application/set_paramters.yaml b/execution/uc-application/overlay/uc1-application/set_paramters.yaml deleted file mode 100644 index cb85048128774ab421b89338d5b1ce23791acac8..0000000000000000000000000000000000000000 --- a/execution/uc-application/overlay/uc1-application/set_paramters.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-aggregation -spec: - replicas: 1 - template: - spec: - containers: - - name: uc-application - env: - - name: COMMIT_INTERVAL_MS - value: "100" - resources: - limits: - memory: 4Gi - cpu: 1000m diff --git a/execution/uc-application/overlay/uc2-application/kustomization.yaml b/execution/uc-application/overlay/uc2-application/kustomization.yaml deleted file mode 100644 index cd32cabf70fdfa666a5703c97bc4e4fad7800ba7..0000000000000000000000000000000000000000 --- a/execution/uc-application/overlay/uc2-application/kustomization.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namePrefix: uc2- - -images: - - name: uc-app - newName: theodolite/theodolite-uc2-kstreams-app - newTag: latest - -bases: -- ../../base - -patchesStrategicMerge: -- set_paramters.yaml # Patch setting the resource parameters diff --git a/execution/uc-application/overlay/uc2-application/set_paramters.yaml b/execution/uc-application/overlay/uc2-application/set_paramters.yaml deleted file mode 100644 index cb85048128774ab421b89338d5b1ce23791acac8..0000000000000000000000000000000000000000 --- 
a/execution/uc-application/overlay/uc2-application/set_paramters.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-aggregation -spec: - replicas: 1 - template: - spec: - containers: - - name: uc-application - env: - - name: COMMIT_INTERVAL_MS - value: "100" - resources: - limits: - memory: 4Gi - cpu: 1000m diff --git a/execution/uc-application/overlay/uc3-application/kustomization.yaml b/execution/uc-application/overlay/uc3-application/kustomization.yaml deleted file mode 100644 index 5722cbca8cc79247063921a55252435804edefe6..0000000000000000000000000000000000000000 --- a/execution/uc-application/overlay/uc3-application/kustomization.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namePrefix: uc3- - -images: - - name: uc-app - newName: theodolite/theodolite-uc3-kstreams-app - newTag: latest - -bases: -- ../../base - -patchesStrategicMerge: -- set_paramters.yaml # Patch setting the resource parameters diff --git a/execution/uc-application/overlay/uc3-application/set_paramters.yaml b/execution/uc-application/overlay/uc3-application/set_paramters.yaml deleted file mode 100644 index cb85048128774ab421b89338d5b1ce23791acac8..0000000000000000000000000000000000000000 --- a/execution/uc-application/overlay/uc3-application/set_paramters.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-aggregation -spec: - replicas: 1 - template: - spec: - containers: - - name: uc-application - env: - - name: COMMIT_INTERVAL_MS - value: "100" - resources: - limits: - memory: 4Gi - cpu: 1000m diff --git a/execution/uc-application/overlay/uc4-application/kustomization.yaml b/execution/uc-application/overlay/uc4-application/kustomization.yaml deleted file mode 100644 index b44a9bb643802735b740b74bdb47299fb413e5d3..0000000000000000000000000000000000000000 --- a/execution/uc-application/overlay/uc4-application/kustomization.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namePrefix: uc4- - -images: - - name: uc-app - newName: theodolite/theodolite-uc4-kstreams-app - newTag: latest - -bases: -- ../../base - -patchesStrategicMerge: -- set_paramters.yaml # Patch setting the resource parameters diff --git a/execution/uc-application/overlay/uc4-application/set_paramters.yaml b/execution/uc-application/overlay/uc4-application/set_paramters.yaml deleted file mode 100644 index cb85048128774ab421b89338d5b1ce23791acac8..0000000000000000000000000000000000000000 --- a/execution/uc-application/overlay/uc4-application/set_paramters.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-aggregation -spec: - replicas: 1 - template: - spec: - containers: - - name: uc-application - env: - - name: COMMIT_INTERVAL_MS - value: "100" - resources: - limits: - memory: 4Gi - cpu: 1000m diff --git a/execution/uc-application/base/service-monitor.yaml b/execution/uc-application/service-monitor.yaml similarity index 100% rename from execution/uc-application/base/service-monitor.yaml rename to execution/uc-application/service-monitor.yaml diff --git a/execution/uc-workload-generator/base/kustomization.yaml b/execution/uc-workload-generator/kustomization.yaml similarity index 100% rename from execution/uc-workload-generator/base/kustomization.yaml rename to execution/uc-workload-generator/kustomization.yaml diff --git 
a/execution/uc-workload-generator/overlay/uc1-workload-generator/kustomization.yaml b/execution/uc-workload-generator/overlay/uc1-workload-generator/kustomization.yaml deleted file mode 100644 index 553b769a3bacd3356d6b5af5ba2e865acdd47a7c..0000000000000000000000000000000000000000 --- a/execution/uc-workload-generator/overlay/uc1-workload-generator/kustomization.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namePrefix: uc1- - -images: - - name: workload-generator - newName: theodolite/theodolite-uc1-workload-generator - newTag: latest - -bases: -- ../../base - -patchesStrategicMerge: -- set_paramters.yaml # Patch setting the resource parameters diff --git a/execution/uc-workload-generator/overlay/uc1-workload-generator/set_paramters.yaml b/execution/uc-workload-generator/overlay/uc1-workload-generator/set_paramters.yaml deleted file mode 100644 index b275607c27723b1e7e5e7e2b5c02942731bed809..0000000000000000000000000000000000000000 --- a/execution/uc-workload-generator/overlay/uc1-workload-generator/set_paramters.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-load-generator -spec: - replicas: 1 - template: - spec: - containers: - - name: workload-generator - env: - - name: NUM_SENSORS - value: "25000" - - name: INSTANCES - value: "1" diff --git a/execution/uc-workload-generator/overlay/uc2-workload-generator/kustomization.yaml b/execution/uc-workload-generator/overlay/uc2-workload-generator/kustomization.yaml deleted file mode 100644 index ff68743355d55459f2df988e8dd42bf0b3b6ae64..0000000000000000000000000000000000000000 --- a/execution/uc-workload-generator/overlay/uc2-workload-generator/kustomization.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namePrefix: uc2- - -images: - - name: workload-generator - newName: theodolite/theodolite-uc2-workload-generator - newTag: latest - -bases: -- ../../base - -patchesStrategicMerge: -- set_paramters.yaml # Patch setting the resource parameters diff --git a/execution/uc-workload-generator/overlay/uc2-workload-generator/set_paramters.yaml b/execution/uc-workload-generator/overlay/uc2-workload-generator/set_paramters.yaml deleted file mode 100644 index 187cb4717195537288e58035dcdda5f34fc9ceed..0000000000000000000000000000000000000000 --- a/execution/uc-workload-generator/overlay/uc2-workload-generator/set_paramters.yaml +++ /dev/null @@ -1,19 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-load-generator -spec: - replicas: 1 - template: - spec: - containers: - - name: workload-generator - env: - - name: NUM_SENSORS - value: "4" - - name: HIERARCHY - value: "full" - - name: NUM_NESTED_GROUPS - value: "5" - - name: INSTANCES - value: "1" diff --git a/execution/uc-workload-generator/overlay/uc3-workload-generator/kustomization.yaml b/execution/uc-workload-generator/overlay/uc3-workload-generator/kustomization.yaml deleted file mode 100644 index a7022480fcfe401f3e4e4c3898c3d79930198d3e..0000000000000000000000000000000000000000 --- a/execution/uc-workload-generator/overlay/uc3-workload-generator/kustomization.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namePrefix: uc3- - -images: - - name: workload-generator - newName: theodolite/theodolite-uc3-workload-generator - newTag: latest - -bases: -- ../../base - -patchesStrategicMerge: -- set_paramters.yaml # Patch setting the resource parameters diff 
--git a/execution/uc-workload-generator/overlay/uc3-workload-generator/set_paramters.yaml b/execution/uc-workload-generator/overlay/uc3-workload-generator/set_paramters.yaml deleted file mode 100644 index b275607c27723b1e7e5e7e2b5c02942731bed809..0000000000000000000000000000000000000000 --- a/execution/uc-workload-generator/overlay/uc3-workload-generator/set_paramters.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-load-generator -spec: - replicas: 1 - template: - spec: - containers: - - name: workload-generator - env: - - name: NUM_SENSORS - value: "25000" - - name: INSTANCES - value: "1" diff --git a/execution/uc-workload-generator/overlay/uc4-workload-generator/kustomization.yaml b/execution/uc-workload-generator/overlay/uc4-workload-generator/kustomization.yaml deleted file mode 100644 index 5efb0eb25a26371cdddfcc7969a2d10131dbb448..0000000000000000000000000000000000000000 --- a/execution/uc-workload-generator/overlay/uc4-workload-generator/kustomization.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namePrefix: uc4- - -images: - - name: workload-generator - newName: theodolite/theodolite-uc4-workload-generator - newTag: latest - -bases: -- ../../base - -patchesStrategicMerge: -- set_paramters.yaml # Patch setting the resource parameters diff --git a/execution/uc-workload-generator/overlay/uc4-workload-generator/set_paramters.yaml b/execution/uc-workload-generator/overlay/uc4-workload-generator/set_paramters.yaml deleted file mode 100644 index b275607c27723b1e7e5e7e2b5c02942731bed809..0000000000000000000000000000000000000000 --- a/execution/uc-workload-generator/overlay/uc4-workload-generator/set_paramters.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: titan-ccp-load-generator -spec: - replicas: 1 - template: - spec: - containers: - - name: workload-generator - env: - - name: NUM_SENSORS - value: "25000" - - name: INSTANCES - value: "1" diff --git a/execution/uc-workload-generator/base/workloadGenerator.yaml b/execution/uc-workload-generator/workloadGenerator.yaml similarity index 100% rename from execution/uc-workload-generator/base/workloadGenerator.yaml rename to execution/uc-workload-generator/workloadGenerator.yaml
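The demand-metric.ipynb description touched by this patch explains that Theodolite's *demand* metric maps each tested load intensity to the minimum amount of resources (e.g., instances) that sufficed to process it. As a minimal sketch of that approximation, assuming a hypothetical `(load, instances) -> SLO met?` result layout rather than the notebook's actual CSV input:

```python
# Sketch only: approximate the demand metric from hypothetical benchmark results.
# The (load, instances) -> bool layout is assumed for illustration; the notebook
# itself reads Theodolite's exported measurement data instead.
results = {
    (50000, 1): False,   # 50k msgs/s with 1 instance: SLO not met
    (50000, 2): True,
    (100000, 2): False,
    (100000, 4): True,
}

demand = {}  # load intensity -> minimum sufficient resources
for (load, instances), slo_met in results.items():
    if slo_met:
        demand[load] = min(instances, demand.get(load, instances))

print(demand)  # {50000: 2, 100000: 4}
```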
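Likewise, the updated markdown cell in demand-metric-plot.ipynb asks for a results directory and a dictionary mapping a system description to its CSV file prefix, with narrow non-breaking spaces written as `u"1000\u202FmCPU"`. A hypothetical configuration cell along those lines (the variable names, directory, and prefixes are illustrative, not taken from the notebook):

```python
# Illustrative values only; adjust the directory and prefixes to your own executions.
directory = '../results'  # directory containing the demand CSV files

# system description -> CSV file prefix; U+202F is a narrow non-breaking space
# between number and unit in the legend labels of the resulting plot.
experiments = {
    u"Kafka Streams, 1000\u202FmCPU": 'exp1001_demand',
    u"Kafka Streams, 2000\u202FmCPU": 'exp1002_demand',
}
```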