Commit 96b94d81 authored by Sören Henning

Merge branch '109-implement-kotlin-prototype' into 'theodolite-kotlin'

Resolve "Implement Quarkus/Kotlin protype"

See merge request !78
parents 51d80701 1d42d7f9
5 merge requests: !159 Re-implementation of Theodolite with Kotlin/Quarkus, !157 Update Graal Image in CI pipeline, !83 WIP: Re-implementation of Theodolite with Kotlin/Quarkus, !82 Add logger, !78 Resolve "Implement Quarkus/Kotlin protype"
Pipeline #1778 skipped
Showing changed files with 418 additions and 27 deletions
...@@ -9,7 +9,7 @@ benchmark execution results and plotting. The following notebooks are provided:
For legacy reasons, we also provide the following notebooks, which, however, are not documented:
* [scalability-graph.ipynb](scalability-graph.ipynb): Creates a scalability graph for a certain benchmark execution.
* [scalability-graph-plotter.ipynb](scalability-graph-plotter.ipynb): Combines the scalability graphs of multiple benchmark executions (e.g., for comparing different configurations).
* [lag-trend-graph.ipynb](lag-trend-graph.ipynb): Visualizes the consumer lag evaluation over time along with the computed trend.
## Usage
...
%% Cell type:markdown id: tags:
# Theodolite Analysis - Plotting the Demand Metric
This notebook creates a plot showing scalability as a function that maps load intensities to the resources required for processing them. It can combine multiple such plots in one figure, for example, to compare multiple systems or configurations.
The notebook takes a CSV file for each plot, mapping load intensities to minimum required resources, as computed by the `demand-metric.ipynb` notebook.
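Each such CSV file is expected to contain a `load` and a `resources` column, for example (illustrative values):

```
load,resources
10000,1
20000,2
40000,3
```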
%% Cell type:markdown id: tags:
First, we need to import some libraries, which are required for creating the plots.
%% Cell type:code id: tags:
``` python
import os
import pandas as pd
from functools import reduce
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from matplotlib.ticker import MaxNLocator
```
%% Cell type:markdown id: tags:
We need to specify the directory where the demand CSV files can be found, and a dictionary that maps a system description (e.g., its name) to the corresponding CSV file (prefix). To use Unicode narrow non-breaking spaces in the description, format it as `u"1000\u202FmCPU"`.
%% Cell type:code id: tags:
``` python
results_dir = '<path-to>/results'
experiments = {
    'System XYZ': 'exp200',
}
```
%% Cell type:markdown id: tags:
Now, we combine all systems described in `experiments`.
%% Cell type:code id: tags:
``` python
dataframes = [pd.read_csv(os.path.join(results_dir, f'{v}_demand.csv')).set_index('load').rename(columns={"resources": k}) for k, v in experiments.items()]
df = reduce(lambda df1, df2: df1.join(df2, how='outer'), dataframes)
```
%% Cell type:markdown id: tags:
We might want to display the mappings before we plot them.
%% Cell type:code id: tags:
``` python
df
```
%% Cell type:markdown id: tags:
The following code creates a Matplotlib figure showing the scalability plots for all specified systems. You might want to adjust its styling etc. according to your preferences. Make sure to also set a filename.
%% Cell type:code id: tags:
``` python
plt.style.use('ggplot')
plt.rcParams['axes.facecolor'] = 'w'
plt.rcParams['axes.edgecolor'] = '555555'
#plt.rcParams['ytick.color'] = 'black'
plt.rcParams['grid.color'] = 'dddddd'
plt.rcParams['axes.spines.top'] = 'false'
plt.rcParams['axes.spines.right'] = 'false'
plt.rcParams['legend.frameon'] = 'true'
plt.rcParams['legend.framealpha'] = '1'
plt.rcParams['legend.edgecolor'] = '1'
plt.rcParams['legend.borderpad'] = '1'

@FuncFormatter
def load_formatter(x, pos):
    # format load intensities on the x-axis as thousands, e.g., 50000 -> '50k'
    return f'{(x/1000):.0f}k'

markers = ['s', 'D', 'o', 'v', '^', '<', '>', 'p', 'X']

def splitSerToArr(ser):
    # Series.as_matrix() was removed in recent pandas versions; to_numpy() is the replacement
    return [ser.index, ser.to_numpy()]

plt.figure()
#plt.figure(figsize=(4.8, 3.6)) # For other plot sizes
#ax = df.plot(kind='line', marker='o')
for i, column in enumerate(df):
    plt.plot(df[column].dropna(), marker=markers[i], label=column)
plt.legend()
ax = plt.gca()
#ax = df.plot(kind='line', x='dim_value', legend=False, use_index=True)
ax.set_ylabel('number of instances')
ax.set_xlabel('messages/second')
ax.set_ylim(bottom=0)
#ax.set_xlim(left=0)
ax.yaxis.set_major_locator(MaxNLocator(integer=True))
ax.xaxis.set_major_formatter(load_formatter)  # load_formatter is already a FuncFormatter instance
plt.savefig('temp.pdf', bbox_inches='tight')
```
%% Cell type:code id: tags:
``` python
```
...
%% Cell type:markdown id: tags:
# Theodolite Analysis - Demand Metric
This notebook applies Theodolite's *demand* metric to describe the scalability of a SUT based on Theodolite measurement data.
Theodolite's *demand* metric is a function mapping load intensities to the minimum resources (e.g., instances) required to process this load. With this notebook, the *demand* metric function is approximated by a map of tested load intensities to their minimum required resources.
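Stated compactly: writing $\mathit{slo}(l, r)$ for the condition that the SUT processes load intensity $l$ with $r$ resources without violating its service-level objectives, the metric is

$$\mathit{demand}(l) = \min \{\, r \in \mathbb{N} \mid \mathit{slo}(l, r) \,\}.$$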
The final output when running this notebook will be a CSV file providing this mapping. It can be used to create nice plots of a system's scalability using the `demand-metric-plot.ipynb` notebook.
%% Cell type:markdown id: tags:
In the following cell, we need to specify:
* `exp_id`: The experiment id that is to be analyzed.
* `warmup_sec`: The number of seconds to be ignored at the beginning of each experiment.
* `max_lag_trend_slope`: The maximum tolerable increase in queued messages per second (see the formula after this list).
* `measurement_dir`: The directory where the measurement data files are to be found.
* `results_dir`: The directory where the computed demand CSV files are to be stored.
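The SLO behind `max_lag_trend_slope` is based on the trend of the total consumer lag over time within an experiment (after discarding the warm-up period). With lag observations $y_i$ at times $t_i$, the trend slope is obtained from a linear regression, and an experiment counts as successful if

$$\beta = \frac{\sum_i (t_i - \bar{t})(y_i - \bar{y})}{\sum_i (t_i - \bar{t})^2} \leq \texttt{max\_lag\_trend\_slope},$$

that is, if the number of queued messages grows by no more than `max_lag_trend_slope` messages per second. The actual computation is implemented in the `src.demand` module used below.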
%% Cell type:code id: tags:
``` python
exp_id = 200
warmup_sec = 60
max_lag_trend_slope = 2000
measurement_dir = '<path-to>/measurements'
results_dir = '<path-to>/results'
```
%% Cell type:markdown id: tags:
With the following call, we compute our demand mapping.
%% Cell type:code id: tags:
``` python
from src.demand import demand

# note: this rebinds the name `demand` from the imported function to the resulting data frame
demand = demand(exp_id, measurement_dir, max_lag_trend_slope, warmup_sec)
```
%% Cell type:markdown id: tags:
We might already want to plot a simple visualization here:
%% Cell type:code id: tags:
``` python
demand.plot(kind='line', x='load', y='resources')
```
%% Cell type:markdown id: tags:
Finally, we store the results in a CSV file.
%% Cell type:code id: tags:
``` python
import os

demand.to_csv(os.path.join(results_dir, f'exp{exp_id}_demand.csv'), index=False)
```
...
---
title: Theodolite
nav_order: 1
permalink: /
---
# Theodolite
> A theodolite is a precision optical instrument for measuring angles between designated visible points in the horizontal and vertical planes. -- <cite>[Wikipedia](https://en.wikipedia.org/wiki/Theodolite)</cite>
Theodolite is a framework for benchmarking the horizontal and vertical scalability of stream processing engines. It consists of three modules:
## Theodolite Benchmarks
Theodolite contains 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding workload generator is provided. Currently, this repository provides benchmark implementations for Kafka Streams.
## Theodolite Execution Framework
Theodolite aims to benchmark the scalability of stream processing engines for real use cases. Microservices that apply stream processing techniques are usually deployed in elastic cloud environments. Hence, Theodolite's cloud-native benchmarking framework deploys its components in a cloud environment, orchestrated by Kubernetes. More information on how to execute scalability benchmarks can be found in [Theodolite execution framework](execution).
## Theodolite Analysis Tools
Theodolite's benchmarking method creates a *scalability graph*, which allows drawing conclusions about the scalability of a stream processing engine or its deployment. A scalability graph shows how resource demand evolves with an increasing workload. Theodolite provides Jupyter notebooks for creating such scalability graphs based on benchmarking results from the execution framework. More information can be found in [Theodolite analysis tools](analysis).
title: "Theodolite"
remote_theme: pmarsceill/just-the-docs
#color_scheme: "dark"
aux_links:
"Theodolite on GitHub":
- "//github.com/cau-se/theodolite"
\ No newline at end of file
---
title: Release Process
has_children: false
nav_order: 2
---
# Release Process
We assume that we are creating the release `v0.1.1`. Please make sure to adjust
...
...@@ -48,14 +48,15 @@ cp-kafka:
  #   cpu: 100m
  #   memory: 128Mi
  configurationOverrides:
    # offsets.topic.replication.factor: "3"
    "message.max.bytes": "134217728" # 128 MB
    "replica.fetch.max.bytes": "134217728" # 128 MB
    # "default.replication.factor": 3
    # "min.insync.replicas": 2
    "auto.create.topics.enable": false
    "log.retention.ms": "10000" # 10s
    # "log.retention.ms": "86400000" # 24h
    # "group.initial.rebalance.delay.ms": "30000" # 30s
    "metrics.sample.window.ms": "5000" # 5s

## ------------------------------------------------------
...
## ------------------------------------------------------
## Zookeeper
## ------------------------------------------------------
cp-zookeeper:
  enabled: true
  servers: 1
  image: confluentinc/cp-zookeeper
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  persistence:
    enabled: false
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

## ------------------------------------------------------
## Kafka
## ------------------------------------------------------
cp-kafka:
  enabled: true
  brokers: 1
  image: confluentinc/cp-enterprise-kafka
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  persistence:
    enabled: false
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  configurationOverrides:
    "offsets.topic.replication.factor": "1"
    "message.max.bytes": "134217728" # 128 MB
    "replica.fetch.max.bytes": "134217728" # 128 MB
    # "default.replication.factor": 3
    # "min.insync.replicas": 2
    "auto.create.topics.enable": false
    "log.retention.ms": "10000" # 10s
    # "log.retention.ms": "86400000" # 24h
    "metrics.sample.window.ms": "5000" # 5s
  # access kafka from outside
  nodeport:
    enabled: true

## ------------------------------------------------------
## Schema Registry
## ------------------------------------------------------
cp-schema-registry:
  enabled: true
  image: confluentinc/cp-schema-registry
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

cp-kafka-rest:
  enabled: false

cp-kafka-connect:
  enabled: false

cp-ksql-server:
  enabled: false

cp-control-center:
  enabled: false
...@@ -20,9 +20,11 @@ dependencies {
    testImplementation 'io.quarkus:quarkus-junit5'
    testImplementation 'io.rest-assured:rest-assured'

    implementation 'org.slf4j:slf4j-simple:1.7.29'
    implementation 'io.github.microutils:kotlin-logging:1.12.0'
    implementation 'io.fabric8:kubernetes-client:5.0.0-alpha-2'
    compile group: 'org.apache.kafka', name: 'kafka-clients', version: '2.7.0'
    compile group: 'org.apache.zookeeper', name: 'zookeeper', version: '3.6.2'
}

group 'theodolite'
...@@ -47,9 +49,8 @@ compileKotlin {
compileTestKotlin {
    kotlinOptions.jvmTarget = JavaVersion.VERSION_11
}

detekt {
    failFast = true // fail build on any finding
    buildUponDefaultConfig = true
    ignoreFailures = true
}
\ No newline at end of file
package theodolite

import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.Produces
import javax.ws.rs.core.MediaType

@Path("/hello-resteasy")
class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    fun hello() = "Hello RESTEasy"
}
\ No newline at end of file
package theodolite

import io.quarkus.runtime.annotations.QuarkusMain
import theodolite.execution.TheodoliteExecutor
import mu.KotlinLogging

private val logger = KotlinLogging.logger {}

@QuarkusMain
object Main {

    @JvmStatic
    fun main(args: Array<String>) {
        val theodolite = TheodoliteExecutor()
        theodolite.run()
        logger.info("Application started")
        //Quarkus.run()
    }
}
package theodolite.evaluation
interface SLOChecker {
}
\ No newline at end of file
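The `SLOChecker` interface is still empty in this prototype (cf. the evaluation TODO in `BenchmarkExecutorImpl` below). A minimal sketch of how it might be filled in; the method name and signature are purely hypothetical, not part of this commit:

```kotlin
package theodolite.evaluation

import java.time.Instant

// Hypothetical sketch only: method name and signature are illustrative.
interface SLOChecker {
    /**
     * Evaluates whether the experiment that ran between [start] and [end]
     * meets its service-level objective, e.g., whether the consumer lag
     * trend stays below a configured maximum slope.
     */
    fun evaluate(start: Instant, end: Instant): Boolean
}
```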
package theodolite.execution

import mu.KotlinLogging
import theodolite.util.AbstractBenchmark
import theodolite.util.LoadDimension
import theodolite.util.Resource
import theodolite.util.Results
import java.time.Duration

private val logger = KotlinLogging.logger {}

/**
 * The benchmark executor runs a single experiment.
 *
 * @property benchmark the benchmark to execute
 * @property results the object in which experiment results are stored
 * @property executionDuration the duration of a single experiment
 * @constructor Create an empty benchmark executor
 */
abstract class BenchmarkExecutor(val benchmark: AbstractBenchmark, val results: Results, val executionDuration: Duration) {

    /**
     * Run an experiment for the given parametrization, evaluate the experiment, and save the result.
     *
     * @param load load to be tested.
     * @param res resources to be tested.
     * @return true if the number of resources is suitable for the given load, false otherwise.
     */
    abstract fun runExperiment(load: LoadDimension, res: Resource): Boolean

    /**
     * Wait while the benchmark is running and log the number of minutes executed every minute.
     */
    fun waitAndLog() {
        for (i in 1.rangeTo(executionDuration.toMinutes())) {
            Thread.sleep(Duration.ofMinutes(1).toMillis())
            logger.info { "Executed: $i minutes" }
        }
    }
}
package theodolite.execution

import theodolite.util.AbstractBenchmark
import theodolite.util.LoadDimension
import theodolite.util.Resource
import theodolite.util.Results
import java.time.Duration

class BenchmarkExecutorImpl(benchmark: AbstractBenchmark, results: Results, executionDuration: Duration) : BenchmarkExecutor(benchmark, results, executionDuration) {

    override fun runExperiment(load: LoadDimension, res: Resource): Boolean {
        benchmark.start(load, res)
        this.waitAndLog()
        benchmark.clearClusterEnvironment()
        // TODO: evaluate the experiment; for now, the result is always reported as unsuccessful
        val result = false // true if successful, false otherwise
        this.results.setResult(Pair(load, res), result)
        return result
    }
}
\ No newline at end of file
package theodolite.execution

import theodolite.util.AbstractBenchmark
import theodolite.util.LoadDimension
import theodolite.util.Resource
import theodolite.util.Results
import java.time.Duration

class TestBenchmarkExecutorImpl(private val mockResults: Array<Array<Boolean>>, benchmark: AbstractBenchmark, results: Results) :
    BenchmarkExecutor(benchmark, results, executionDuration = Duration.ofSeconds(1)) {

    override fun runExperiment(load: LoadDimension, res: Resource): Boolean {
        val result = this.mockResults[load.get()][res.get()]
        this.results.setResult(Pair(load, res), result)
        return result
    }
}
\ No newline at end of file
package theodolite.execution

import mu.KotlinLogging
import theodolite.k8s.UC1Benchmark
import theodolite.strategies.restriction.LowerBoundRestriction
import theodolite.strategies.searchstrategy.CompositeStrategy
import theodolite.strategies.searchstrategy.LinearSearch
import theodolite.util.*
import java.nio.file.Paths
import java.time.Duration

private val logger = KotlinLogging.logger {}

class TheodoliteExecutor {
    val projectDirAbsolutePath = Paths.get("").toAbsolutePath().toString()
    val resourcesPath = Paths.get(projectDirAbsolutePath, "./../../../resources/main/yaml/")

    private fun loadConfig(): Config {
        logger.info { resourcesPath }
        val benchmark: UC1Benchmark = UC1Benchmark(
            AbstractBenchmark.Config(
                clusterZookeeperConnectionString = "my-confluent-cp-zookeeper:2181",
                clusterKafkaConnectionString = "my-confluent-cp-kafka:9092",
                externalZookeeperConnectionString = "localhost:2181",
                externalKafkaConnectionString = "localhost:9092",
                schemaRegistryConnectionString = "http://my-confluent-cp-schema-registry:8081",
                kafkaPartition = 40,
                kafkaReplication = 1,
                kafkaTopics = listOf("input", "output"),
                // TODO: handle the path in a nicer way (not absolute)
                ucDeploymentPath = "$resourcesPath/aggregation-deployment.yaml",
                ucServicePath = "$resourcesPath/aggregation-service.yaml",
                wgDeploymentPath = "$resourcesPath/workloadGenerator.yaml",
                configMapPath = "$resourcesPath/jmx-configmap.yaml",
                ucImageURL = "ghcr.io/cau-se/theodolite-uc1-kstreams-app:latest",
                wgImageURL = "ghcr.io/cau-se/theodolite-uc1-workload-generator:theodolite-kotlin-latest"
            )
        )
        val results: Results = Results()
        val executionDuration = Duration.ofSeconds(60 * 5)
        val executor: BenchmarkExecutor = BenchmarkExecutorImpl(benchmark, results, executionDuration)
        val restrictionStrategy = LowerBoundRestriction(results)
        val searchStrategy = LinearSearch(executor)
        return Config(
            loads = listOf(5000, 10000).map { number -> LoadDimension(number) },
            resources = (1..6).map { number -> Resource(number) },
            compositeStrategy = CompositeStrategy(
                executor,
                searchStrategy,
                restrictionStrategies = setOf(restrictionStrategy)
            ),
            executionDuration = executionDuration
        )
    }

    fun run() {
        // read or get benchmark config
        val config = this.loadConfig()

        // execute benchmarks for each load
        for (load in config.loads) {
            config.compositeStrategy.findSuitableResource(load, config.resources)
        }
    }
}
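The strategy classes wired up above (`LinearSearch`, `LowerBoundRestriction`, `CompositeStrategy`) are not part of this excerpt. As a rough illustration of the search logic, a linear search over the resource dimension could look like the following sketch, assuming only the `runExperiment` contract shown above; the committed implementation may differ:

```kotlin
package theodolite.strategies.searchstrategy

import theodolite.execution.BenchmarkExecutor
import theodolite.util.LoadDimension
import theodolite.util.Resource

// Illustrative sketch, not the committed implementation: test resource amounts
// in ascending order and return the first one that can handle the given load.
class LinearSearch(private val benchmarkExecutor: BenchmarkExecutor) {
    fun findSuitableResource(load: LoadDimension, resources: List<Resource>): Resource? {
        for (res in resources) {
            // runExperiment returns true if `res` resources suffice for `load`
            if (benchmarkExecutor.runExperiment(load, res)) return res
        }
        return null // no tested resource amount could handle the load
    }
}
```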
package theodolite.k8s

import io.fabric8.kubernetes.api.model.ConfigMap
import io.fabric8.kubernetes.client.NamespacedKubernetesClient

class ConfigMapManager(private val client: NamespacedKubernetesClient) {

    fun deploy(configMap: ConfigMap) {
        this.client.configMaps().createOrReplace(configMap)
    }

    fun delete(configMap: ConfigMap) {
        this.client.configMaps().delete(configMap)
    }
}
package theodolite.k8s

import io.fabric8.kubernetes.api.model.Container
import io.fabric8.kubernetes.api.model.EnvVar
import io.fabric8.kubernetes.api.model.EnvVarSource
import io.fabric8.kubernetes.api.model.Quantity
import io.fabric8.kubernetes.api.model.apps.Deployment
import io.fabric8.kubernetes.client.NamespacedKubernetesClient
import mu.KotlinLogging

private val logger = KotlinLogging.logger {}

class DeploymentManager(private val client: NamespacedKubernetesClient) {

    /**
     * Sets the container environment variables; creates new ones if a variable does not exist.
     * @param container the container
     * @param map map of environment variable names to values
     */
    private fun setContainerEnv(container: Container, map: Map<String, String>) {
        map.forEach { k, v ->
            // filter for matching name and set value
            val x = container.env.filter { envVar -> envVar.name == k }
            if (x.isEmpty()) {
                val newVar = EnvVar(k, v, EnvVarSource())
                container.env.add(newVar)
            } else {
                x.forEach {
                    it.value = v
                }
            }
        }
    }

    /**
     * Sets the environment variables for a container.
     */
    fun setWorkloadEnv(workloadDeployment: Deployment, containerName: String, map: Map<String, String>) {
        workloadDeployment.spec.template.spec.containers.filter { it.name == containerName }
            .forEach { it: Container ->
                setContainerEnv(it, map)
            }
    }

    /**
     * Changes the resource limit of a container (usually the SUT).
     */
    fun changeRessourceLimits(deployment: Deployment, ressource: String, containerName: String, limit: String) {
        deployment.spec.template.spec.containers.filter { it.name == containerName }.forEach {
            it.resources.limits.replace(ressource, Quantity(limit))
        }
    }

    /**
     * Changes the image name of a container (SUT and the workload generators).
     */
    fun setImageName(deployment: Deployment, containerName: String, image: String) {
        deployment.spec.template.spec.containers.filter { it.name == containerName }.forEach {
            it.image = image
        }
    }

    /**
     * Changes the number of replicas of a deployment.
     */
    fun setReplica(deployment: Deployment, replicas: Int) {
        deployment.spec.setReplicas(replicas)
    }

    // TODO: potentially add exception handling
    fun deploy(deployment: Deployment) {
        this.client.apps().deployments().createOrReplace(deployment)
    }

    // TODO: potentially add exception handling
    fun delete(deployment: Deployment) {
        this.client.apps().deployments().delete(deployment)
    }
}
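A brief usage sketch for `DeploymentManager`; the client setup, YAML path, container name, and image are illustrative assumptions, not taken from this commit:

```kotlin
import io.fabric8.kubernetes.api.model.apps.Deployment
import io.fabric8.kubernetes.client.DefaultKubernetesClient
import theodolite.k8s.DeploymentManager

fun main() {
    val client = DefaultKubernetesClient().inNamespace("default")
    val manager = DeploymentManager(client)

    // Load a deployment definition from a YAML file (path is illustrative).
    val deployment: Deployment =
        client.apps().deployments().load("aggregation-deployment.yaml").get()

    // Point the SUT container at the desired image and scale it before deploying.
    manager.setImageName(deployment, "uc-application", "ghcr.io/cau-se/theodolite-uc1-kstreams-app:latest")
    manager.setReplica(deployment, 3)
    manager.deploy(deployment)
}
```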
package theodolite.k8s

import io.fabric8.kubernetes.api.model.Service
import io.fabric8.kubernetes.client.NamespacedKubernetesClient

class ServiceManager(private val client: NamespacedKubernetesClient) {

    fun changeServiceName(service: Service, newName: String) {
        service.metadata.apply {
            name = newName
        }
    }

    fun deploy(service: Service) {
        client.services().createOrReplace(service)
    }

    fun delete(service: Service) {
        client.services().delete(service)
    }
}