Commit 51d80701 authored by Sören Henning

Merge branch 'master' into theodolite-kotlin

parents e9357c10 605510cb
@@ -3,18 +3,24 @@

This directory contains Jupyter notebooks for analyzing and visualizing
benchmark execution results and plotting. The following notebooks are provided:

* [demand-metric.ipynb](demand-metric.ipynb): Creates CSV files describing scalability according to the Theodolite `demand` metric.
* [demand-metric-plot.ipynb](demand-metric-plot.ipynb): Creates plots based on such CSV files of the `demand` metric.

For legacy reasons, we also provide the following notebooks, which, however, are not documented:

* [scalability-graph.ipynb](scalability-graph.ipynb): Creates a scalability graph for a certain benchmark execution.
* [scalability-graph-final.ipynb](scalability-graph-final.ipynb): Combines the scalability graphs of multiple benchmark executions (e.g., for comparing different configurations).
* [lag-trend-graph.ipynb](lag-trend-graph.ipynb): Visualizes the consumer lag evolution over time along with the computed trend.

## Usage

In general, the Theodolite Analysis Jupyter notebooks should be runnable by any Jupyter server. To make it a bit easier,
we provide instructions for running the notebooks with Docker and with Visual Studio Code. These instructions may also be
a good starting point for using another service.

For analyzing and visualizing benchmark results, either Docker or a Jupyter installation with Python 3.7 or 3.8 is
required (e.g., in a virtual environment). **Please note that Python 3.9 does not seem to work yet, as not all of our
dependencies have been ported to Python 3.9.**

### Running with Docker

...
    ......
    %% Cell type:markdown id: tags:
    # Theodolite Analysis - Plotting the Demand Metric
This notebook creates a plot showing scalability as a function that maps load intensities to the resources required for processing them. It is able to combine multiple such plots in one figure, for example, to compare multiple systems or configurations.
The notebook takes a CSV file for each plot, mapping load intensities to minimum required resources, as computed by the `demand-metric.ipynb` notebook.
    %% Cell type:markdown id: tags:
    First, we need to import some libraries, which are required for creating the plots.
    %% Cell type:code id: tags:
    ``` python
    import os
    import pandas as pd
    from functools import reduce
    import matplotlib.pyplot as plt
    from matplotlib.ticker import FuncFormatter
    from matplotlib.ticker import MaxNLocator
    ```
    %% Cell type:markdown id: tags:
We need to specify the directory where the demand CSV files can be found, and a dictionary that maps a system description (e.g., its name) to the corresponding CSV file (prefix).
    %% Cell type:code id: tags:
    ``` python
    results_dir = '<path-to>/results'
    experiments = {
    'System XYZ': 'exp200',
    }
    ```
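%% Cell type:markdown id: tags:
Optionally, we can take a quick look at one of these files to check that it provides the expected `load` and `resources` columns. The following cell is only a sketch; `exp200` is the placeholder prefix defined above and must be adapted to your setup.
%% Cell type:code id: tags:
``` python
# Optional sanity check on one of the demand CSV files (placeholder prefix from above).
pd.read_csv(os.path.join(results_dir, 'exp200_demand.csv')).head()
```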
    %% Cell type:markdown id: tags:
Now, we combine all systems described in `experiments`.
    %% Cell type:code id: tags:
    ``` python
# Read each system's demand CSV, index it by load intensity, and name the column after the system.
dataframes = [pd.read_csv(os.path.join(results_dir, f'{v}_demand.csv')).set_index('load').rename(columns={"resources": k}) for k, v in experiments.items()]
# Outer-join all data frames: one row per load intensity, one column per system.
df = reduce(lambda df1, df2: df1.join(df2, how='outer'), dataframes)
    ```
    %% Cell type:markdown id: tags:
We might want to display the mappings before we plot them.
    %% Cell type:code id: tags:
    ``` python
    df
    ```
    %% Cell type:markdown id: tags:
The following code creates a Matplotlib figure showing the scalability plots for all specified systems. You might want to adjust its styling according to your preferences. Make sure to also set a filename for the exported figure.
    %% Cell type:code id: tags:
    ``` python
plt.style.use('ggplot')
plt.rcParams['axes.facecolor'] = 'w'
plt.rcParams['axes.edgecolor'] = '555555'
#plt.rcParams['ytick.color'] = 'black'
plt.rcParams['grid.color'] = 'dddddd'
plt.rcParams['axes.spines.top'] = 'false'
plt.rcParams['axes.spines.right'] = 'false'
plt.rcParams['legend.frameon'] = 'true'
plt.rcParams['legend.framealpha'] = '1'
plt.rcParams['legend.edgecolor'] = '1'
plt.rcParams['legend.borderpad'] = '1'

# Format load intensities on the x-axis in thousands, e.g., 50000 -> '50k'.
@FuncFormatter
def load_formatter(x, pos):
    return f'{(x/1000):.0f}k'

markers = ['s', 'D', 'o', 'v', '^', '<', '>', 'p', 'X']

# Helper (currently unused) that splits a series into its index and value arrays.
def splitSerToArr(ser):
    return [ser.index, ser.values]

plt.figure()
#plt.figure(figsize=(4.8, 3.6)) # For other plot sizes
#ax = df.plot(kind='line', marker='o')

# Plot one line per system, skipping load intensities without a measured demand.
for i, column in enumerate(df):
    plt.plot(df[column].dropna(), marker=markers[i], label=column)

plt.legend()
ax = plt.gca()
#ax = df.plot(kind='line', x='dim_value', legend=False, use_index=True)
ax.set_ylabel('number of instances')
ax.set_xlabel('messages/second')
ax.set_ylim(ymin=0)
#ax.set_xlim(xmin=0)
ax.yaxis.set_major_locator(MaxNLocator(integer=True))
ax.xaxis.set_major_formatter(load_formatter)

plt.savefig('temp.pdf', bbox_inches='tight')
    ```
    %% Cell type:code id: tags:
    ``` python
    ```
...
    %% Cell type:markdown id: tags:
    # Theodolite Analysis - Demand Metric
This notebook applies Theodolite's *demand* metric to describe the scalability of a SUT based on Theodolite measurement data.
Theodolite's *demand* metric is a function mapping load intensities to the minimum resources (e.g., instances) required to process this load. With this notebook, the *demand* metric function is approximated by a map of tested load intensities to their minimum required resources.
The final output when running this notebook will be a CSV file providing this mapping. It can be used to create nice plots of a system's scalability using the `demand-metric-plot.ipynb` notebook.
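To make this more concrete, the metric can be sketched as the function $\mathit{demand}(l) = \min\{r \in \mathbb{N} \mid \text{the SUT provisioned with } r \text{ resources processes load intensity } l \text{ without violating its SLOs}\}$; this notebook approximates that function by evaluating it only for the tested load intensities.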
    %% Cell type:markdown id: tags:
In the following cell, we need to specify:
* `exp_id`: The experiment id that is to be analyzed.
* `warmup_sec`: The number of seconds to be ignored at the beginning of each experiment.
    * `max_lag_trend_slope`: The maximum tolerable increase in queued messages per second.
    * `measurement_dir`: The directory where the measurement data files are to be found.
    * `results_dir`: The directory where the computed demand CSV files are to be stored.
    %% Cell type:code id: tags:
    ``` python
    exp_id = 200
    warmup_sec = 60
    max_lag_trend_slope = 2000
    measurement_dir = '<path-to>/measurements'
    results_dir = '<path-to>/results'
    ```
    %% Cell type:markdown id: tags:
    With the following call, we compute our demand mapping.
    %% Cell type:code id: tags:
    ``` python
    from src.demand import demand
# Note that this rebinds the name `demand` from the imported function to the resulting data frame.
demand = demand(exp_id, measurement_dir, max_lag_trend_slope, warmup_sec)
    ```
    %% Cell type:markdown id: tags:
    We might already want to plot a simple visualization here:
    %% Cell type:code id: tags:
    ``` python
    demand.plot(kind='line',x='load',y='resources')
    ```
    %% Cell type:markdown id: tags:
Finally, we store the results in a CSV file.
    %% Cell type:code id: tags:
    ``` python
    import os
    demand.to_csv(os.path.join(results_dir, f'exp{exp_id}_demand.csv'), index=False)
    ```
...
%% Cell type:code id: tags:
```
print("hello")
```
%% Cell type:code id: tags:
```
import os
from datetime import datetime, timedelta, timezone
import pandas as pd
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
```
%% Cell type:code id: tags:
```
os.getcwd()
```
%% Cell type:code id: tags:
```
exp_id = 2012
warmup_sec = 60
warmup_partitions_sec = 120
threshold = 2000 #slope
#directory = '../results'
directory = '<path-to>/results'
directory_out = '<path-to>/results-inst'
```
%% Cell type:code id: tags:outputPrepend,outputPrepend
```
#exp_id = 35
#os.chdir("./results-final")

raw_runs = []
filenames = [filename for filename in os.listdir(directory) if filename.startswith(f"exp{exp_id}") and filename.endswith("totallag.csv")]
for filename in filenames:
    #print(filename)
    run_params = filename[:-4].split("_")
    dim_value = run_params[2]
    instances = run_params[3]

    df = pd.read_csv(os.path.join(directory, filename))
    #input = df.loc[df['topic'] == "input"]
    input = df
    #print(input)
    input['sec_start'] = input.loc[0:, 'timestamp'] - input.iloc[0]['timestamp']
    #print(input)
    #print(input.iloc[0, 'timestamp'])
    regress = input.loc[input['sec_start'] >= warmup_sec] # Warm-Up
    #regress = input

    #input.plot(kind='line',x='timestamp',y='value',color='red')
    #plt.show()

    X = regress.iloc[:, 2].values.reshape(-1, 1)  # values converts it into a numpy array
    Y = regress.iloc[:, 3].values.reshape(-1, 1)  # -1 means that calculate the dimension of rows, but have 1 column
    linear_regressor = LinearRegression()  # create object for the class
    linear_regressor.fit(X, Y)  # perform linear regression
    Y_pred = linear_regressor.predict(X)  # make predictions

    trend_slope = linear_regressor.coef_[0][0]
    #print(linear_regressor.coef_)

    row = {'dim_value': int(dim_value), 'instances': int(instances), 'trend_slope': trend_slope}
    #print(row)

    raw_runs.append(row)

lags = pd.DataFrame(raw_runs)
```
%% Cell type:code id: tags:
```
lags.head()
```
%% Cell type:code id: tags:
```
raw_partitions = []
filenames = [filename for filename in os.listdir(directory) if filename.startswith(f"exp{exp_id}") and filename.endswith("partitions.csv")]
for filename in filenames:
    #print(filename)
    run_params = filename[:-4].split("_")
    dim_value = run_params[2]
    instances = run_params[3]

    df = pd.read_csv(os.path.join(directory, filename))
    #input = df.loc[df['topic'] == "input"]
    input = df
    #print(input)
    input['sec_start'] = input.loc[0:, 'timestamp'] - input.iloc[0]['timestamp']
    #print(input)
    #print(input.iloc[0, 'timestamp'])
    input = input.loc[input['sec_start'] >= warmup_sec] # Warm-Up
    #regress = input

    input = input.loc[input['topic'] >= 'input']
    mean = input['value'].mean()

    #input.plot(kind='line',x='timestamp',y='value',color='red')
    #plt.show()

    row = {'dim_value': int(dim_value), 'instances': int(instances), 'partitions': mean}
    #print(row)

    raw_partitions.append(row)

partitions = pd.DataFrame(raw_partitions)

#runs = lags.join(partitions.set_index(['dim_value', 'instances']), on=['dim_value', 'instances'])
```
%% Cell type:code id: tags:
```
raw_obs_instances = []
filenames = [filename for filename in os.listdir(directory) if filename.startswith(f"exp{exp_id}") and filename.endswith("instances.csv")]
for filename in filenames:
    run_params = filename[:-4].split("_")
    dim_value = run_params[2]
    instances = run_params[3]

    df = pd.read_csv(os.path.join(directory, filename))

    if df.empty:
        continue

    #input = df.loc[df['topic'] == "input"]
    input = df
    #print(input)
    input['sec_start'] = input.loc[0:, 'timestamp'] - input.iloc[0]['timestamp']
    #print(input)
    #print(input.iloc[0, 'timestamp'])
    input = input.loc[input['sec_start'] >= warmup_sec] # Warm-Up
    #regress = input

    #input = input.loc[input['topic'] >= 'input']
    #mean = input['value'].mean()

    #input.plot(kind='line',x='timestamp',y='value',color='red')
    #plt.show()

    #row = {'dim_value': int(dim_value), 'instances': int(instances), 'obs_instances': mean}
    #print(row)

    raw_obs_instances.append(row)

obs_instances = pd.DataFrame(raw_obs_instances)
obs_instances.head()
```
%% Cell type:code id: tags:
```
runs = lags
#runs = lags.join(partitions.set_index(['dim_value', 'instances']), on=['dim_value', 'instances'])#.join(obs_instances.set_index(['dim_value', 'instances']), on=['dim_value', 'instances'])

#runs["failed"] = runs.apply(lambda row: (abs(row['instances'] - row['obs_instances']) / row['instances']) > 0.1, axis=1)
#runs.loc[runs['failed']==True]
```
%% Cell type:code id: tags:
```
#threshold = 1000

# Set to true if the trend line has a slope less than
runs["suitable"] = runs.apply(lambda row: row['trend_slope'] < threshold, axis=1)

runs.columns = runs.columns.str.strip()
runs.sort_values(by=["dim_value", "instances"])
```
%% Cell type:code id: tags:
```
filtered = runs[runs.apply(lambda x: x['suitable'], axis=1)]

grouped = filtered.groupby(['dim_value'])['instances'].min()
min_suitable_instances = grouped.to_frame().reset_index()

min_suitable_instances
```
%% Cell type:code id: tags:
```
min_suitable_instances.to_csv(os.path.join(directory_out, f'exp{exp_id}_min-suitable-instances.csv'), index=False)
```
%% Cell type:code id: tags:
```
min_suitable_instances.plot(kind='line',x='dim_value',y='instances')
# min_suitable_instances.plot(kind='line',x='dim_value',y='instances', logy=True)
plt.show()
```
%% Cell type:code id: tags:
```
```
...
    ......
import os
from datetime import datetime, timedelta, timezone
import pandas as pd
from sklearn.linear_model import LinearRegression


def demand(exp_id, directory, threshold, warmup_sec):
    raw_runs = []

    # Compute SL, i.e., lag trend, for each tested configuration
    filenames = [filename for filename in os.listdir(directory) if filename.startswith(f"exp{exp_id}") and filename.endswith("totallag.csv")]
    for filename in filenames:
        #print(filename)
        run_params = filename[:-4].split("_")
        dim_value = run_params[2]
        instances = run_params[3]

        df = pd.read_csv(os.path.join(directory, filename))
        #input = df.loc[df['topic'] == "input"]
        input = df
        #print(input)
        input['sec_start'] = input.loc[0:, 'timestamp'] - input.iloc[0]['timestamp']
        #print(input)
        #print(input.iloc[0, 'timestamp'])
        regress = input.loc[input['sec_start'] >= warmup_sec] # Warm-Up
        #regress = input

        #input.plot(kind='line',x='timestamp',y='value',color='red')
        #plt.show()

        X = regress.iloc[:, 2].values.reshape(-1, 1)  # values converts the column into a numpy array
        Y = regress.iloc[:, 3].values.reshape(-1, 1)  # -1 lets the number of rows be inferred; keep one column
        linear_regressor = LinearRegression()  # create object for the class
        linear_regressor.fit(X, Y)  # perform linear regression
        Y_pred = linear_regressor.predict(X)  # make predictions

        trend_slope = linear_regressor.coef_[0][0]
        #print(linear_regressor.coef_)

        row = {'load': int(dim_value), 'resources': int(instances), 'trend_slope': trend_slope}
        #print(row)

        raw_runs.append(row)

    runs = pd.DataFrame(raw_runs)

    # Set suitable = True if SLOs are met, i.e., lag trend is below threshold
    runs["suitable"] = runs.apply(lambda row: row['trend_slope'] < threshold, axis=1)

    # Sort results table (unsure if required)
    runs.columns = runs.columns.str.strip()
    runs.sort_values(by=["load", "resources"])

    # Filter only suitable configurations
    filtered = runs[runs.apply(lambda x: x['suitable'], axis=1)]

    # Compute demand per load intensity
    grouped = filtered.groupby(['load'])['resources'].min()
    demand_per_load = grouped.to_frame().reset_index()

    return demand_per_load
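
# Usage sketch (not executed as part of this module; the values mirror the
# placeholders used in demand-metric.ipynb and must be adapted):
#
#   from src.demand import demand
#   demand_per_load = demand(200, '<path-to>/measurements', 2000, 60)
#   # -> pandas DataFrame with the columns 'load' and 'resources'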
@@ -153,11 +153,11 @@ declarations for different volume types.

Using a [hostPath volume](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) is the easiest option when
running Theodolite locally, e.g., with minikube or kind.

Just modify `infrastructure/kubernetes/volume-hostpath.yaml` by setting `path` to the directory on your host machine where
all benchmark results should be stored and run:

```sh
kubectl apply -f infrastructure/kubernetes/volume-hostpath.yaml
```

##### *local* volume

@@ -166,12 +166,12 @@ A [local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) is
access (e.g. via SSH) to one of your cluster nodes.

You first need to create a directory on a selected node where all benchmark results should be stored. Next, modify
`infrastructure/kubernetes/volume-local.yaml` by setting `<node-name>` to your selected node. (This node will most
likely also execute the [Theodolite job](#Execution).) Further, you have to set `path` to the directory on the node you just created. To deploy
your volume run:

```sh
kubectl apply -f infrastructure/kubernetes/volume-local.yaml
```

##### Other volumes

@@ -195,7 +195,7 @@ RBAC is enabled on your cluster (see installation of [Theodolite RBAC](#Theodoli

To start the execution of a benchmark run (with `<your-theodolite-yaml>` being your job definition):

```sh
kubectl create -f <your-theodolite-yaml>
```

This will create a pod with a name such as `your-job-name-xxxxxx`. You can verify this via `kubectl get pods`. With

...
    ......
image:
  pullPolicy: IfNotPresent
clusters:
  - name: "my-confluent-cp-kafka"
    bootstrapBrokers: "my-confluent-cp-kafka:9092"
...
    ......
@@ -55,6 +55,7 @@ cp-kafka:
# "min.insync.replicas": 2
"auto.create.topics.enable": false
"log.retention.ms": "10000" # 10s
#"log.retention.ms": "86400000" # 24h
"metrics.sample.window.ms": "5000" #5s
## ------------------------------------------------------
...
    ......
@@ -11,7 +11,7 @@ spec:
claimName: theodolite-pv-claim
containers:
- name: theodolite
  image: ghcr.io/cau-se/theodolite:latest
  # imagePullPolicy: Never # Used to pull "own" local image
  env:
    - name: UC # mandatory
...
    ......