Commit ad3058ee authored by Benedikt Wetzel

Update demand-metric notebooks in order to run with the new implementation

Co-authored-by: Björn Vonheiden <bjoern.vonheiden@hotmail.de>
parent 3f1b94c2
Merge request !190: Update demand-metric notebooks in order to run with the new implementation
%% Cell type:markdown id: tags:
# Theodolite Analysis - Plotting the Demand Metric
This notebook creates a plot showing scalability as a function that maps load intensities to the resources required for processing them. It can combine multiple such plots in one figure, for example, to compare multiple systems or configurations.
The notebook takes one CSV file per plot, mapping load intensities to the minimum required resources, as computed by the `demand-metric.ipynb` notebook.
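For reference, such a demand CSV file contains a `load` and a `resources` column (the values below are made up):
```
load,resources
50000,1
100000,2
200000,3
```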
%% Cell type:markdown id: tags:
First, we import some libraries that are required for creating the plots.
%% Cell type:code id: tags:
``` python
import os
import pandas as pd
from functools import reduce
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from matplotlib.ticker import MaxNLocator
```
%% Cell type:markdown id: tags:
We need to specify the directory where the demand CSV files can be found and a dictionary that maps a system description (e.g., its name) to the corresponding CSV file (prefix). To use Unicode narrow non-breaking spaces in a description, format it as `u"1000\u202FmCPU"`.
%% Cell type:code id: tags:
``` python
results_dir = '<path-to>/results'
plot_name = '<plot-name>'
experiments = {
'System XYZ': 'exp200',
}
```
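%% Cell type:markdown id: tags:
For example, to compare two resource configurations of the same system, the dictionary could look as follows (a hypothetical example; the descriptions use the narrow non-breaking space `\u202F` between value and unit, and the experiment IDs are placeholders):
%% Cell type:code id: tags:
``` python
# Hypothetical example: two configurations with Unicode narrow
# non-breaking spaces in their descriptions; experiment IDs are placeholders
experiments = {
    u"500\u202FmCPU": 'exp200',
    u"1000\u202FmCPU": 'exp201',
}
```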
%% Cell type:markdown id: tags:
Now, we combine all systems described in `experiments` into a single data frame.
%% Cell type:code id: tags:
``` python
dataframes = [pd.read_csv(os.path.join(results_dir, f'{v}_demand.csv')).set_index('load').rename(columns={'resources': k}) for k, v in experiments.items()]
df = reduce(lambda df1, df2: df1.join(df2, how='outer'), dataframes)
```
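%% Cell type:markdown id: tags:
To illustrate the outer join performed above, here is a minimal sketch with made-up values: load intensities that were tested for only one system yield `NaN` in the other system's column.
%% Cell type:code id: tags:
``` python
# Minimal sketch with made-up values: the outer join keeps all load
# intensities and fills gaps with NaN where a system was not tested
a = pd.DataFrame({'load': [50000, 100000], 'System A': [1, 2]}).set_index('load')
b = pd.DataFrame({'load': [50000, 200000], 'System B': [1, 4]}).set_index('load')
a.join(b, how='outer')
#         System A  System B
# load
# 50000        1.0       1.0
# 100000       2.0       NaN
# 200000       NaN       4.0
```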
%% Cell type:markdown id: tags:
We might want to display the mappings before we plot them.
%% Cell type:code id: tags:
``` python
df
```
%% Output
       System XYZ
load
50000            1
%% Cell type:markdown id: tags:
The following code creates a Matplotlib figure showing the scalability plots for all specified systems. You might want to adjust its styling according to your preferences. Make sure to also set a filename.
%% Cell type:code id: tags:
``` python
plt.style.use('ggplot')
plt.rcParams['pdf.fonttype'] = 42 # TrueType fonts
plt.rcParams['ps.fonttype'] = 42 # TrueType fonts
plt.rcParams['axes.facecolor'] = 'w'
plt.rcParams['axes.edgecolor'] = '555555'
#plt.rcParams['ytick.color'] = 'black'
plt.rcParams['grid.color'] = 'dddddd'
plt.rcParams['axes.spines.top'] = 'false'
plt.rcParams['axes.spines.right'] = 'false'
plt.rcParams['legend.frameon'] = 'true'
plt.rcParams['legend.framealpha'] = '1'
plt.rcParams['legend.edgecolor'] = '1'
plt.rcParams['legend.borderpad'] = '1'
@FuncFormatter
def load_formatter(x, pos):
    # Format load values in thousands, e.g., 50000 -> '50k'
    return f'{(x/1000):.0f}k'
markers = ['s', 'D', 'o', 'v', '^', '<', '>', 'p', 'X']
def splitSerToArr(ser):
    # Split a series into index and values (currently unused below);
    # Series.to_numpy() replaces the removed Series.as_matrix()
    return [ser.index, ser.to_numpy()]
plt.figure()
#plt.figure(figsize=(4.8, 3.6)) # For other plot sizes
#ax = df.plot(kind='line', marker='o')
for i, column in enumerate(df):
    plt.plot(df[column].dropna(), marker=markers[i], label=column)
plt.legend()
ax = plt.gca()
#ax = df.plot(kind='line',x='dim_value', legend=False, use_index=True)
ax.set_ylabel('number of instances')
ax.set_xlabel('messages/second')
ax.set_ylim(bottom=0)
#ax.set_xlim(left=0)
ax.yaxis.set_major_locator(MaxNLocator(integer=True))
ax.xaxis.set_major_formatter(load_formatter)  # already a FuncFormatter via the decorator
plt.savefig('temp.pdf', bbox_inches='tight')
plt.savefig(os.path.join(results_dir, f'{plot_name}.pdf'), bbox_inches='tight')
```
%% Output
%% Cell type:code id: tags:
``` python
```
......
%% Cell type:markdown id: tags:
# Theodolite Analysis - Demand Metric
This notebook applies Theodolite's *demand* metric to describe the scalability of a SUT based on Theodolite measurement data.
Theodolite's *demand* metric is a function that maps load intensities to the minimum amount of resources (e.g., instances) required to process this load. In this notebook, the *demand* metric function is approximated by a mapping of tested load intensities to their minimum required resources.
The final output of running this notebook is a CSV file providing this mapping. It can be used to create plots of a system's scalability using the `demand-metric-plot.ipynb` notebook.
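%% Cell type:markdown id: tags:
As a minimal illustration of the metric with made-up measurements: for each load intensity, the demand is the smallest number of resources whose lag trend slope stays below the SLO threshold.
%% Cell type:code id: tags:
``` python
# Minimal sketch with made-up values illustrating the demand metric
import pandas as pd

example_runs = pd.DataFrame({
    'load':        [50000, 50000, 100000, 100000],
    'resources':   [1, 2, 1, 2],
    'trend_slope': [800, 300, 5000, 1200],  # queued messages per second
})
example_threshold = 2000  # maximum tolerable lag trend slope
suitable = example_runs[example_runs['trend_slope'] < example_threshold]
suitable.groupby('load', as_index=False)['resources'].min()
#      load  resources
# 0   50000          1
# 1  100000          2
```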
%% Cell type:markdown id: tags:
In the following cell, we need to specify:
* `exp_id`: The experiment id that is to be analyzed.
* `warmup_sec`: The number of seconds to be ignored at the beginning of each experiment.
* `max_lag_trend_slope`: The maximum tolerable increase in queued messages per second.
* `measurement_dir`: The directory where the measurement data files are to be found.
* `results_dir`: The directory where the computed demand CSV files are to be stored.
%% Cell type:code id: tags:
``` python
exp_id = 200
warmup_sec = 60
max_lag_trend_slope = 2000
measurement_dir = '<path-to>/measurements'
results_dir = '<path-to>/results'
```
%% Cell type:markdown id: tags:
With the following call, we compute our demand mapping.
%% Cell type:code id: tags:
``` python
from src.demand import demand
demand = demand(exp_id, measurement_dir, max_lag_trend_slope, warmup_sec)
```
%% Cell type:markdown id: tags:
We might already want to plot a simple visualization here:
%% Cell type:code id: tags:
``` python
demand.plot(kind='line',x='load',y='resources')
```
%% Output
<matplotlib.axes._subplots.AxesSubplot at 0x7f58e20d8c10>
%% Cell type:markdown id: tags:
Finally, we store the results in a CSV file.
%% Cell type:code id: tags:
``` python
import os
demand.to_csv(os.path.join(results_dir, f'exp{exp_id}_demand.csv'), index=False)
```
......
import os
from datetime import datetime, timedelta, timezone
import pandas as pd
from pandas.core.frame import DataFrame
from sklearn.linear_model import LinearRegression
def demand(exp_id, directory, threshold, warmup_sec):
    raw_runs = []

    # Compute SL, i.e., lag trend, for each tested configuration
    filenames = [filename for filename in os.listdir(directory) if filename.startswith(f"exp{exp_id}") and "lag-trend" in filename and filename.endswith(".csv")]
    for filename in filenames:
        run_params = filename[:-4].split("_")
        dim_value = run_params[1]
        instances = run_params[2]

        df = pd.read_csv(os.path.join(directory, filename))
        input = df

        # Compute the seconds since the start of the run and drop the warm-up period
        input['sec_start'] = input.loc[0:, 'timestamp'] - input.iloc[0]['timestamp']
        regress = input.loc[input['sec_start'] >= warmup_sec]

        # Fit a linear regression to the lag values to obtain the lag trend slope
        X = regress.iloc[:, 2].values.reshape(-1, 1)  # reshape into a single-column matrix
        Y = regress.iloc[:, 3].values.reshape(-1, 1)  # -1 lets NumPy infer the number of rows
        linear_regressor = LinearRegression()
        linear_regressor.fit(X, Y)
        Y_pred = linear_regressor.predict(X)
@@ -42,18 +38,19 @@ def demand(exp_id, directory, threshold, warmup_sec):
    runs = pd.DataFrame(raw_runs)

    # Sort results table (unsure if required)
    runs.columns = runs.columns.str.strip()
    runs = runs.sort_values(by=["load", "resources"])

    # Group by load and resources to handle repetitions and take the median;
    # for an even number of repetitions, the average of the two middle values is used
    medians = runs.groupby(by=['load', 'resources'], as_index=False).median()

    # Set suitable = True if SLOs are met, i.e., the lag trend slope is below the threshold
    medians["suitable"] = medians.apply(lambda row: row['trend_slope'] < threshold, axis=1)
    suitable = medians[medians.apply(lambda x: x['suitable'], axis=1)]

    # Compute the minimal demand per load intensity
    demand_per_load = suitable.groupby(by=['load'], as_index=False)['resources'].min()

    return demand_per_load
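
# Usage sketch (hypothetical, assuming a file naming scheme consistent with
# the parsing above, e.g. "exp200_100000_2_lag-trend.csv" for experiment 200,
# load 100000, and 2 instances):
#
#   demand_per_load = demand(exp_id=200, directory='<path-to>/measurements',
#                            threshold=2000, warmup_sec=60)
#
# This returns a DataFrame with one row per load intensity and the minimum
# number of resources that met the SLO.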