diff --git a/analysis/demand-metric-plot.ipynb b/analysis/demand-metric-plot.ipynb
index 90ef227dbf6a4566760329b615d5f59b4cc2bc25..71e08f0590f819a63b1bdd6bf13b57ac665f65bc 100644
--- a/analysis/demand-metric-plot.ipynb
+++ b/analysis/demand-metric-plot.ipynb
@@ -1,22 +1,22 @@
 {
  "cells": [
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
     "# Theodolite Analysis - Plotting the Demand Metric\n",
     "\n",
    "This notebook creates a plot showing scalability as a function that maps load intensities to the resources required for processing them. It can combine multiple such plots in one figure, for example, to compare multiple systems or configurations.\n",
     "\n",
    "The notebook takes a CSV file for each plot, mapping load intensities to minimum required resources, as computed by the `demand-metric.ipynb` notebook."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
    "First, we need to import some libraries that are required for creating the plots."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -33,11 +33,11 @@
    ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
    "We need to specify the directory where the demand CSV files can be found, and a dictionary that maps a system description (e.g., its name) to the corresponding CSV file (prefix). To use Unicode narrow non-breaking spaces in the description, format it as `u\"1000\\u202FmCPU\"`."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -53,11 +53,11 @@
    ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
    "Now, we combine all systems described in `experiments`."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -71,11 +71,11 @@
    ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
    "We might want to display the mappings before we plot them."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -87,11 +87,11 @@
    ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
    "The following code creates a Matplotlib figure showing the scalability plots for all specified systems. You might want to adjust its styling according to your preferences. Make sure to also set a filename."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -149,27 +149,33 @@
   }
  ],
  "metadata": {
+  "file_extension": ".py",
+  "interpreter": {
+   "hash": "e9e076445e1891a25f59b525adcc71b09846b3f9cf034ce4147fc161b19af121"
+  },
+  "kernelspec": {
+   "display_name": "Python 3.8.10 64-bit ('.venv': venv)",
+   "name": "python3"
+  },
   "language_info": {
-   "name": "python",
    "codemirror_mode": {
     "name": "ipython",
     "version": 3
    },
-   "version": "3.8.5-final"
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.8.10"
   },
-  "orig_nbformat": 2,
-  "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "npconvert_exporter": "python",
+  "orig_nbformat": 2,
   "pygments_lexer": "ipython3",
-  "version": 3,
-  "kernelspec": {
-   "name": "python37064bitvenvvenv6c432ee1239d4f3cb23f871068b0267d",
-   "display_name": "Python 3.7.0 64-bit ('.venv': venv)",
-   "language": "python"
-  }
+  "version": 3
  },
  "nbformat": 4,
  "nbformat_minor": 2
-}
\ No newline at end of file
+}
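The plotting step the notebook above describes — one scalability line per system, load intensity on the x-axis and minimum required resources on the y-axis — can be sketched roughly as follows. The DataFrames, axis labels, and output filename are placeholder assumptions, not the notebook's actual data or styling:

```python
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt

# Hypothetical demand mappings, as produced by the demand-metric notebook
experiments = {
    'System A': pd.DataFrame({'load': [10000, 20000, 30000], 'resources': [1, 2, 4]}),
    'System B': pd.DataFrame({'load': [10000, 20000, 30000], 'resources': [2, 3, 5]}),
}

fig, ax = plt.subplots()
for name, demand in experiments.items():
    # One line per system: load intensity vs. minimum required resources
    ax.plot(demand['load'], demand['resources'], marker='o', label=name)
ax.set_xlabel('Load intensity (messages/second)')
ax.set_ylabel('Minimum required instances')
ax.legend()
fig.savefig('scalability.pdf')  # remember to set a filename
```

Plotting the demand CSVs of several systems into one axis is what makes the side-by-side comparison mentioned in the introduction possible.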
diff --git a/analysis/demand-metric.ipynb b/analysis/demand-metric.ipynb
index bcea129b7cb07465fa99f32b6f8b2b6115e8a0aa..fbf3ee02960a1e06457eef5dda96cb6d0a1a75ac 100644
--- a/analysis/demand-metric.ipynb
+++ b/analysis/demand-metric.ipynb
@@ -1,6 +1,8 @@
 {
  "cells": [
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
     "# Theodolite Analysis - Demand Metric\n",
     "\n",
@@ -9,11 +11,11 @@
    "Theodolite's *demand* metric is a function mapping load intensities to the minimum resources (e.g., instances) required to process this load. With this notebook, the *demand* metric function is approximated by a map of tested load intensities to their minimum required resources.\n",
     "\n",
    "The final output when running this notebook will be a CSV file, providing this mapping. It can be used to create nice plots of a system's scalability using the `demand-metric-plot.ipynb` notebook."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
    "In the following cell, we need to specify:\n",
     "\n",
@@ -22,9 +24,7 @@
     "* `max_lag_trend_slope`: The maximum tolerable increase in queued messages per second.\n",
     "* `measurement_dir`: The directory where the measurement data files are to be found.\n",
     "* `results_dir`: The directory where the computed demand CSV files are to be stored."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -40,11 +40,11 @@
    ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
     "With the following call, we compute our demand mapping."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -58,11 +58,11 @@
    ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
     "We might already want to plot a simple visualization here:"
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -74,11 +74,11 @@
    ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
    "Finally, we store the results in a CSV file."
-   ],
-   "cell_type": "markdown",
-   "metadata": {}
+   ]
   },
   {
    "cell_type": "code",
@@ -93,27 +93,33 @@
   }
  ],
  "metadata": {
+  "file_extension": ".py",
+  "interpreter": {
+   "hash": "e9e076445e1891a25f59b525adcc71b09846b3f9cf034ce4147fc161b19af121"
+  },
+  "kernelspec": {
+   "display_name": "Python 3.8.10 64-bit ('.venv': venv)",
+   "name": "python3"
+  },
   "language_info": {
-   "name": "python",
    "codemirror_mode": {
     "name": "ipython",
     "version": 3
    },
-   "version": "3.8.5-final"
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.8.10"
   },
-  "orig_nbformat": 2,
-  "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "npconvert_exporter": "python",
+  "orig_nbformat": 2,
   "pygments_lexer": "ipython3",
-  "version": 3,
-  "kernelspec": {
-   "name": "python37064bitvenvvenv6c432ee1239d4f3cb23f871068b0267d",
-   "display_name": "Python 3.7.0 64-bit ('.venv': venv)",
-   "language": "python"
-  }
+  "version": 3
  },
  "nbformat": 4,
  "nbformat_minor": 2
-}
\ No newline at end of file
+}
diff --git a/analysis/src/demand.py b/analysis/src/demand.py
index dfb20c05af8e9a134eedd2cdb584c961a82369f5..2178ab7c5dc5f7e4c04ebb58d4c14c9bf8b1aeff 100644
--- a/analysis/src/demand.py
+++ b/analysis/src/demand.py
@@ -1,59 +1,51 @@
 import os
 from datetime import datetime, timedelta, timezone
 import pandas as pd
+from pandas import DataFrame
 from sklearn.linear_model import LinearRegression
 
 def demand(exp_id, directory, threshold, warmup_sec):
     raw_runs = []
 
-    # Compute SL, i.e., lag trend, for each tested configuration
-    filenames = [filename for filename in os.listdir(directory) if filename.startswith(f"exp{exp_id}") and filename.endswith("totallag.csv")]
+    # Compute SLI, i.e., lag trend, for each tested configuration
+    filenames = [filename for filename in os.listdir(directory) if filename.startswith(f"exp{exp_id}") and "lag-trend" in filename and filename.endswith(".csv")]
     for filename in filenames:
-        #print(filename)
         run_params = filename[:-4].split("_")
-        dim_value = run_params[2]
-        instances = run_params[3]
+        dim_value = run_params[1]
+        instances = run_params[2]
 
         df = pd.read_csv(os.path.join(directory, filename))
-        #input = df.loc[df['topic'] == "input"]
         input = df
-        #print(input)
+
         input['sec_start'] = input.loc[0:, 'timestamp'] - input.iloc[0]['timestamp']
-        #print(input)
-        #print(input.iloc[0, 'timestamp'])
+
         regress = input.loc[input['sec_start'] >= warmup_sec] # Warm-Up
-        #regress = input
 
-        #input.plot(kind='line',x='timestamp',y='value',color='red')
-        #plt.show()
+        X = regress.iloc[:, 1].values.reshape(-1, 1)  # .values converts the column into a NumPy array
+        Y = regress.iloc[:, 2].values.reshape(-1, 1)  # -1 infers the number of rows for a single column
 
-        X = regress.iloc[:, 2].values.reshape(-1, 1)  # values converts it into a numpy array
-        Y = regress.iloc[:, 3].values.reshape(-1, 1)  # -1 means that calculate the dimension of rows, but have 1 column
         linear_regressor = LinearRegression()  # create object for the class
         linear_regressor.fit(X, Y)  # perform linear regression
         Y_pred = linear_regressor.predict(X)  # make predictions
 
         trend_slope = linear_regressor.coef_[0][0]
-        #print(linear_regressor.coef_)
 
         row = {'load': int(dim_value), 'resources': int(instances), 'trend_slope': trend_slope}
-        #print(row)
         raw_runs.append(row)
 
     runs = pd.DataFrame(raw_runs)
 
-    # Set suitable = True if SLOs are met, i.e., lag trend is below threshold
-    runs["suitable"] =  runs.apply(lambda row: row['trend_slope'] < threshold, axis=1)
-
-    # Sort results table (unsure if required)
-    runs.columns = runs.columns.str.strip()
-    runs.sort_values(by=["load", "resources"])
+    # Group by load and resources to handle repetitions and take the median of the repetitions;
+    # for an even number of repetitions, the mean of the two middle values is used
+    medians = runs.groupby(by=['load', 'resources'], as_index=False).median()
 
-    # Filter only suitable configurations
-    filtered = runs[runs.apply(lambda x: x['suitable'], axis=1)]
-
-    # Compute demand per load intensity
-    grouped = filtered.groupby(['load'])['resources'].min()
-    demand_per_load = grouped.to_frame().reset_index()
+    # Set suitable = True if SLOs are met, i.e., lag trend slope is below threshold
+    medians["suitable"] =  medians.apply(lambda row: row['trend_slope'] < threshold, axis=1)
 
+    suitable = medians[medians.apply(lambda x: x['suitable'], axis=1)]
+
+    # Compute minimal demand per load intensity
+    demand_per_load = suitable.groupby(by=['load'], as_index=False)['resources'].min()
+
     return demand_per_load
+
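The revised pipeline in `demand.py` — per-run lag-trend slope via linear regression, medians across repetitions, then the minimum suitable resources per load — can be sketched on toy data. The slope values, threshold, and column names below are made up for illustration; they are not taken from real measurements:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def lag_trend_slope(run: pd.DataFrame, warmup_sec: int = 0) -> float:
    """Slope of the consumer lag over time, ignoring the warm-up period."""
    regress = run[run['sec_start'] >= warmup_sec]
    X = regress['sec_start'].values.reshape(-1, 1)
    Y = regress['value'].values.reshape(-1, 1)
    model = LinearRegression().fit(X, Y)
    return model.coef_[0][0]

# A toy run whose lag grows by exactly 2 messages per second
run = pd.DataFrame({'sec_start': [0, 10, 20, 30], 'value': [0, 20, 40, 60]})
slope = lag_trend_slope(run)

# Hypothetical slopes for two repetitions of each configuration
runs = pd.DataFrame([
    {'load': 10000, 'resources': 1, 'trend_slope': 150.0},
    {'load': 10000, 'resources': 1, 'trend_slope': 90.0},
    {'load': 10000, 'resources': 2, 'trend_slope': 5.0},
    {'load': 10000, 'resources': 2, 'trend_slope': 7.0},
])
threshold = 100

# Median over repetitions, filter by the SLO, minimum resources per load
medians = runs.groupby(by=['load', 'resources'], as_index=False).median()
suitable = medians[medians['trend_slope'] < threshold]
demand_per_load = suitable.groupby(by=['load'], as_index=False)['resources'].min()
# One instance is not suitable (median slope 120 >= 100), so the demand for load 10000 is 2
```

Taking the median before filtering is what makes the result robust to a single outlier repetition: one failing run out of several no longer disqualifies a configuration.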
diff --git a/theodolite/src/main/kotlin/theodolite/execution/TheodoliteExecutor.kt b/theodolite/src/main/kotlin/theodolite/execution/TheodoliteExecutor.kt
index f5054dc2d8c3525562118b559ab8987215dc4ea1..addf30acde31ee8e3e53c20a5e2b57a03587d08e 100644
--- a/theodolite/src/main/kotlin/theodolite/execution/TheodoliteExecutor.kt
+++ b/theodolite/src/main/kotlin/theodolite/execution/TheodoliteExecutor.kt
@@ -115,10 +115,10 @@ class TheodoliteExecutor(
         val ioHandler = IOHandler()
         val resultsFolder = ioHandler.getResultFolderURL()
         this.config.executionId = getAndIncrementExecutionID(resultsFolder + "expID.txt")
-        ioHandler.writeToJSONFile(this.config, "$resultsFolder${this.config.executionId}-execution-configuration")
+        ioHandler.writeToJSONFile(this.config, "${resultsFolder}exp${this.config.executionId}-execution-configuration")
         ioHandler.writeToJSONFile(
             kubernetesBenchmark,
-            "$resultsFolder${this.config.executionId}-benchmark-configuration"
+            "${resultsFolder}exp${this.config.executionId}-benchmark-configuration"
         )
 
         val config = buildConfig()
@@ -130,7 +130,7 @@ class TheodoliteExecutor(
         }
         ioHandler.writeToJSONFile(
             config.compositeStrategy.benchmarkExecutor.results,
-            "$resultsFolder${this.config.executionId}-result"
+            "${resultsFolder}exp${this.config.executionId}-result"
         )
     }