diff --git a/README.md b/README.md
index f2673f4b9ed0c46987963f8b455e19def802db79..804a193df21f3883ecf9a727af5a743b77a9cceb 100644
--- a/README.md
+++ b/README.md
@@ -4,20 +4,17 @@
 
 Theodolite is a framework for benchmarking the horizontal and vertical scalability of stream processing engines. It consists of three modules:
 
-## Theodolite Benchmarks
-
-Theodolite contains 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding workload generator is provided. Currently, this repository provides benchmark implementations for Apache Kafka Streams and Apache Flink. The benchmark sources can be found in [Thedolite benchmarks](benchmarks).
-
-
-## Theodolite Execution Framework
-
-Theodolite aims to benchmark scalability of stream processing engines for real use cases. Microservices that apply stream processing techniques are usually deployed in elastic cloud environments. Hence, Theodolite's cloud-native benchmarking framework deploys its components in a cloud environment, orchestrated by Kubernetes. More information on how to execute scalability benchmarks can be found in [Thedolite execution framework](execution).
+## Theodolite Benchmarking Tool
 
+Theodolite aims to benchmark the scalability of stream processing engines for real use cases. Microservices that apply stream processing techniques are usually deployed in elastic cloud environments. Hence, Theodolite's cloud-native benchmarking framework deploys its components in a cloud environment, orchestrated by Kubernetes. It is recommended to install Theodolite with the package manager Helm. The Theodolite Helm chart, along with instructions on how to install it, can be found in the [`helm`](helm) directory.
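+
+For example, a typical installation from within the [`helm`](helm) directory might look like this (a minimal sketch; see the `helm` README for details and configuration options):
+
+```sh
+helm dependencies update .
+helm install theodolite .
+```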
 
 ## Theodolite Analysis Tools
 
-Theodolite's benchmarking method creates a *scalability graph* allowing to draw conclusions about the scalability of a stream processing engine or its deployment. A scalability graph shows how resource demand evolves with an increasing workload. Theodolite provides Jupyter notebooks for creating such scalability graphs based on benchmarking results from the execution framework. More information can be found in [Theodolite analysis tool](analysis).
+Theodolite's benchmarking method maps load intensities to the resource amounts that are required for processing them. A plot showing how resource demand evolves with increasing load makes it possible to draw conclusions about the scalability of a stream processing engine or its deployment. Theodolite provides Jupyter notebooks for creating such plots based on benchmarking results from the execution framework. More information can be found in [Theodolite analysis tools](analysis).
+
+## Theodolite Benchmarks
 
+Theodolite comes with 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding load generator is provided. Currently, this repository provides benchmark implementations for Apache Kafka Streams and Apache Flink. The benchmark sources can be found in [Theodolite benchmarks](theodolite-benchmarks).
 
 ## How to Cite
 
diff --git a/docs/README.md b/docs/README.md
index 4fd13bdfc157efe8b3491695bb83972f96a82c5d..eb0848d52ec4235c6325ba0a373ea2628e52a102 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -10,16 +10,20 @@ permalink: /
 
 Theodolite is a framework for benchmarking the horizontal and vertical scalability of stream processing engines. It consists of three modules:
 
-## Theodolite Benchmarks
+## Theodolite Benchmarking Tool
 
-Theodolite contains 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding workload generator is provided. Currently, this repository provides benchmark implementations for Kafka Streams.
+Theodolite aims to benchmark the scalability of stream processing engines for real use cases. Microservices that apply stream processing techniques are usually deployed in elastic cloud environments. Hence, Theodolite's cloud-native benchmarking framework deploys its components in a cloud environment, orchestrated by Kubernetes. It is recommended to install Theodolite with the package manager Helm. The Theodolite Helm chart, along with instructions on how to install it, can be found in the [`helm`](helm) directory.
 
+## Theodolite Analysis Tools
 
-## Theodolite Execution Framework
+Theodolite's benchmarking method maps load intensities to the resource amounts that are required for processing them. A plot showing how resource demand evolves with increasing load makes it possible to draw conclusions about the scalability of a stream processing engine or its deployment. Theodolite provides Jupyter notebooks for creating such plots based on benchmarking results from the execution framework. More information can be found in [Theodolite analysis tools](analysis).
 
-Theodolite aims to benchmark scalability of stream processing engines for real use cases. Microservices that apply stream processing techniques are usually deployed in elastic cloud environments. Hence, Theodolite's cloud-native benchmarking framework deploys as components in a cloud environment, orchestrated by Kubernetes. More information on how to execute scalability benchmarks can be found in [Thedolite execution framework](execution).
+## Theodolite Benchmarks
 
+Theodolite comes with 4 application benchmarks, which are based on typical use cases for stream processing within microservices. For each benchmark, a corresponding load generator is provided. Currently, this repository provides benchmark implementations for Apache Kafka Streams and Apache Flink. The benchmark sources can be found in [Theodolite benchmarks](theodolite-benchmarks).
 
-## Theodolite Analysis Tools
+## How to Cite
+
+If you use Theodolite, please cite
 
-Theodolite's benchmarking method create a *scalability graph* allowing to draw conclusions about the scalability of a stream processing engine or its deployment. A scalability graph shows how resource demand evolves with an increasing workload. Theodolite provides Jupyter notebooks for creating such scalability graphs based on benchmarking results from the execution framework. More information can be found in [Theodolite analysis tool](analysis).
+> Sören Henning and Wilhelm Hasselbring. (2021). Theodolite: Scalability Benchmarking of Distributed Stream Processing Engines in Microservice Architectures. Big Data Research, Volume 25. DOI: [10.1016/j.bdr.2021.100209](https://doi.org/10.1016/j.bdr.2021.100209). arXiv:[2009.00304](https://arxiv.org/abs/2009.00304).
diff --git a/helm/README.md b/helm/README.md
index 078c9c9a2b3f896d5cf5a30e7c2540a36f8057e4..1a3428b5e601de0c6c33f9dab236321e95592c6c 100644
--- a/helm/README.md
+++ b/helm/README.md
@@ -2,55 +2,47 @@
 
 ## Installation
 
-Install the chart via:
+The Theodolite Helm chart with all its dependencies can be installed via:
 
 ```sh
 helm dependencies update .
 helm install theodolite .
 ```
 
-This chart installs requirements to execute benchmarks with Theodolite.
+## Customize Installation
 
-Dependencies and subcharts:
+As usual, the installation with Helm can be configured by passing a values YAML file:
 
-- Prometheus Operator
-- Prometheus
-- Grafana (incl. dashboard and data source configuration)
-- Kafka
-- Zookeeper
-- A Kafka client pod
-
-## Test
-
-Test the installation:
-
-```sh
-helm test theodolite
+```sh
+helm install theodolite . -f <your-config.yaml>
 ```
 
-Our test files are located [here](templates/../../theodolite-chart/templates/tests). Many subcharts have their own tests, these are also executed and are placed in the respective /templates folders. 
-
-Please note: If a test fails, Helm will stop testing.
+We provide a minimal configuration, especially suited for development environments, in the `preconfigs/minimal.yaml` file.
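+
+For example, installing the chart with this minimal configuration looks like:
+
+```sh
+helm install theodolite . -f preconfigs/minimal.yaml
+```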
 
-It is possible that the tests are not running successfully at the moment. This is because the Helm tests of the subchart cp-confluent receive a timeout exception. There is an [issue](https://github.com/confluentinc/cp-helm-charts/issues/318) for this problem on GitHub.
+By default, Helm installs the Theodolite CRDs used by the operator. If Theodolite will not be run as an operator or if the CRDs are already installed, you can skip their installation by adding the flag `--skip-crds`.
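+
+For example, skipping the CRD installation looks like this:
+
+```sh
+helm install theodolite . --skip-crds
+```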
 
-## Configuration
+## Test Installation
 
-In development environments Kubernetes resources are often low. To reduce resource consumption, we provide an `one-broker-value.yaml` file. This file can be used with:
+Test the installation with:
 
 ```sh
-helm install theodolite . -f preconfigs/one-broker-values.yaml
+helm test theodolite
 ```
 
+Our test files are located [here](templates/tests). Many subcharts have their own tests, which are also executed.
+Please note: If a test fails, Helm will stop testing.
+
 ## Uninstall this Chart
 
-To uninstall/delete the `theodolite` deployment (by default Helm will be install all CRDs (`execution` and `benchmark`) automatically. If Helm should not install these CRDs, use the flag `--skip-crds`)
+The Theodolite Helm chart can easily be removed with:
 
 ```sh
 helm uninstall theodolite
 ```
 
-This command does not remove the CRDs which are created by this chart. Remove them manually with:
+Helm does not remove any CRDs created by this chart. You can remove them manually with:
 
 ```sh
 # CRDs from Theodolite
@@ -69,9 +61,20 @@ kubectl delete crd thanosrulers.monitoring.coreos.com
 
 ## Development
 
-**Hints**:
+### Dependencies
+
+The following 3rd party charts are used by Theodolite:
+
+- Kube Prometheus Stack (to install the Prometheus Operator, which is used to create Prometheus instances)
+- Grafana (including a dashboard and a data source configuration)
+- Confluent Platform (for Kafka and Zookeeper)
+- Kafka Lag Exporter (used to collect monitoring data on the Kafka lag)
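+
+The following sketch illustrates how such dependencies are declared in the chart's `Chart.yaml`; the versions and repository URLs shown here are illustrative assumptions, not necessarily the exact values used by Theodolite:
+
+```yaml
+dependencies:
+  - name: kube-prometheus-stack        # Prometheus Operator and Prometheus
+    repository: https://prometheus-community.github.io/helm-charts
+    version: "12.x.x"                  # illustrative version constraint
+  - name: grafana
+    repository: https://grafana.github.io/helm-charts
+    version: "6.x.x"
+  - name: cp-helm-charts               # Confluent Platform: Kafka and Zookeeper
+    repository: https://confluentinc.github.io/cp-helm-charts/
+    version: "0.6.x"
+  - name: kafka-lag-exporter
+    repository: https://lightbend.github.io/kafka-lag-exporter/repo/
+    version: "0.6.x"
+```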
+
+### Hints
+
+#### Grafana
 
-- Grafana configuration: Grafana ConfigMaps contains expressions like {{ topic }}. Helm uses the same syntax for template function. More information [here](https://github.com/helm/helm/issues/2798)
+Grafana ConfigMaps contain expressions like `{{ topic }}`. Helm uses the same syntax for template functions. More information can be found [here](https://github.com/helm/helm/issues/2798).
   - Escape braces: {{ "{{" topic }}
   - Let Helm render the template as raw string: {{ `{{ <config>}}` }}
   
\ No newline at end of file
diff --git a/helm/preconfigs/minimal.yaml b/helm/preconfigs/minimal.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..c11b7ad0d3d9a76ab3d1e46184ad4f33bd2accb6
--- /dev/null
+++ b/helm/preconfigs/minimal.yaml
@@ -0,0 +1,8 @@
+cp-helm-charts:
+  cp-zookeeper:
+    servers: 1
+
+  cp-kafka:
+    brokers: 1
+    configurationOverrides:
+      offsets.topic.replication.factor: "1"
diff --git a/helm/preconfigs/one-broker-values.yaml b/helm/preconfigs/one-broker-values.yaml
deleted file mode 100644
index c53c1f1eb8bc7a17f192d70a6f10f8cacc09c98f..0000000000000000000000000000000000000000
--- a/helm/preconfigs/one-broker-values.yaml
+++ /dev/null
@@ -1,15 +0,0 @@
-cp-helm-charts:
-    ## ------------------------------------------------------
-    ## Zookeeper
-    ## ------------------------------------------------------
-    cp-zookeeper:
-      servers: 1 # default: 3 
-
-  ## ------------------------------------------------------
-  ## Kafka
-  ## ------------------------------------------------------
-    cp-kafka:
-        brokers: 1 # default: 10
-
-        configurationOverrides:
-          offsets.topic.replication.factor: "1"
\ No newline at end of file