diff --git a/docs/creating-a-benchmark.md b/docs/creating-a-benchmark.md
index 06b9e17ecaf5f1f25c719495a204d95ae7e09785..f27935ff9d7008d9b1486e7eb0f460d855eae54b 100644
--- a/docs/creating-a-benchmark.md
+++ b/docs/creating-a-benchmark.md
@@ -29,6 +29,11 @@ spec:
          files:
             - uc1-load-generator-service.yaml
             - uc1-load-generator-deployment.yaml
+  resourceTypes:
+    - typeName: "Instances"
+      patchers:
+        - type: "ReplicaPatcher"
+          resource: "uc1-kstreams-deployment.yaml"
   loadTypes:
     - typeName: "NumSensors"
       patchers:
@@ -41,6 +46,15 @@ spec:
           resource: "uc1-load-generator-deployment.yaml"
           properties:
             loadGenMaxRecords: "150000"
+  slos:
+    - name: "lag trend"
+      sloType: "lag trend"
+      prometheusUrl: "http://prometheus-operated:9090"
+      offset: 0
+      properties:
+        threshold: 3000
+        externalSloUrl: "http://localhost:80/evaluate-slope"
+        warmup: 60 # in seconds
   kafkaConfig:
     bootstrapServer: "theodolite-kafka-kafka-bootstrap:9092"
     topics:
@@ -60,8 +74,6 @@ Infrastructure resources live over the entire duration of a benchmark run. They
 
 ### Resources
 
-#### ConfigMap
-
 The recommended way to link Kubernetes resource files from a Benchmark is by bundling them in one or multiple ConfigMaps and referring to that ConfigMap from `sut.resources`, `loadGenerator.resources` or `infrastructure.resources`.
 To create a ConfigMap from all the Kubernetes resources in a directory run:
 
@@ -79,21 +91,13 @@ configMap:
   - example-service.yaml
 ```
 
-#### Filesystem
-
-Alternatively, resources can also be read from the filesystem, Theodolite has access to. This usually requires that the Benchmark resources are available in a volume, which is mounted into the Theodolite container.
-
-```yaml
-filesystem:
-  path: example/path/to/files
-  files:
-  - example-deployment.yaml
-  - example-service.yaml
-```
-
 ### Actions
 
 Sometimes it is not sufficient to just define resources that are created and deleted when running a benchmark. Instead, it might be necessary to define certain actions that will be executed before running or after stopping the benchmark.
+Theodolite supports *actions*, which can run before (`beforeActions`) or after (`afterActions`) all `sut`, `loadGenerator` or `infrastructure` resources are deployed.
+Theodolite provides two types of actions:
+
+#### Exec Actions
 
 Theodolite allows executing commands on running pods, similar to `kubectl exec` or Kubernetes' [container lifecycle handlers](https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
 For example, the following actions will create a file in a pod with label `app: logger` before the SUT is started and delete it after the SUT is stopped:
@@ -102,26 +106,48 @@ For example, the following actions will create a file in a pod with label `app:
   sut:
     resources: # ...
     beforeActions:
-      - selector:
-          pod:
-            matchLabels:
-              app: logger
-        exec:
+      - exec:
+          selector:
+            pod:
+              matchLabels:
+                app: logger
+            container: logger # optional
           command: ["touch", "file-used-by-logger.txt"]
           timeoutSeconds: 90
     afterActions:
-      - selector:
-          pod:
-            matchLabels:
-              app: logger
-        exec:
+      - exec:
+          selector:
+            pod:
+              matchLabels:
+                app: logger
+            container: logger # optional
           command: [ "rm", "file-used-by-logger.txt" ]
           timeoutSeconds: 90
 ```
 
 Theodolite checks if all referenced pods are available for the specified actions. That means these pods must either be defined in `infrastructure` or already deployed in the cluster. If not all referenced pods are available, the benchmark will not be set as `Ready`. Consequently, an action cannot be executed on a pod that is defined as an SUT or load generator resource.
 
-*Note: Actions should be used sparingly. While it is possible to define entire benchmarks imperatively as actions, it is considered better practice to define as much as possible using declarative, native Kubernetes resource files.*
+*Note: Exec actions should be used sparingly. While it is possible to define entire benchmarks imperatively as actions, it is considered better practice to define as much as possible using declarative, native Kubernetes resource files.*
+
+#### Delete Actions
+
+Sometimes it is required to delete Kubernetes resources before or after running a benchmark.
+This is typically the case for resources that are automatically created while running a benchmark.
+For example, Kafka Streams creates internal Kafka topics. When using the [Strimzi](https://strimzi.io/) Kafka operator, we can delete these topics by deleting the corresponding Kafka topic resource.
+
+As shown in the following example, delete actions select the resources to be deleted by specifying their *apiVersion*, *kind* and a [regular expression](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html) for their name.
+
+```yaml
+  sut:
+    resources: # ...
+    beforeActions:
+      - delete:
+          selector:
+            apiVersion: kafka.strimzi.io/v1beta2
+            kind: KafkaTopic
+            nameRegex: ^some-internal-topic-.*
+```
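+
+Delete actions can likewise be defined as `afterActions`, for example, to remove the internal topics once the benchmark has stopped (same selector fields as above):
+
+```yaml
+  sut:
+    resources: # ...
+    afterActions:
+      - delete:
+          selector:
+            apiVersion: kafka.strimzi.io/v1beta2
+            kind: KafkaTopic
+            nameRegex: ^some-internal-topic-.*
+```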
 
 <!--
 A Benchmark refers to other Kubernetes resources (e.g., Deployments, Services, ConfigMaps), which describe the system under test, the load generator and infrastructure components such as a middleware used in the benchmark. To manage those resources, Theodolite needs to have access to them. This is done by bundling resources in ConfigMaps.
@@ -140,6 +166,33 @@ See the [patcher API reference](api-reference/patchers) for an overview of avail
 
 If a benchmark is [executed by an Execution](running-benchmarks), these patchers are used to configure SUT and load generator according to the [load and resource values](creating-an-execution) set in the Execution.
 
+## Service Level Objectives (SLOs)
+
+SLOs provide a way to quantify whether a certain load intensity can be handled by a certain amount of provisioned resources.
+In Theodolite, SLOs are evaluated by requesting monitoring data from Prometheus and analyzing it in a benchmark-specific way.
+An Execution must define at least one SLO to be checked.
+
+A good choice to get started is defining an SLO of type `generic`:
+
+```yaml
+- sloType: "generic"
+  prometheusUrl: "http://prometheus-operated:9090"
+  offset: 0
+  properties:
+    externalSloUrl: "http://localhost:8082"
+    promQLQuery: "sum by(job) (kafka_streams_stream_task_metrics_dropped_records_total>=0)"
+    warmup: 60 # in seconds
+    queryAggregation: max
+    repetitionAggregation: median
+    operator: lte
+    threshold: 1000
+```
+
+All you have to do is define a [PromQL query](https://prometheus.io/docs/prometheus/latest/querying/basics/) describing which metrics should be requested (`promQLQuery`) and how the resulting time series should be evaluated. With `queryAggregation` you specify how each resulting time series is aggregated to a single value, while `repetitionAggregation` describes how the results of multiple repetitions are aggregated. Possible values are
+`mean`, `median`, `mode`, `sum`, `count`, `max`, `min`, `std`, `var`, `skew`, `kurt` as well as percentiles such as `p99` or `p99.9`. The result of aggregating all repetitions is checked against `threshold`. This check is performed using an `operator`, which requires the result to be "less than" (`lt`), "less than or equal to" (`lte`), "greater than" (`gt`) or "greater than or equal to" (`gte`) the threshold.
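+
+For instance, to check the 99th percentile of the query results per experiment, averaged over all repetitions (the threshold below is a made-up value):
+
+```yaml
+- sloType: "generic"
+  prometheusUrl: "http://prometheus-operated:9090"
+  offset: 0
+  properties:
+    externalSloUrl: "http://localhost:8082"
+    promQLQuery: "sum by(job) (kafka_streams_stream_task_metrics_dropped_records_total>=0)"
+    warmup: 60 # in seconds
+    queryAggregation: p99
+    repetitionAggregation: mean
+    operator: lt
+    threshold: 100 # hypothetical threshold
+```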
+
+In case you need to evaluate monitoring data in a more flexible fashion, you can also point `externalSloUrl` to your own custom SLO checker. Have a look at the source code of the [generic SLO checker](https://github.com/cau-se/theodolite/tree/master/slo-checker/generic) to get started.
+
 ## Kafka Configuration
 
 Theodolite can automatically create and remove Kafka topics for each SLO experiment when a `kafkaConfig` is set.
diff --git a/docs/creating-an-execution.md b/docs/creating-an-execution.md
index e33019a574ec7d4a3486c9ea7778efeb6e959260..fbbfb4d3310ba19117ca78cad8c383b9f354e068 100644
--- a/docs/creating-an-execution.md
+++ b/docs/creating-an-execution.md
@@ -16,7 +16,7 @@ kind: execution
 metadata:
   name: theodolite-example-execution
 spec:
-  benchmark: "uc1-kstreams"
+  benchmark: "example-benchmark"
   load:
     loadType: "NumSensors"
     loadValues: [25000, 50000]
@@ -24,20 +24,19 @@ spec:
     resourceType: "Instances"
     resourceValues: [1, 2]
   slos:
-    - sloType: "lag trend"
-      prometheusUrl: "http://prometheus-operated:9090"
-      offset: 0
+    - name: "lag trend"
       properties:
         threshold: 2000
-        externalSloUrl: "http://localhost:80/evaluate-slope"
-        warmup: 60 # in seconds
   execution:
-    strategy: "LinearSearch"
+    metric: "demand"
+    strategy:
+      name: "RestrictionSearch"
+      restrictions:
+        - "LowerBound"
+      searchStrategy: "LinearSearch"
     duration: 300 # in seconds
     repetitions: 1
     loadGenerationDelay: 30 # in seconds
-    restrictions:
-      - "LowerBound"
   configOverrides:
     - patcher:
         type: "SchedulerNamePatcher"
@@ -53,41 +52,20 @@ Similar to [Kubernetes Jobs](https://kubernetes.io/docs/concepts/workloads/contr
 
 An Execution always refers to a Benchmark. For the Execution to run, the Benchmark must be registered with Kubernetes and it must be in state *Ready*. If this is not the case, the Execution will remain in state *Pending*.
 
-As a Benchmark may define multiple supported load and resource types, an Execution has to pick exactly one of each by its name. Additionally, it defines the set of load values and resource values the benchmark should be executed with. Both these values are represented as integers, which are interpreted in a [Benchmark-specific way](creating-a-benchmark) to configure the SUT and load generator.
+## Selecting Load Type, Resource Type and SLOs
 
-## Definition of SLOs
-
-SLOs provide a way to quantify whether a certain load intensity can be handled by a certain amount of provisioned resources.
-In Theodolite, SLO are evaluated by requesting monitoring data from Prometheus and analyzing it in a benchmark-specific way.
-An Execution must at least define one SLO to be checked.
-
-A good choice to get started is defining an SLO of type `generic`:
-
-```yaml
-- sloType: "generic"
-  prometheusUrl: "http://prometheus-operated:9090"
-  offset: 0
-  properties:
-    externalSloUrl: "http://localhost:8082"
-    promQLQuery: "sum by(job) (kafka_streams_stream_task_metrics_dropped_records_total>=0)"
-    warmup: 60 # in seconds
-    queryAggregation: max
-    repetitionAggregation: median
-    operator: lte
-    threshold: 1000
-```
-
-All you have to do is to define a [PromQL query](https://prometheus.io/docs/prometheus/latest/querying/basics/) describing which metrics should be requested (`promQLQuery`) and how the resulting time series should be evaluated. With `queryAggregation` you specify how the resulting time series is aggregated to a single value and `repetitionAggregation` describes how the results of multiple repetitions are aggregated. Possible values are
-`mean`, `median`, `mode`, `sum`, `count`, `max`, `min`, `std`, `var`, `skew`, `kurt` as well as percentiles such as `p99` or `p99.9`. The result of aggregation all repetitions is checked against `threshold`. This check is performed using an `operator`, which describes that the result must be "less than" (`lt`), "less than equal" (`lte`), "greater than" (`gt`) or "greater than equal" (`gte`) to the threshold.
-
-In case you need to evaluate monitoring data in a more flexible fashion, you can also change the value of `externalSloUrl` to your custom SLO checker. Have a look at the source code of the [generic SLO checker](https://github.com/cau-se/theodolite/tree/master/slo-checker/generic) to get started.
+As a Benchmark may define multiple supported load and resource types, an Execution has to pick exactly one of each by its name. Additionally, it defines the set of load values and resource values the benchmark should be executed with.
+Both these values are represented as integers, which are interpreted in a [Benchmark-specific way](creating-a-benchmark#load-and-resource-types) to configure the SUT and load generator.
+Similarly, an Execution can select a subset of the [SLOs defined in the Benchmark](creating-a-benchmark#service-level-objectives-slos). Additionally, these SLOs can be configured by their `properties`.
+<!-- TODO: What happens if slos are not set? -->
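+
+For example, to select the Benchmark's "lag trend" SLO and override its threshold:
+
+```yaml
+  slos:
+    - name: "lag trend" # must match an SLO name defined in the Benchmark
+      properties:
+        threshold: 2000 # overrides the value set in the Benchmark
+```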
 
 ## Experimental Setup
 
 According to Theodolite's measurement method, isolated SLO experiments are performed for different combinations of load intensity and resource amounts.
 The experimental setup can be configured by:
 
-* A search strategy (`strategy`), which determines which load and resource combinations should be tested. Supported values are `FullSearch`, `LinearSearch` and `BinarySearch`. Additionally, a `restrictions` can be set to `LowerBound`.
+* A [scalability metric](concepts/metrics) (`metric`). Supported values are `demand` and `capacity`, with `demand` being the default.
+* A [search strategy](concepts/search-strategies) (`strategy`), which determines which load and resource combinations should be tested. Supported strategies are `FullSearch`, `LinearSearch` and `BinarySearch`. Additionally, the `RestrictionSearch` strategy wraps another `searchStrategy` and restricts its search space, for example, with the `LowerBound` restriction.
 * The `duration` per SLO experiment in seconds.
 * The number of repetitions (`repetitions`) for each SLO experiment.
 * A `loadGenerationDelay`, specifying the time in seconds before the load generation starts.
diff --git a/docs/drafts/filesystem.md b/docs/drafts/filesystem.md
new file mode 100644
index 0000000000000000000000000000000000000000..d45d9f8b09c7e5f01099d8d5edc1929d8360b9ea
--- /dev/null
+++ b/docs/drafts/filesystem.md
@@ -0,0 +1,13 @@
+## Creating a Benchmark with Filesystem Resources
+
+Alternatively, resources can also be read from a filesystem that Theodolite has access to. This usually requires that the Benchmark resources are available in a volume, which is mounted into the Theodolite container.
+
+```yaml
+filesystem:
+  path: example/path/to/files
+  files:
+  - example-deployment.yaml
+  - example-service.yaml
+```
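+
+Such a volume could, for example, be mounted into the Theodolite container as follows (all names and paths here are illustrative):
+
+```yaml
+containers:
+  - name: theodolite
+    # ...
+    volumeMounts:
+      - name: benchmark-resources
+        mountPath: /deployments/example/path/to/files
+volumes:
+  - name: benchmark-resources
+    persistentVolumeClaim:
+      claimName: benchmark-resources-claim
+```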
diff --git a/docs/installation.md b/docs/installation.md
index a01cfd14dff18d61f9d0e7de434fcaf9971280eb..a46caea3ef97e6225a5c0af33e8bffe94a160e32 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -38,11 +38,6 @@ To store the results of benchmark executions in a [PersistentVolume](https://kub
 You can also use an existing PersistentVolumeClaim by setting `operator.resultsVolume.persistent.existingClaim`.
 If persistence is not enabled, all results will be gone upon pod termination.
 
-
-### Standalone mode
-
-Per default, Theodolite is installed in operator mode, which allows to run and manage benchmarks through the Kubernetes API. For running Theodolite in standalone mode, it is sufficient to disable the operator by setting `operator.enabled` to `false`. Additionally, you might want to add the command line argument `--skip-crds`. With these settings, only Theodolite's dependencies as well as resources to get the necessary permissions are installed.
-
 ### Random scheduler
 
 Installation of the random scheduler can be enabled via `randomScheduler.enabled`. Please note that the random scheduler is neither required in operator mode nor in standalone mode. However, it has to be installed if benchmark executions should use random scheduling.
@@ -56,19 +51,6 @@ In cases, where you need to install multiple Theodolite instances, it's best to
 
 *Note that for meaningful results, usually only one benchmark should be executed at a time.*
 
-### Installation with a release name other than `theodolite`
-
-When using another release name than `theodolite`, make sure to adjust the Confluent Schema Registry configuration of you `values.yaml` accordingly:
-
-```yaml
-cp-helm-charts:
-  cp-schema-registry:
-    kafka:
-      bootstrapServers: <your-release-name>-kafka-kafka-bootstrap:9092
-```
-
-This seems unfortunately to be necessary as Helm does not let us inject values into dependency charts.
-
 
 ## Test the Installation
 
diff --git a/docs/running-benchmarks.md b/docs/running-benchmarks.md
index c4459183870d945d159551e54aab10cce7ea372e..812de3c763beedf6b6618f9c2c097766811fab79 100644
--- a/docs/running-benchmarks.md
+++ b/docs/running-benchmarks.md
@@ -59,7 +59,7 @@ To run a benchmark, an Execution YAML file needs to be created such as the follo
 apiVersion: theodolite.rocks/v1beta1
 kind: execution
 metadata:
-  name: theodolite-example-execution # (1) give your execution a name
+  name: theodolite-example-execution # (1) give a name to your execution
 spec:
   benchmark: "uc1-kstreams" # (2) refer to the benchmark to be run
   load:
@@ -68,21 +68,14 @@ spec:
   resources:
    resourceType: "Instances" # (5) choose one of the benchmark's resource types
     resourceValues: [1, 2] # (6) select a set of resource amounts
-  slos: # (7) set your SLOs
-    - sloType: "lag trend"
-      prometheusUrl: "http://prometheus-operated:9090"
-      offset: 0
-      properties:
-        threshold: 2000
-        externalSloUrl: "http://localhost:80/evaluate-slope"
-        warmup: 60 # in seconds
   execution:
-    strategy: "LinearSearch" # (8) chose a search strategy
-    restrictions: ["LowerBound"] # (9) add restrictions for the strategy
-    duration: 300 # (10) set the experiment duration in seconds
-    repetitions: 1 # (11) set the number of repetitions
-    loadGenerationDelay: 30 # (12) configure a delay before load generation
-  configOverrides: []
+    strategy:
+      name: "RestrictionSearch" # (8) choose a search strategy
+      restrictions: ["LowerBound"] # (9) restrict the search space
+      searchStrategy: "LinearSearch" # (10) choose the wrapped search strategy
+    duration: 300 # (11) set the experiment duration in seconds
+    repetitions: 1 # (12) set the number of repetitions
+    loadGenerationDelay: 30 # (13) configure a delay before load generation
 ```
 
 See [Creating an Execution](creating-an-execution) for a more detailed explanation on how to create Executions.