Make resources accessible from ConfigMap
Currently, all benchmark Kubernetes resource files (i.e., the stuff that is deployed and scaled by Theodolite) have to be available from inside the Theodolite pod. While this is totally fine for Theodolite's standalone mode, it comes with some obstacles for the operator. Mainly, this is due to the fact that volumes such as ConfigMaps have to be attached to a pod on startup. That is, if we add a new benchmark (i.e., `kubectl create -f <benchmark.yaml>`), we have to add all benchmark resource files to a ConfigMap that already exists. While this works in principle if we attach a (possibly empty) ConfigMap by default, we have to be aware of potential naming collisions. ConfigMaps do not support nested file structures, so all files have to be placed at the top level. First, this means we will end up with quite long file names such as `uc1-kstreams-my-resource-name.yaml`. Second, having multiple stream processing engines, multiple benchmarks, and multiple resources makes the entire file management quite chaotic. Third, and most severely, we need to be aware of all other files in that ConfigMap: even if we are only users of Theodolite who would like to add our own benchmarks, we have to make sure that no file with that name exists already.
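To make the collision problem concrete, here is a sketch of the flat layout that a single shared ConfigMap would force (all names are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: benchmark-resources # one shared, flat namespace for all benchmarks
data:
  # Every file from every benchmark ends up side by side at the top level,
  # so file names have to encode benchmark and engine to stay unique.
  uc1-kstreams-deployment.yaml: |
    # ...
  uc1-kstreams-load-generator-deployment.yaml: |
    # ...
  uc1-flink-deployment.yaml: |
    # ...
```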
For managing benchmarks, it would be easiest to have one (or multiple) ConfigMaps associated with a benchmark. As we cannot mount new ConfigMaps to a pod without restarting it, I propose to let the operator read the associated ConfigMaps. This shouldn't be a big issue as the operator already has access to `Benchmark` and `Execution` resources, so reading `ConfigMap`s should be fine as well. A possible adjustment of the `Benchmark` CRD could look as follows:
```yaml
appResource:
  - configmap: "uc1-kstreams"
    files:
      - "deployment.yaml"
      - "aggregation-service.yaml"
      - "jmx-configmap.yaml"
      - "service-monitor.yaml"
loadGenResource:
  - configmap: "uc1-kstreams"
    files:
      - "load-generator-deployment.yaml"
      - "load-generator-service.yaml"
```
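With this schema, the referenced ConfigMap would simply carry the resource files as its data keys, one entry per file. A sketch (the actual manifest contents are omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: uc1-kstreams # matches the configmap reference in the Benchmark
data:
  deployment.yaml: |
    # Deployment manifest for the stream processing application
  aggregation-service.yaml: |
    # Service manifest
  jmx-configmap.yaml: |
    # ConfigMap manifest for the JMX configuration
  service-monitor.yaml: |
    # ServiceMonitor manifest
  load-generator-deployment.yaml: |
    # Deployment manifest for the load generator
  load-generator-service.yaml: |
    # Service manifest for the load generator
```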
We might also think of allowing the following when referring to container-local files:
```yaml
appResource:
  - files: # means local file system
      - "deployment.yaml"
      - "aggregation-service.yaml"
      - "jmx-configmap.yaml"
      - "service-monitor.yaml"
```
I'm not sure whether there is a need for this. For legacy reasons, it would also be possible to make the `files` field optional, so that "old" declarations stay valid, but I don't think that's necessary.
An open question remains how benchmarks should be handled in standalone mode. There, I think I would prefer to mount the resource files. That feels more "standalone" (although we might have the necessary permissions anyway) and is easier to test locally. The second solution should also work fine in standalone mode; for the first solution, we would have to decide whether we also allow it or whether only the second solution is allowed in standalone mode.
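For the standalone case, mounting would then look like the usual ConfigMap volume attached at pod startup. A sketch, where the pod name, image reference, and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: theodolite-standalone # illustrative name
spec:
  containers:
    - name: theodolite
      image: theodolite # illustrative image reference
      volumeMounts:
        - name: benchmark-resources
          mountPath: /benchmark-resources # container-local path the files would refer to
  volumes:
    - name: benchmark-resources
      configMap:
        name: uc1-kstreams # has to exist before the pod starts
```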
Alternative schema:
```yaml
appResource:
  - type: ConfigMap
    name: "uc1-kstreams"
    files: # optional
      - "deployment.yaml"
      - "aggregation-service.yaml"
      - "jmx-configmap.yaml"
      - "service-monitor.yaml"
  - type: local
    files: # means local file system
      - "deployment.yaml"
      - "aggregation-service.yaml"
      - "jmx-configmap.yaml"
      - "service-monitor.yaml"
```