Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Target project: she/theodolite
Showing 26 additions and 598 deletions
apiVersion: v1
kind: Pod
metadata:
  name: kafka-client
spec:
  containers:
  - name: kafka-client
    image: confluentinc/cp-enterprise-kafka:5.4.0
    command:
    - sh
    - -c
    - "exec tail -f /dev/null"
\ No newline at end of file
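The kafka-client Pod above only runs `tail -f /dev/null`, so it serves as an interactive shell for the Kafka CLI tools contained in the Confluent image. A minimal usage sketch; the bootstrap service name depends on the Helm release and is only an assumption:

```sh
# List topics from inside the client pod (replace the bootstrap server with your release's Kafka service).
kubectl exec -it kafka-client -- kafka-topics --bootstrap-server theodolite-cp-kafka:9092 --list
```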
## ------------------------------------------------------
## Zookeeper
## ------------------------------------------------------
cp-zookeeper:
  enabled: true
  servers: 3
  image: confluentinc/cp-zookeeper
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  persistence:
    enabled: false
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

## ------------------------------------------------------
## Kafka
## ------------------------------------------------------
cp-kafka:
  enabled: true
  brokers: 10
  image: confluentinc/cp-enterprise-kafka
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  persistence:
    enabled: false
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  configurationOverrides:
    # offsets.topic.replication.factor: "3"
    "message.max.bytes": "134217728" # 128 MB
    "replica.fetch.max.bytes": "134217728" # 128 MB
    # "default.replication.factor": 3
    # "min.insync.replicas": 2
    "auto.create.topics.enable": false
    "log.retention.ms": "10000" # 10s
    # "log.retention.ms": "86400000" # 24h
    # "group.initial.rebalance.delay.ms": "30000" # 30s
    "metrics.sample.window.ms": "5000" # 5s

## ------------------------------------------------------
## Schema Registry
## ------------------------------------------------------
cp-schema-registry:
  enabled: true
  image: confluentinc/cp-schema-registry
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

cp-kafka-rest:
  enabled: false

cp-kafka-connect:
  enabled: false

cp-ksql-server:
  enabled: false

cp-control-center:
  enabled: false

## ------------------------------------------------------
## Zookeeper
## ------------------------------------------------------
cp-zookeeper:
  enabled: true
  servers: 1
  image: confluentinc/cp-zookeeper
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  persistence:
    enabled: false
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

## ------------------------------------------------------
## Kafka
## ------------------------------------------------------
cp-kafka:
  enabled: true
  brokers: 1
  image: confluentinc/cp-enterprise-kafka
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  persistence:
    enabled: false
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  configurationOverrides:
    offsets.topic.replication.factor: "1"
    "message.max.bytes": "134217728" # 128 MB
    "replica.fetch.max.bytes": "134217728" # 128 MB
    # "default.replication.factor": 3
    # "min.insync.replicas": 2
    "auto.create.topics.enable": false
    "log.retention.ms": "10000" # 10s
    # "log.retention.ms": "86400000" # 24h
    "metrics.sample.window.ms": "5000" # 5s
  # access kafka from outside
  nodeport:
    enabled: true

## ------------------------------------------------------
## Schema Registry
## ------------------------------------------------------
cp-schema-registry:
  enabled: true
  image: confluentinc/cp-schema-registry
  imageTag: 5.4.0
  ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
  ## https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  imagePullSecrets:
  # - name: "regcred"
  heapOptions: "-Xms512M -Xmx512M"
  resources: {}
  ## If you do want to specify resources, uncomment the following lines, adjust them as necessary,
  ## and remove the curly braces after 'resources:'
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

cp-kafka-rest:
  enabled: false

cp-kafka-connect:
  enabled: false

cp-ksql-server:
  enabled: false

cp-control-center:
  enabled: false
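The `nodeport.enabled: true` setting in the values above exposes the brokers through NodePort services so that Kafka is reachable from outside the cluster. A hedged sketch for finding the port and connecting; the service naming and the use of `kcat` are assumptions, not part of the chart:

```sh
# List the NodePort services created for external Kafka access (names depend on the release name).
kubectl get services | grep -i nodeport

# Connect from outside the cluster, e.g. with kcat, using a node IP and the reported node port.
kcat -b <node-ip>:<node-port> -L
```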
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: theodolite
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: theodolite
subjects:
- kind: ServiceAccount
  name: theodolite
\ No newline at end of file
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: theodolite
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - delete
  - list
  - get
  - create
- apiGroups:
  - ""
  resources:
  - services
  - pods
  - configmaps
  verbs:
  - delete
  - list
  - get
  - create
- apiGroups:
  - ""
  resources:
  - pods/exec
  verbs:
  - create
  - get
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  verbs:
  - delete
  - list
  - create
  - get
- apiGroups:
  - theodolite.rocks
  resources:
  - executions
  - benchmarks
  verbs:
  - delete
  - list
  - get
  - create
  - watch
  - update
  - patch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - delete
  - get
  - create
  - update
\ No newline at end of file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: theodolite
\ No newline at end of file
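A quick way to check that the Role, RoleBinding, and ServiceAccount above fit together is to impersonate the service account with `kubectl auth can-i`; the `default` namespace is an assumption:

```sh
# Both commands should print "yes" if the RBAC objects are set up correctly.
kubectl auth can-i create deployments --as=system:serviceaccount:default:theodolite
kubectl auth can-i list executions.theodolite.rocks --as=system:serviceaccount:default:theodolite
```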
apiVersion: apps/v1
kind: Deployment
metadata:
  name: theodolite-results-access
  labels:
    app: theodolite-results-access
spec:
  replicas: 1
  selector:
    matchLabels:
      app: theodolite-results-access
  template:
    metadata:
      labels:
        app: theodolite-results-access
    spec:
      containers:
      - name: theodolite-results-access
        image: busybox:latest
        command:
        - sh
        - -c
        - exec tail -f /dev/null
        volumeMounts:
        - mountPath: /app/results
          name: theodolite-pv-storage
      volumes:
      - name: theodolite-pv-storage
        persistentVolumeClaim:
          claimName: theodolite-pv-claim
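The Deployment above exists only to keep the results volume mounted, so benchmark results can be copied out of the cluster with `kubectl cp`. A small sketch following the label and path from the manifest:

```sh
# Resolve the pod created by the Deployment and copy the results directory to the local machine.
POD=$(kubectl get pod -l app=theodolite-results-access -o jsonpath='{.items[0].metadata.name}')
kubectl cp "$POD:/app/results" ./results
```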
apiVersion: v1
kind: PersistentVolume
metadata:
  name: theodolite-pv-volume
  labels:
    type: local
spec:
  storageClassName: theodolite
  capacity:
    storage: 100m
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: </your/path/to/results/folder>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: theodolite-pv-claim
spec:
  storageClassName: theodolite
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100m

apiVersion: v1
kind: PersistentVolume
metadata:
  name: theodolite-pv-volume
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: </your/path/to/results/folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-name>
---
# https://kubernetes.io/docs/concepts/storage/storage-classes/#local
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: theodolite-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
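Before the local-storage variant above can be applied, the `</your/path/to/results/folder>` and `<node-name>` placeholders must be filled in. The node names can be looked up with kubectl; the manifest file name below is only a placeholder:

```sh
# List node names to substitute for <node-name> in the PersistentVolume's nodeAffinity.
kubectl get nodes -o name

# After replacing both placeholders, create the PersistentVolume, StorageClass, and PersistentVolumeClaim.
kubectl apply -f theodolite-pv-local.yaml
```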
apiVersion: v1
kind: Pod
metadata:
  name: theodolite-results-access
spec:
  restartPolicy: Always
  containers:
  - name: theodolite-results-access
    image: busybox:latest
    command:
    - sh
    - -c
    - exec tail -f /dev/null
    volumeMounts:
    - mountPath: /app/results
      name: theodolite-pv-storage
  volumes:
  - name: theodolite-pv-storage
    persistentVolumeClaim:
      claimName: theodolite-pv-claim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: theodolite-pv-claim
spec:
  storageClassName: "oci-bv"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default
\ No newline at end of file
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
\ No newline at end of file
alertmanager:
  enabled: false
grafana:
  enabled: false
kubeApiServer:
  enabled: false
kubelet:
  enabled: false
kubeControllerManager:
  enabled: false
coreDns:
  enabled: false
kubeDns:
  enabled: false
kubeEtcd:
  enabled: false
kubeScheduler:
  enabled: false
kubeProxy:
  enabled: false
kubeStateMetrics:
  enabled: false
nodeExporter:
  enabled: false
prometheusOperator:
  enabled: true
  namespaces:
    releaseNamespace: true
    additional: []
prometheus:
  enabled: false

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      #app: cp-kafka
      appScope: titan-ccp
  resources:
    requests:
      memory: 400Mi
  #scrapeInterval: 1s
  enableAdminAPI: true
\ No newline at end of file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
\ No newline at end of file
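Once the Prometheus Operator has reconciled the Prometheus resource above, its web UI can be reached via port forwarding. The service name `prometheus-operated` is the operator's default governing service and is assumed here:

```sh
# Forward the Prometheus web UI to http://localhost:9090 (service name is an assumption).
kubectl port-forward service/prometheus-operated 9090:9090
```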
apiVersion: v1
kind: Pod
metadata:
  name: zookeeper-client
  labels:
    app: zookeeper-client
spec:
  containers:
  - name: zookeeper-client
    image: zookeeper:3.7.0
    command:
    - sh
    - -c
    - "exec tail -f /dev/null"
apiVersion: batch/v1
kind: Job
metadata:
  name: theodolite
spec:
  template:
    spec:
      securityContext:
        runAsUser: 0 # Set the permissions for write access to the volumes.
      containers:
      - name: lag-analysis
        image: ghcr.io/cau-se/theodolite-slo-checker-lag-trend:latest
        ports:
        - containerPort: 80
          name: analysis
      - name: theodolite
        image: ghcr.io/cau-se/theodolite:latest
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # - name: MODE
        #   value: yaml-executor # Default is `yaml-executor`
        - name: THEODOLITE_EXECUTION
          value: "/deployments/execution/execution.yaml" # The name of this file must correspond to the filename of the execution, from which the config map is created.
        - name: THEODOLITE_BENCHMARK
          value: "/deployments/benchmark/benchmark.yaml" # The name of this file must correspond to the filename of the benchmark, from which the config map is created.
        - name: THEODOLITE_APP_RESOURCES
          value: "/deployments/benchmark-resources"
        - name: RESULTS_FOLDER # Folder for saving results
          value: /deployments/results # Default is the pwd (/deployments)
        # - name: CREATE_RESULTS_FOLDER # Specify whether the results folder should be created if it does not exist.
        #   value: "false" # Default is false.
        volumeMounts:
        - mountPath: "/deployments/results" # The mounted path must correspond to the value of `RESULTS_FOLDER`.
          name: theodolite-pv-storage
        - mountPath: "/deployments/benchmark-resources" # Must correspond to the value of `THEODOLITE_APP_RESOURCES`.
          name: benchmark-resources
        - mountPath: "/deployments/benchmark" # Must correspond to the value of `THEODOLITE_BENCHMARK`.
          name: benchmark
        - mountPath: "/deployments/execution" # Must correspond to the value of `THEODOLITE_EXECUTION`.
          name: execution
      restartPolicy: Never
      # Uncomment if RBAC is enabled and configured
      serviceAccountName: theodolite
      # Multiple volumes are needed to provide the corresponding files.
      # The names must correspond to the created ConfigMaps and the volumeMounts.
      volumes:
      - name: theodolite-pv-storage
        persistentVolumeClaim:
          claimName: theodolite-pv-claim
      - name: benchmark-resources
        configMap:
          name: benchmark-resources-configmap
      - name: benchmark
        configMap:
          name: benchmark-configmap
      - name: execution
        configMap:
          name: execution-configmap
  backoffLimit: 4
\ No newline at end of file
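The Job above expects the execution, benchmark, and resource files to be available as ConfigMaps whose names match its `volumes` section. They could be created from local files roughly as follows; the local file and directory names are placeholders:

```sh
# The ConfigMap names must match the configMap references in the Job's volumes section.
kubectl create configmap benchmark-resources-configmap --from-file=./benchmark-resources/
kubectl create configmap benchmark-configmap --from-file=./benchmark.yaml
kubectl create configmap execution-configmap --from-file=./execution.yaml

# Then start the benchmark run (the file name of the Job manifest is an assumption).
kubectl apply -f theodolite.yaml
```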
@@ -6,31 +6,27 @@ sources:
   - https://github.com/cau-se/theodolite
 maintainers:
   - name: Sören Henning
-    email: soeren.henning@email.uni-kiel.de
-    url: https://www.se.informatik.uni-kiel.de/en/team/soeren-henning-m-sc
+    email: soeren.henning@jku.at
+    url: https://www.jku.at/lit-cyber-physical-systems-lab/ueber-uns/team/dr-ing-soeren-henning/
 icon: https://www.theodolite.rocks/assets/logo/theodolite-stacked-transparent.svg
 type: application
 dependencies:
   - name: grafana
-    version: 6.17.5
+    version: 6.17.*
     repository: https://grafana.github.io/helm-charts
     condition: grafana.enabled
   - name: kube-prometheus-stack
-    version: 20.0.1
+    version: 41.7.*
     repository: https://prometheus-community.github.io/helm-charts
     condition: kube-prometheus-stack.enabled
-  - name: cp-helm-charts
-    version: 0.6.0
-    repository: https://soerenhenning.github.io/cp-helm-charts
-    condition: cp-helm-charts.enabled
   - name: strimzi-kafka-operator
-    version: 0.28.0
+    version: 0.29.*
     repository: https://strimzi.io/charts/
     condition: strimzi.enabled
-version: 0.9.0-SNAPSHOT
+version: 0.10.0-SNAPSHOT
-appVersion: 0.9.0-SNAPSHOT
+appVersion: 0.10.0-SNAPSHOT
@@ -9,7 +9,7 @@ helm dependencies update .
 helm install theodolite .
 ```
-**Hint for Windows users:** The Theodolite Helm chart makes use of some symbolic links. These are not properly created when this repository is checked out with Windows. There are a couple of solutions presented in this [Stack Overflow post](https://stackoverflow.com/q/5917249/4121056). A simpler workaround is to manually delete the symbolic links and replace them by the files and folders, they are pointing to. The relevant symbolic links are `benchmark-definitions` and the files inside `crd`.
+**Hint for Windows users:** The Theodolite Helm chart makes use of some symbolic links. These are not properly created when this repository is checked out with Windows. There are a couple of solutions presented in this [Stack Overflow post](https://stackoverflow.com/q/5917249/4121056). A simpler workaround is to manually delete the symbolic links and replace them by the files and folders, they are pointing to. The relevant symbolic links are `benchmark-definitions/examples`, `benchmark-definitions/theodolite-benchmarks` and the files inside `crd`.
 ## Customize Installation
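As an alternative to the manual replacement described in the Windows hint above, Git can be told to create real symbolic links during checkout (this requires symlink support on Windows, e.g. Developer Mode or an elevated shell); a hedged sketch:

```sh
# Clone with symlink creation enabled instead of replacing the links by hand.
git clone -c core.symlinks=true https://github.com/cau-se/theodolite.git
```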
@@ -47,10 +47,10 @@ helm uninstall theodolite
 Helm does not remove any CRDs created by this chart. You can remove them manually with:
 ```sh
-# CRDs from Theodolite
+# CRDs for Theodolite
 kubectl delete crd executions.theodolite.rocks
 kubectl delete crd benchmarks.theodolite.rocks
-# CRDs from Prometheus operator (see https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#uninstall-chart)
+# CRDs for Prometheus operator (see https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#uninstall-chart)
 kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
 kubectl delete crd alertmanagers.monitoring.coreos.com
 kubectl delete crd podmonitors.monitoring.coreos.com
@@ -59,6 +59,17 @@ kubectl delete crd prometheuses.monitoring.coreos.com
 kubectl delete crd prometheusrules.monitoring.coreos.com
 kubectl delete crd servicemonitors.monitoring.coreos.com
 kubectl delete crd thanosrulers.monitoring.coreos.com
+# CRDs for Strimzi
+kubectl delete crd kafkabridges.kafka.strimzi.io
+kubectl delete crd kafkaconnectors.kafka.strimzi.io
+kubectl delete crd kafkaconnects.kafka.strimzi.io
+kubectl delete crd kafkamirrormaker2s.kafka.strimzi.io
+kubectl delete crd kafkamirrormakers.kafka.strimzi.io
+kubectl delete crd kafkarebalances.kafka.strimzi.io
+kubectl delete crd kafkas.kafka.strimzi.io
+kubectl delete crd kafkatopics.kafka.strimzi.io
+kubectl delete crd kafkausers.kafka.strimzi.io
+kubectl delete crd strimzipodsets.core.strimzi.io
 ```
 ## Development
@@ -67,10 +78,11 @@ kubectl delete crd thanosrulers.monitoring.coreos.com
 The following 3rd party charts are used by Theodolite:
-- Kube Prometheus Stack (to install the Prometheus Operator, which is used to create a Prometheus instances)
-- Grafana (including a dashboard and a data source configuration)
-- Confluent Platform (for Kafka and Zookeeper)
-- Kafka Lag Exporter (used to collect monitoring data of the Kafka lag)
+- Kube Prometheus Stack
+  - to install the Prometheus Operator, which is used to create a Prometheus instances
+  - to deploy Grafana (including a dashboard and a data source configuration)
+- Grafana (deprecated as replaced by Kube Prometheus Stack)
+- Strimzi (for managing Kafka and Zookeeper)
 ### Hints