Commit 2071b542 authored by Sören Henning

Merge branch 'main' into code-cleanup

parents 7fc9e55e 17441964
Showing 55 additions and 515 deletions
@@ -27,6 +27,18 @@ Patchers can be seen as functions which take a value as input and modify a Kuber
* **properties**:
* loadGenMaxRecords: 150000
* **DataVolumeLoadGeneratorReplicaPatcher**: Takes the total load to be generated, computes the number of instances needed for this load based on `maxVolume` (instances = (load + maxVolume - 1) / maxVolume), and calculates the load per instance (loadPerInstance = load / instances). The number of instances is set for the load generator and the given variable is set to the load per instance.
* **type**: "DataVolumeLoadGeneratorReplicaPatcher"
* **resource**: "osp-load-generator-deployment.yaml"
* **properties**:
* maxVolume: "50"
* container: "workload-generator"
* variableName: "DATA_VOLUME"
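The replica arithmetic described above can be sketched as follows (the function name is illustrative, not part of Theodolite):

```python
def compute_replicas_and_load(load: int, max_volume: int) -> tuple[int, int]:
    """Mirror the DataVolumeLoadGeneratorReplicaPatcher computation:
    ceiling-divide the total load by maxVolume to get the instance count,
    then spread the load evenly across those instances."""
    instances = (load + max_volume - 1) // max_volume  # ceiling division
    load_per_instance = load // instances
    return instances, load_per_instance

# e.g. a total load of 120 with maxVolume 50 needs 3 instances of 40 each
print(compute_replicas_and_load(120, 50))
```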
* **ReplicaPatcher**: Allows modifying the number of replicas for a Kubernetes deployment.
* **type**: "ReplicaPatcher"
* **resource**: "uc1-kstreams-deployment.yaml"
* **EnvVarPatcher**: Modifies the value of an environment variable for a container in a Kubernetes deployment.
* **type**: "EnvVarPatcher"
* **resource**: "uc1-load-generator-deployment.yaml"
@@ -34,6 +46,14 @@ Patchers can be seen as functions which take a value as input and modify a Kuber
* container: "workload-generator"
* variableName: "NUM_SENSORS"
* **ConfigMapYamlPatcher**: Allows adding or modifying a key-value pair in a YAML file of a ConfigMap.
* **type**: "ConfigMapYamlPatcher"
* **resource**: "flink-configuration-configmap.yaml"
* **properties**:
* fileName: "flink-conf.yaml"
* variableName: "jobmanager.memory.process.size"
* **value**: "4Gb"
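A minimal sketch of this patcher's idea for flat `key: value` files such as `flink-conf.yaml` (the real patcher handles full YAML; the function name is illustrative, not part of Theodolite):

```python
def patch_yaml_key(yaml_text: str, key: str, value: str) -> str:
    """Replace the line defining `key` if present, otherwise append it.
    Covers only the flat `key: value` layout used by flink-conf.yaml."""
    lines = yaml_text.splitlines()
    for i, line in enumerate(lines):
        if line.split(":", 1)[0].strip() == key:
            lines[i] = f"{key}: {value}"
            break
    else:  # key not found: add it at the end
        lines.append(f"{key}: {value}")
    return "\n".join(lines) + "\n"

patched = patch_yaml_key(
    "jobmanager.rpc.address: flink-jobmanager\n",
    "jobmanager.memory.process.size", "4Gb",
)
```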
* **NodeSelectorPatcher**: Changes the node selection field in Kubernetes resources.
* **type**: "NodeSelectorPatcher"
* **resource**: "uc1-load-generator-deployment.yaml"
......
apiVersion: v1
kind: ConfigMap
metadata:
name: flink-config
labels:
app: flink
data:
flink-conf.yaml: |+
jobmanager.rpc.address: flink-jobmanager
taskmanager.numberOfTaskSlots: 1 #TODO
#blob.server.port: 6124
#jobmanager.rpc.port: 6123
#taskmanager.rpc.port: 6122
#queryable-state.proxy.ports: 6125
#jobmanager.memory.process.size: 4Gb
#taskmanager.memory.process.size: 4Gb
#parallelism.default: 1 #TODO
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.interval: 10 SECONDS
taskmanager.network.detailed-metrics: true
# -> gives metrics about inbound/outbound network queue lengths
log4j-console.properties: |+
# This affects logging for both user code and Flink
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender
rootLogger.appenderRef.rolling.ref = RollingFileAppender
# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name= org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
# Log all infos to the console
appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
# Log all infos in the given rolling file
appender.rolling.name = RollingFileAppender
appender.rolling.type = RollingFile
appender.rolling.append = false
appender.rolling.fileName = ${sys:log.file}
appender.rolling.filePattern = ${sys:log.file}.%i
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 10
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
logger.netty.level = OFF
\ No newline at end of file
@@ -63,33 +63,5 @@ spec:
port: 6123
initialDelaySeconds: 30
periodSeconds: 60
# volumeMounts:
# - name: flink-config-volume-rw
# mountPath: /opt/flink/conf
# - name: job-artifacts-volume
# mountPath: /opt/flink/usrlib
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
# initContainers:
# - name: init-jobmanager
# image: busybox:1.28
# command: ['cp', '-a', '/flink-config/.', '/flink-config-rw/']
# volumeMounts:
# - name: flink-config-volume
# mountPath: /flink-config/
# - name: flink-config-volume-rw
# mountPath: /flink-config-rw/
# volumes:
# - name: flink-config-volume
# configMap:
# name: flink-config
# items:
# - key: flink-conf.yaml
# path: flink-conf.yaml
# - key: log4j-console.properties
# path: log4j-console.properties
# - name: flink-config-volume-rw
# emptyDir: {}
# - name: job-artifacts-volume
# hostPath:
# path: /host/path/to/job/artifacts
@@ -20,29 +20,8 @@ spec:
image: ghcr.io/cau-se/theodolite-uc1-beam-flink:latest
args: ["taskmanager"]
env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: "theodolite-kafka-kafka-bootstrap:9092"
- name: SCHEMA_REGISTRY_URL
value: "http://theodolite-kafka-schema-registry:8081"
- name: CHECKPOINTING
value: "false"
# - name: PARALLELISM
# value: "1"
- name: "FLINK_STATE_BACKEND"
value: "rocksdb"
- name: JOB_MANAGER_RPC_ADDRESS
value: "flink-jobmanager"
# - name: TASK_MANAGER_NUMBER_OF_TASK_SLOTS
# value: "1" #TODO
# - name: FLINK_PROPERTIES
# value: |+
# blob.server.port: 6124
# jobmanager.rpc.port: 6123
# taskmanager.rpc.port: 6122
# queryable-state.proxy.ports: 6125
# jobmanager.memory.process.size: 4Gb
# taskmanager.memory.process.size: 4Gb
# #parallelism.default: 1 #TODO
resources:
limits:
memory: 4Gi
@@ -54,33 +33,5 @@ spec:
name: query-state
- containerPort: 9249
name: metrics
# livenessProbe:
# tcpSocket:
# port: 6122
# initialDelaySeconds: 30
# periodSeconds: 60
# volumeMounts:
# - name: flink-config-volume-rw
# mountPath: /opt/flink/conf/
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
# initContainers:
# - name: init-taskmanager
# image: busybox:1.28
# command: ['cp', '-a', '/flink-config/.', '/flink-config-rw/']
# volumeMounts:
# - name: flink-config-volume
# mountPath: /flink-config/
# - name: flink-config-volume-rw
# mountPath: /flink-config-rw/
# volumes:
# - name: flink-config-volume
# configMap:
# name: flink-config
# items:
# - key: flink-conf.yaml
# path: flink-conf.yaml
# - key: log4j-console.properties
# path: log4j-console.properties
# - name: flink-config-volume-rw
# emptyDir: {}
@@ -36,11 +36,6 @@ spec:
properties:
container: "jobmanager"
variableName: "PARALLELISM"
- type: "EnvVarPatcher" # required?
resource: "taskmanager-deployment.yaml"
properties:
container: "taskmanager"
variableName: "PARALLELISM"
loadTypes:
- typeName: "NumSensors"
patchers:
......
@@ -7,18 +7,16 @@ metadata:
data:
flink-conf.yaml: |+
jobmanager.rpc.address: flink-jobmanager
- taskmanager.numberOfTaskSlots: 1 #TODO
- #blob.server.port: 6124
- #jobmanager.rpc.port: 6123
- #taskmanager.rpc.port: 6122
- #queryable-state.proxy.ports: 6125
- #jobmanager.memory.process.size: 4Gb
- #taskmanager.memory.process.size: 4Gb
- #parallelism.default: 1 #TODO
+ blob.server.port: 6124
+ jobmanager.rpc.port: 6123
+ taskmanager.rpc.port: 6122
+ queryable-state.proxy.ports: 6125
+ jobmanager.memory.process.size: 4Gb
+ taskmanager.memory.process.size: 4Gb
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.interval: 10 SECONDS
+ # gives metrics about inbound/outbound network queue lengths
taskmanager.network.detailed-metrics: true
- # -> gives metrics about inbound/outbound network queue lengths
log4j-console.properties: |+
# This affects logging for both user code and Flink
rootLogger.level = INFO
......
@@ -29,17 +29,6 @@ spec:
value: "1"
- name: "FLINK_STATE_BACKEND"
value: "rocksdb"
- name: JOB_MANAGER_RPC_ADDRESS
value: "flink-jobmanager"
- name: FLINK_PROPERTIES
value: |+
blob.server.port: 6124
jobmanager.rpc.port: 6123
taskmanager.rpc.port: 6122
queryable-state.proxy.ports: 6125
jobmanager.memory.process.size: 4Gb
taskmanager.memory.process.size: 4Gb
#parallelism.default: 1 #TODO
resources:
limits:
memory: 4Gi
@@ -61,21 +50,10 @@ spec:
initialDelaySeconds: 30
periodSeconds: 60
volumeMounts:
- - name: flink-config-volume-rw
+ - name: flink-config-volume
mountPath: /opt/flink/conf
# - name: job-artifacts-volume
# mountPath: /opt/flink/usrlib
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
initContainers:
- name: init-jobmanager
image: busybox:1.28
command: ['cp', '-a', '/flink-config/.', '/flink-config-rw/']
volumeMounts:
- name: flink-config-volume
mountPath: /flink-config/
- name: flink-config-volume-rw
mountPath: /flink-config-rw/
volumes:
- name: flink-config-volume
configMap:
@@ -85,8 +63,3 @@ spec:
path: flink-conf.yaml
- key: log4j-console.properties
path: log4j-console.properties
- name: flink-config-volume-rw
emptyDir: {}
# - name: job-artifacts-volume
# hostPath:
# path: /host/path/to/job/artifacts
@@ -18,30 +18,6 @@ spec:
containers:
- name: taskmanager
image: ghcr.io/cau-se/theodolite-uc1-flink:latest
env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: "theodolite-kafka-kafka-bootstrap:9092"
- name: SCHEMA_REGISTRY_URL
value: "http://theodolite-kafka-schema-registry:8081"
- name: CHECKPOINTING
value: "false"
- name: PARALLELISM
value: "1"
- name: "FLINK_STATE_BACKEND"
value: "rocksdb"
- name: JOB_MANAGER_RPC_ADDRESS
value: "flink-jobmanager"
- name: TASK_MANAGER_NUMBER_OF_TASK_SLOTS
value: "1" #TODO
- name: FLINK_PROPERTIES
value: |+
blob.server.port: 6124
jobmanager.rpc.port: 6123
taskmanager.rpc.port: 6122
queryable-state.proxy.ports: 6125
jobmanager.memory.process.size: 4Gb
taskmanager.memory.process.size: 4Gb
#parallelism.default: 1 #TODO
resources:
limits:
memory: 4Gi
@@ -60,19 +36,10 @@ spec:
initialDelaySeconds: 30
periodSeconds: 60
volumeMounts:
- - name: flink-config-volume-rw
+ - name: flink-config-volume
mountPath: /opt/flink/conf/
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
initContainers:
- name: init-taskmanager
image: busybox:1.28
command: ['cp', '-a', '/flink-config/.', '/flink-config-rw/']
volumeMounts:
- name: flink-config-volume
mountPath: /flink-config/
- name: flink-config-volume-rw
mountPath: /flink-config-rw/
volumes:
- name: flink-config-volume
configMap:
@@ -82,5 +49,3 @@ spec:
path: flink-conf.yaml
- key: log4j-console.properties
path: log4j-console.properties
- name: flink-config-volume-rw
emptyDir: {}
@@ -36,11 +36,6 @@ spec:
properties:
container: "jobmanager"
variableName: "PARALLELISM"
- type: "EnvVarPatcher" # required?
resource: "taskmanager-deployment.yaml"
properties:
container: "taskmanager"
variableName: "PARALLELISM"
loadTypes:
- typeName: "NumSensors"
patchers:
......
apiVersion: v1
kind: ConfigMap
metadata:
name: flink-config
labels:
app: flink
data:
flink-conf.yaml: |+
jobmanager.rpc.address: flink-jobmanager
taskmanager.numberOfTaskSlots: 1 #TODO
#blob.server.port: 6124
#jobmanager.rpc.port: 6123
#taskmanager.rpc.port: 6122
#queryable-state.proxy.ports: 6125
#jobmanager.memory.process.size: 4Gb
#taskmanager.memory.process.size: 4Gb
#parallelism.default: 1 #TODO
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.interval: 10 SECONDS
taskmanager.network.detailed-metrics: true
# -> gives metrics about inbound/outbound network queue lengths
log4j-console.properties: |+
# This affects logging for both user code and Flink
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender
rootLogger.appenderRef.rolling.ref = RollingFileAppender
# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name= org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
# Log all infos to the console
appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
# Log all infos in the given rolling file
appender.rolling.name = RollingFileAppender
appender.rolling.type = RollingFile
appender.rolling.append = false
appender.rolling.fileName = ${sys:log.file}
appender.rolling.filePattern = ${sys:log.file}.%i
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 10
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
logger.netty.level = OFF
\ No newline at end of file
@@ -20,27 +20,8 @@ spec:
image: ghcr.io/cau-se/theodolite-uc2-beam-flink:latest
args: ["taskmanager"]
env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: "theodolite-kafka-kafka-bootstrap:9092"
- name: SCHEMA_REGISTRY_URL
value: "http://theodolite-kafka-schema-registry:8081"
- name: CHECKPOINTING
value: "false"
- name: "FLINK_STATE_BACKEND"
value: "rocksdb"
- name: JOB_MANAGER_RPC_ADDRESS
value: "flink-jobmanager"
# - name: TASK_MANAGER_NUMBER_OF_TASK_SLOTS
# value: "1" #TODO
# - name: FLINK_PROPERTIES
# value: |+
# blob.server.port: 6124
# jobmanager.rpc.port: 6123
# taskmanager.rpc.port: 6122
# queryable-state.proxy.ports: 6125
# jobmanager.memory.process.size: 4Gb
# taskmanager.memory.process.size: 4Gb
# #parallelism.default: 1 #TODO
resources:
limits:
memory: 4Gi
@@ -52,11 +33,5 @@ spec:
name: query-state
- containerPort: 9249
name: metrics
# livenessProbe:
# tcpSocket:
# port: 6122
# initialDelaySeconds: 30
# periodSeconds: 60
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
@@ -36,11 +36,6 @@ spec:
properties:
container: "jobmanager"
variableName: "PARALLELISM"
- type: "EnvVarPatcher" # required?
resource: "taskmanager-deployment.yaml"
properties:
container: "taskmanager"
variableName: "PARALLELISM"
loadTypes:
- typeName: "NumSensors"
patchers:
......
@@ -6,19 +6,17 @@ metadata:
app: flink
data:
flink-conf.yaml: |+
- #jobmanager.rpc.address: flink-jobmanager
- #taskmanager.numberOfTaskSlots: 1 #TODO
- #blob.server.port: 6124
- #jobmanager.rpc.port: 6123
- #taskmanager.rpc.port: 6122
- #queryable-state.proxy.ports: 6125
- #jobmanager.memory.process.size: 4Gb
- #taskmanager.memory.process.size: 4Gb
- #parallelism.default: 1 #TODO
+ jobmanager.rpc.address: flink-jobmanager
+ blob.server.port: 6124
+ jobmanager.rpc.port: 6123
+ taskmanager.rpc.port: 6122
+ queryable-state.proxy.ports: 6125
+ jobmanager.memory.process.size: 4Gb
+ taskmanager.memory.process.size: 4Gb
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.interval: 10 SECONDS
+ # gives metrics about inbound/outbound network queue lengths
taskmanager.network.detailed-metrics: true
- # -> gives metrics about inbound/outbound network queue lengths
log4j-console.properties: |+
# This affects logging for both user code and Flink
rootLogger.level = INFO
......
@@ -29,17 +29,6 @@ spec:
value: "1"
- name: "FLINK_STATE_BACKEND"
value: "rocksdb"
- name: JOB_MANAGER_RPC_ADDRESS
value: "flink-jobmanager"
- name: FLINK_PROPERTIES
value: |+
blob.server.port: 6124
jobmanager.rpc.port: 6123
taskmanager.rpc.port: 6122
queryable-state.proxy.ports: 6125
jobmanager.memory.process.size: 4Gb
taskmanager.memory.process.size: 4Gb
#parallelism.default: 1 #TODO
resources:
limits:
memory: 4Gi
@@ -61,21 +50,10 @@ spec:
initialDelaySeconds: 30
periodSeconds: 60
volumeMounts:
- - name: flink-config-volume-rw
+ - name: flink-config-volume
mountPath: /opt/flink/conf
# - name: job-artifacts-volume
# mountPath: /opt/flink/usrlib
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
initContainers:
- name: init-jobmanager
image: busybox:1.28
command: ['cp', '-a', '/flink-config/.', '/flink-config-rw/']
volumeMounts:
- name: flink-config-volume
mountPath: /flink-config/
- name: flink-config-volume-rw
mountPath: /flink-config-rw/
volumes:
- name: flink-config-volume
configMap:
@@ -85,8 +63,3 @@ spec:
path: flink-conf.yaml
- key: log4j-console.properties
path: log4j-console.properties
- name: flink-config-volume-rw
emptyDir: {}
# - name: job-artifacts-volume
# hostPath:
# path: /host/path/to/job/artifacts
@@ -18,30 +18,6 @@ spec:
containers:
- name: taskmanager
image: ghcr.io/cau-se/theodolite-uc2-flink:latest
env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: "theodolite-kafka-kafka-bootstrap:9092"
- name: SCHEMA_REGISTRY_URL
value: "http://theodolite-kafka-schema-registry:8081"
- name: CHECKPOINTING
value: "false"
- name: PARALLELISM
value: "1"
- name: "FLINK_STATE_BACKEND"
value: "rocksdb"
- name: JOB_MANAGER_RPC_ADDRESS
value: "flink-jobmanager"
- name: TASK_MANAGER_NUMBER_OF_TASK_SLOTS
value: "1" #TODO
- name: FLINK_PROPERTIES
value: |+
blob.server.port: 6124
jobmanager.rpc.port: 6123
taskmanager.rpc.port: 6122
queryable-state.proxy.ports: 6125
jobmanager.memory.process.size: 4Gb
taskmanager.memory.process.size: 4Gb
#parallelism.default: 1 #TODO
resources:
limits:
memory: 4Gi
@@ -60,19 +36,10 @@ spec:
initialDelaySeconds: 30
periodSeconds: 60
volumeMounts:
- - name: flink-config-volume-rw
+ - name: flink-config-volume
mountPath: /opt/flink/conf/
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
initContainers:
- name: init-taskmanager
image: busybox:1.28
command: ['cp', '-a', '/flink-config/.', '/flink-config-rw/']
volumeMounts:
- name: flink-config-volume
mountPath: /flink-config/
- name: flink-config-volume-rw
mountPath: /flink-config-rw/
volumes:
- name: flink-config-volume
configMap:
@@ -82,5 +49,3 @@ spec:
path: flink-conf.yaml
- key: log4j-console.properties
path: log4j-console.properties
- name: flink-config-volume-rw
emptyDir: {}
@@ -36,11 +36,6 @@ spec:
properties:
container: "jobmanager"
variableName: "PARALLELISM"
- type: "EnvVarPatcher" # required?
resource: "taskmanager-deployment.yaml"
properties:
container: "taskmanager"
variableName: "PARALLELISM"
loadTypes:
- typeName: "NumSensors"
patchers:
......
apiVersion: v1
kind: ConfigMap
metadata:
name: flink-config
labels:
app: flink
data:
flink-conf.yaml: |+
jobmanager.rpc.address: flink-jobmanager
taskmanager.numberOfTaskSlots: 1 #TODO
#blob.server.port: 6124
#jobmanager.rpc.port: 6123
#taskmanager.rpc.port: 6122
#queryable-state.proxy.ports: 6125
#jobmanager.memory.process.size: 4Gb
#taskmanager.memory.process.size: 4Gb
#parallelism.default: 1 #TODO
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.interval: 10 SECONDS
taskmanager.network.detailed-metrics: true
# -> gives metrics about inbound/outbound network queue lengths
log4j-console.properties: |+
# This affects logging for both user code and Flink
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender
rootLogger.appenderRef.rolling.ref = RollingFileAppender
# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name= org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
# Log all infos to the console
appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
# Log all infos in the given rolling file
appender.rolling.name = RollingFileAppender
appender.rolling.type = RollingFile
appender.rolling.append = false
appender.rolling.fileName = ${sys:log.file}
appender.rolling.filePattern = ${sys:log.file}.%i
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 10
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
logger.netty.level = OFF
\ No newline at end of file
@@ -20,27 +20,8 @@ spec:
image: ghcr.io/cau-se/theodolite-uc3-beam-flink:latest
args: ["taskmanager"]
env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: "theodolite-kafka-kafka-bootstrap:9092"
- name: SCHEMA_REGISTRY_URL
value: "http://theodolite-kafka-schema-registry:8081"
- name: CHECKPOINTING
value: "false"
- name: "FLINK_STATE_BACKEND"
value: "rocksdb"
- name: JOB_MANAGER_RPC_ADDRESS
value: "flink-jobmanager"
# - name: TASK_MANAGER_NUMBER_OF_TASK_SLOTS
# value: "1" #TODO
# - name: FLINK_PROPERTIES
# value: |+
# blob.server.port: 6124
# jobmanager.rpc.port: 6123
# taskmanager.rpc.port: 6122
# queryable-state.proxy.ports: 6125
# jobmanager.memory.process.size: 4Gb
# taskmanager.memory.process.size: 4Gb
# #parallelism.default: 1 #TODO
resources:
limits:
memory: 4Gi
@@ -52,11 +33,5 @@ spec:
name: query-state
- containerPort: 9249
name: metrics
# livenessProbe:
# tcpSocket:
# port: 6122
# initialDelaySeconds: 30
# periodSeconds: 60
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
@@ -36,11 +36,6 @@ spec:
properties:
container: "jobmanager"
variableName: "PARALLELISM"
- type: "EnvVarPatcher" # required?
resource: "taskmanager-deployment.yaml"
properties:
container: "taskmanager"
variableName: "PARALLELISM"
loadTypes:
- typeName: "NumSensors"
patchers:
......
@@ -6,19 +6,17 @@ metadata:
app: flink
data:
flink-conf.yaml: |+
- #jobmanager.rpc.address: flink-jobmanager
- #taskmanager.numberOfTaskSlots: 1 #TODO
- #blob.server.port: 6124
- #jobmanager.rpc.port: 6123
- #taskmanager.rpc.port: 6122
- #queryable-state.proxy.ports: 6125
- #jobmanager.memory.process.size: 4Gb
- #taskmanager.memory.process.size: 4Gb
- #parallelism.default: 1 #TODO
+ jobmanager.rpc.address: flink-jobmanager
+ blob.server.port: 6124
+ jobmanager.rpc.port: 6123
+ taskmanager.rpc.port: 6122
+ queryable-state.proxy.ports: 6125
+ jobmanager.memory.process.size: 4Gb
+ taskmanager.memory.process.size: 4Gb
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.interval: 10 SECONDS
+ # gives metrics about inbound/outbound network queue lengths
taskmanager.network.detailed-metrics: true
- # -> gives metrics about inbound/outbound network queue lengths
log4j-console.properties: |+
# This affects logging for both user code and Flink
rootLogger.level = INFO
......