Commit 94a3999e authored by Simon Ehrenstein

Implement uc1 wg with new wg

parent 3cd0b988
Merge request !6: Add Distributed Workload Generator
Showing changes with 136 additions and 187 deletions
 customRulesJars=
 eclipse.preferences.version=1
-enabled=true
+enabled=false
 ruleSetFilePath=config/pmd.xml
@@ -68,19 +68,6 @@ below), we provide a [patch](https://github.com/SoerenHenning/cp-helm-charts)
 for these helm charts. Note that this patch is only required for observation and
 not for the actual benchmark execution and evaluation.
-<<<<<<< HEAD
-**TODO** Add required configuration, installation
-### The Kafka Lag Exporter
-Lightbend's Kafka Lag Exporter can be installed via helm:
-``sh
-helm install kafka-lag-exporter https://github.com/lightbend/kafka-lag-exporter/releases/download/v0.6.0/kafka-lag-exporter-0.6.0.tgz
-``
-**TODO** Add configuration + ServiceMonitor
-=======
 #### Our patched Confluent Helm Charts
 To use our patched Confluent Helm Charts clone the
@@ -118,17 +105,10 @@ To let Prometheus scrape Kafka lag metrics, deploy a ServiceMonitor:
 ```sh
 kubectl apply -f infrastructure/kafka-lag-exporter/service-monitor.yaml
 ```
->>>>>>> 624692753eb09684dd3dda3926482e9b56ada0d6
 ## Python 3.7
-<<<<<<< HEAD
-For executing benchmarks and analyzing their results, a Python 3.7 installation
-is required. We suggest to use a virtual environment placed in the `.venv` directory.
-**TODO** Show how to install requirements
-=======
 For executing benchmarks and analyzing their results, a **Python 3.7** installation
 is required. We suggest to use a virtual environment placed in the `.venv` directory.
@@ -169,4 +149,3 @@ The `./run_loop.sh` is the entrypoint for all benchmark executions. Is has to be
 * `<memory-limit>`: Kubernetes memory limit. Optional. Default `4Gi`.
 * `<commit-interval>`: Kafka Streams' commit interval in milliseconds. Optional. Default `100`.
 * `<duration>`: Duration in minutes subexperiments should be executed for. Optional. Default `5`.
->>>>>>> 624692753eb09684dd3dda3926482e9b56ada0d6
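The `run_loop.sh` parameters are plain positional shell arguments with `${n:-default}` fallbacks. A standalone sketch of the parsing; the argument values here are hypothetical:

```shell
#!/bin/bash
# Hypothetical invocation: use case 1, two dim values, three replica counts,
# everything else left at its default
set -- 1 "10000, 50000" "1, 2, 3"
UC=$1
# IFS=', ' splits on commas and spaces into bash arrays
IFS=', ' read -r -a DIM_VALUES <<< "$2"
IFS=', ' read -r -a REPLICAS <<< "$3"
# Optional parameters fall back to defaults via ${n:-default}
PARTITIONS=${4:-40}
MEMORY_LIMIT=${6:-4Gi}
COMMIT_INTERVAL=${7:-100}
DURATION=${8:-5}
echo "UC=$UC dims=${#DIM_VALUES[@]} replicas=${#REPLICAS[@]} partitions=$PARTITIONS"
# prints: UC=1 dims=2 replicas=3 partitions=40
```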
#!/bin/bash
CP_HELM_PATH='../../cp-helm-charts'
## minikube ##
# if minikube stops responding after a few minutes:
# minikube delete
# cd
# rm -r .minikube
# cd Dokumente/Master-2-SoSe-2020/project/spesb/execution/
#minikube config set memory 4046
#minikube delete
#minikube config set cpus 4
#minikube delete
#minikube start --vm-driver=virtualbox
## kind ##
kind delete cluster
kind create cluster # --config infrastructure/cluster/kind-configuration.yaml
# K8s dashboard
# Token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImdGNlU1U3BnN01XcS14RnlWUFRBODlaTzNpeUtxa1hTV3VKNTVmVGVrZ2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLW5rbnc1Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2NDE5NTUzYy0yY2RkLTQ2OGUtYTMwNS03YzZlNWQ5NjhmMzMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6ZGVmYXVsdCJ9.xkkZJViw4Q7RUbpzWjmTUGIGlgHfIhC94GuJ_bmGId3w8UxCP5PK6eq0yPNTTMJMT-yGr3qH7D1f616UNDJM1SrcoervG1fXyzw0XYmdbXluemW1LIm3WyzukhBs4dF4s93RrxUd9iHFjCrQanssXOSDDCZO-2V4BrpYEZ4TLvgMz9pAy4_k4-1gL4QKu8FBgCydBa2SBVOZ8tFy_5r38KH7j9eX0OJD8kyugcmPz0ARaIZhyZERyTHz3wmxY5E-W_qSe1GY12EeR9c5KlWeGYYIzheyBr-TpcyLuoQQguxmI7Ico917k0zG2YBSEYdfTo1I2LPXCYzbp__MAIV_TQ
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# kubectl proxy &
# prometheus
helm install prometheus-operator stable/prometheus-operator -f infrastructure/prometheus/helm-values.yaml
kubectl apply -f infrastructure/prometheus/service-account.yaml
kubectl apply -f infrastructure/prometheus/cluster-role.yaml
kubectl apply -f infrastructure/prometheus/cluster-role-binding.yaml
kubectl apply -f infrastructure/prometheus/prometheus.yaml
# grafana
kubectl apply -f infrastructure/grafana/prometheus-datasource-config-map.yaml
kubectl apply -f infrastructure/grafana/dashboard-config-map.yaml
helm install grafana stable/grafana -f infrastructure/grafana/values.yaml
# kafka + lag-exporter
helm install my-confluent $CP_HELM_PATH -f infrastructure/kafka/values.yaml
kubectl apply -f $CP_HELM_PATH/examples/kafka-client.yaml
kubectl apply -f infrastructure/kafka/service-monitor.yaml
helm install kafka-lag-exporter https://github.com/lightbend/kafka-lag-exporter/releases/download/v0.6.0/kafka-lag-exporter-0.6.0.tgz -f infrastructure/kafka-lag-exporter/values.yaml
kubectl apply -f infrastructure/kafka-lag-exporter/service-monitor.yaml
# port forwarding: kubectl port-forward --namespace monitoring <pod name> <local port>:<container port>
sleep 3m # wait for grafana and prometheus pods
kubectl port-forward $(kubectl get pods -o name | grep grafana) 3000:3000 &
kubectl port-forward $(kubectl get pods -o name | grep prometheus-prometheus) 9090:9090 &
# open web interfaces
# xdg-open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
xdg-open http://localhost:9090
xdg-open http://localhost:3000
# grafana token
# kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
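The fixed `sleep 3m` above only guesses when Grafana and Prometheus are up. An alternative sketch that blocks until the pods actually report Ready; the label selectors are assumptions and depend on the chart versions in use, so verify them with `kubectl get pods --show-labels` first:

```shell
# Wait for the Grafana and Prometheus pods instead of sleeping a fixed time
# (label selectors are assumptions for these chart versions)
kubectl wait --for=condition=ready pod -l "app.kubernetes.io/name=grafana" --timeout=300s
kubectl wait --for=condition=ready pod -l "app=prometheus" --timeout=300s
```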
@@ -3,7 +3,7 @@
 ## ------------------------------------------------------
 cp-zookeeper:
   enabled: true
-  servers: 3
+  servers: 1
   image: confluentinc/cp-zookeeper
   imageTag: 5.4.0
   ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
@@ -38,7 +38,7 @@ cp-zookeeper:
 ## ------------------------------------------------------
 cp-kafka:
   enabled: true
-  brokers: 10
+  brokers: 1
   image: confluentinc/cp-enterprise-kafka
   imageTag: 5.4.0
   ## Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace.
@@ -61,7 +61,7 @@ cp-kafka:
   #  cpu: 100m
   #  memory: 128Mi
   configurationOverrides:
-    #"offsets.topic.replication.factor": "3"
+    offsets.topic.replication.factor: 1
     "message.max.bytes": "134217728" # 128 MB
     "replica.fetch.max.bytes": "134217728" # 128 MB
     # "default.replication.factor": 3
......
@@ -9,4 +9,4 @@ roleRef:
 subjects:
 - kind: ServiceAccount
   name: prometheus
-  namespace: titan-scalability
+  namespace: default
\ No newline at end of file
@@ -4,8 +4,10 @@ UC=$1
 IFS=', ' read -r -a DIM_VALUES <<< "$2"
 IFS=', ' read -r -a REPLICAS <<< "$3"
 PARTITIONS=${4:-40}
-CPU_LIMIT=${5:-1000m}
-MEMORY_LIMIT=${6:-4Gi}
+CPU_LIMIT=${5:-200m}
+MEMORY_LIMIT=${6:-400Mi}
+#CPU_LIMIT=${5:-1000m}
+#MEMORY_LIMIT=${6:-4Gi}
 KAFKA_STREAMS_COMMIT_INTERVAL_MS=${7:-100}
 EXECUTION_MINUTES=${8:-5}
......
@@ -26,7 +26,8 @@ kubectl exec kafka-client -- bash -c "kafka-topics --zookeeper my-confluent-cp-z
 # Start workload generator
 NUM_SENSORS=$DIM_VALUE
-WL_MAX_RECORDS=150000
+#WL_MAX_RECORDS=150000
+WL_MAX_RECORDS=25
 WL_INSTANCES=$(((NUM_SENSORS + (WL_MAX_RECORDS -1 ))/ WL_MAX_RECORDS))
 WORKLOAD_GENERATOR_YAML=$(sed "s/{{NUM_SENSORS}}/$NUM_SENSORS/g; s/{{INSTANCES}}/$WL_INSTANCES/g" uc1-workload-generator/deployment.yaml)
......
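The `WL_INSTANCES` expression is an integer ceiling division: it yields the smallest number of generator instances such that each instance handles at most `WL_MAX_RECORDS` sensors. A standalone check with hypothetical values:

```shell
NUM_SENSORS=110
WL_MAX_RECORDS=25
# Integer ceiling division via the add-(divisor - 1) trick:
# ceil(110 / 25) = 5, since 4 instances would only cover 100 sensors
WL_INSTANCES=$(((NUM_SENSORS + (WL_MAX_RECORDS - 1)) / WL_MAX_RECORDS))
echo "$WL_INSTANCES"  # prints 5
```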
@@ -16,8 +16,13 @@ spec:
       terminationGracePeriodSeconds: 0
       containers:
       - name: workload-generator
-        image: soerenhenning/uc1-wg:latest
+        # image: soerenhenning/uc1-wg:latest
+        image: sehrenstein/uc1-wg:latest
         env:
+        - name: ZK_HOST
+          value: "my-confluent-cp-zookeeper"
+        - name: ZK_PORT
+          value: "2181"
         - name: KAFKA_BOOTSTRAP_SERVERS
           value: "my-confluent-cp-kafka:9092"
         - name: NUM_SENSORS
......
 customRulesJars=
 eclipse.preferences.version=1
-enabled=true
+enabled=false
 ruleSetFilePath=../config/pmd.xml
 version: '3.1'
 services:
   zookeeper:
     image: zookeeper
     ports:
-      - 2181:2181
+      - "2181:2181"
+  kafka:
+    image: wurstmeister/kafka
+    ports:
+      - "9092:9092"
+    environment:
+      KAFKA_ADVERTISED_HOST_NAME: localhost # Replace with docker network
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+      KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 30000
+      KAFKA_CREATE_TOPICS: "input:3:1"
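The compose file adds a single-broker Kafka next to ZooKeeper; `KAFKA_ADVERTISED_HOST_NAME` must be an address clients can actually reach (hence the "Replace with docker network" comment), and `KAFKA_CREATE_TOPICS: "input:3:1"` pre-creates the `input` topic with 3 partitions and replication factor 1. A typical local workflow, assuming the file is saved as `docker-compose.yml` and Docker is running; the exact topic-tool name inside the `wurstmeister/kafka` image may differ by version:

```shell
# Start ZooKeeper and Kafka in the background
docker-compose up -d
# Verify the pre-created "input" topic (tool name/flags are an assumption
# for this image version)
docker-compose exec kafka kafka-topics.sh --bootstrap-server localhost:9092 --list
```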
 package test;
-import common.KafkaWorkloadGenerator;
-import common.KafkaWorkloadGeneratorBuilder;
 import common.dimensions.Duration;
 import common.dimensions.KeySpace;
 import common.dimensions.Period;
+import common.generators.KafkaWorkloadGenerator;
+import common.generators.KafkaWorkloadGeneratorBuilder;
 import common.messages.OutputMessage;
+import common.misc.ZooKeeper;
+import communication.kafka.KafkaRecordSender;
 import java.util.concurrent.TimeUnit;
+import kieker.common.record.IMonitoringRecord;
 import titan.ccp.models.records.ActivePowerRecord;
 public class Main {
   public static void main(final String[] args) {
-    final KafkaWorkloadGenerator generator =
+    final KafkaRecordSender<IMonitoringRecord> recordSender =
+        new KafkaRecordSender<>("localhost:9092", "input");
+    final KafkaWorkloadGenerator<IMonitoringRecord> generator =
         KafkaWorkloadGeneratorBuilder.builder()
+            .setZooKeeper(new ZooKeeper("127.0.0.1", 2181))
+            .setKafkaRecordSender(recordSender)
             .setBeforeAction(() -> {
               System.out.println("Before Hook");
             })
@@ -26,10 +34,6 @@ public class Main {
                 new ActivePowerRecord(key, 0L, 100d)))
             .build();
-    // dwhedhwedherbfherf ferufer e u uebvhebzvbjkr fjkebhr erfberf rt gtr grt gtr
-    // gebuwbfuzerfuzerzgfer fe rf er fe rferhfveurfgerzfgzuerf erf erf ethvrif
     generator.start();
   }
......
 mainClassName = "spesb.uc1.workloadgenerator.LoadGenerator"
+dependencies {
+  compile project(':workload-generator-common')
+}
\ No newline at end of file
 package spesb.uc1.workloadgenerator;
+import common.dimensions.Duration;
+import common.dimensions.KeySpace;
+import common.dimensions.Period;
+import common.generators.KafkaWorkloadGenerator;
+import common.generators.KafkaWorkloadGeneratorBuilder;
+import common.messages.OutputMessage;
+import common.misc.ZooKeeper;
+import communication.kafka.KafkaRecordSender;
 import java.io.IOException;
-import java.util.List;
 import java.util.Objects;
 import java.util.Properties;
-import java.util.Random;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
-import java.util.regex.Pattern;
-import java.util.stream.Collectors;
-import java.util.stream.IntStream;
 import org.apache.kafka.clients.producer.ProducerConfig;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-import spesb.kafkasender.KafkaRecordSender;
 import titan.ccp.models.records.ActivePowerRecord;
 public class LoadGenerator {
   private static final Logger LOGGER = LoggerFactory.getLogger(LoadGenerator.class);
-  private static final int WL_MAX_RECORDS = 150_000;
   public static void main(final String[] args) throws InterruptedException, IOException {
     LOGGER.info("Start workload generator for use case UC1.");
     final int numSensors =
         Integer.parseInt(Objects.requireNonNullElse(System.getenv("NUM_SENSORS"), "10"));
-    final int instanceId = getInstanceId();
     final int periodMs =
         Integer.parseInt(Objects.requireNonNullElse(System.getenv("PERIOD_MS"), "1000"));
     final int value = Integer.parseInt(Objects.requireNonNullElse(System.getenv("VALUE"), "10"));
-    final int threads = Integer.parseInt(Objects.requireNonNullElse(System.getenv("THREADS"), "4"));
+    final int threads = Integer.parseInt(Objects.requireNonNullElse(System.getenv("THREADS"),
+        "4"));
+    final String zooKeeperHost = Objects.requireNonNullElse(System.getenv("ZK_HOST"), "localhost");
+    final int zooKeeperPort =
+        Integer.parseInt(Objects.requireNonNullElse(System.getenv("ZK_PORT"), "2181"));
     final String kafkaBootstrapServers =
         Objects.requireNonNullElse(System.getenv("KAFKA_BOOTSTRAP_SERVERS"), "localhost:9092");
     final String kafkaInputTopic =
@@ -41,13 +42,6 @@ public class LoadGenerator {
     final String kafkaLingerMs = System.getenv("KAFKA_LINGER_MS");
     final String kafkaBufferMemory = System.getenv("KAFKA_BUFFER_MEMORY");
-    final int idStart = instanceId * WL_MAX_RECORDS;
-    final int idEnd = Math.min((instanceId + 1) * WL_MAX_RECORDS, numSensors);
-    LOGGER.info("Generating data for sensors with IDs from {} to {} (exclusive).", idStart, idEnd);
-    final List<String> sensors = IntStream.range(idStart, idEnd)
-        .mapToObj(i -> "s_" + i)
-        .collect(Collectors.toList());
     final Properties kafkaProperties = new Properties();
     // kafkaProperties.put("acks", this.acknowledges);
     kafkaProperties.compute(ProducerConfig.BATCH_SIZE_CONFIG, (k, v) -> kafkaBatchSize);
@@ -60,33 +54,18 @@ public class LoadGenerator {
         r -> r.getTimestamp(),
         kafkaProperties);
-    final ScheduledExecutorService executor = Executors.newScheduledThreadPool(threads);
-    final Random random = new Random();
-    for (final String sensor : sensors) {
-      final int initialDelay = random.nextInt(periodMs);
-      executor.scheduleAtFixedRate(() -> {
-        kafkaRecordSender.write(new ActivePowerRecord(sensor, System.currentTimeMillis(), value));
-      }, initialDelay, periodMs, TimeUnit.MILLISECONDS);
-    }
-    System.out.println("Wait for termination...");
-    executor.awaitTermination(30, TimeUnit.DAYS);
-    System.out.println("Will terminate now");
+    final KafkaWorkloadGenerator<ActivePowerRecord> workloadGenerator =
+        KafkaWorkloadGeneratorBuilder.<ActivePowerRecord>builder()
+            .setKeySpace(new KeySpace("s_", numSensors))
+            .setThreads(threads)
+            .setPeriod(new Period(periodMs, TimeUnit.MILLISECONDS))
+            .setDuration(new Duration(100, TimeUnit.SECONDS))
+            .setGeneratorFunction(sensor -> new OutputMessage<>(sensor,
+                new ActivePowerRecord(sensor, System.currentTimeMillis(), value)))
+            .setZooKeeper(new ZooKeeper(zooKeeperHost, zooKeeperPort))
+            .setKafkaRecordSender(kafkaRecordSender)
+            .build();
+    workloadGenerator.start();
   }
-  private static int getInstanceId() {
-    final String podName = System.getenv("POD_NAME");
-    if (podName == null) {
-      return 0;
-    } else {
-      return Pattern.compile("-")
-          .splitAsStream(podName)
-          .reduce((p, x) -> x)
-          .map(Integer::parseInt)
-          .orElse(0);
-    }
-  }
 }
@@ -5,6 +5,7 @@ import java.util.Objects;
 import java.util.Properties;
 import org.apache.kafka.streams.KafkaStreams;
 import org.apache.kafka.streams.StreamsConfig;
+import spesb.uc3.streamprocessing.TopologyBuilder;
 import titan.ccp.common.kafka.streams.PropertiesBuilder;
 /**
......
@@ -13,7 +13,6 @@ sourceCompatibility = "1.11"
 targetCompatibility = "1.11"
 dependencies {
-  compile project(':')
   compile 'org.apache.curator:curator-recipes:4.3.0'
   compile 'org.slf4j:slf4j-simple:1.6.1'
......
@@ -13,7 +13,7 @@ public class Period extends Dimension {
   /**
    * Define a new period.
    *
    * @param period the period
    * @param timeUnit the time unit that applies to the specified {@code period}
    */
@@ -23,7 +23,7 @@ public class Period extends Dimension {
     this.timeUnit = timeUnit;
   }
-  public int getDuration() {
+  public int getPeriod() {
     return this.period;
   }
......
package common.dimensions.copy;
/*
* Base class for workload dimensions.
*/
public abstract class Dimension {
}
package common.dimensions.copy;
import java.util.concurrent.TimeUnit;
import common.generators.WorkloadGenerator;
/**
* Wrapper class for the definition of the duration for the {@link WorkloadGenerator}.
*/
public class Duration extends Dimension {
private final int duration;
private final TimeUnit timeUnit;
/**
* Define a new duration.
*
* @param duration the duration
* @param timeUnit the time unit that applies to the specified {@code duration}
*/
public Duration(final int duration, final TimeUnit timeUnit) {
super();
this.duration = duration;
this.timeUnit = timeUnit;
}
public int getDuration() {
return this.duration;
}
public TimeUnit getTimeUnit() {
return this.timeUnit;
}
}
package common.dimensions.copy;
import common.generators.WorkloadGenerator;
/**
* Wrapper class for the definition of the Keys that should be used by the
* {@link WorkloadGenerator}.
*/
public class KeySpace extends Dimension {
private final String prefix;
private final int min;
private final int max;
/**
* Create a new key space. All keys will have the prefix {@code prefix}. The remaining part of
* each key is a number from the interval [{@code min}, {@code max}), i.e. {@code min} inclusive
* and {@code max} exclusive.
*
* @param prefix the prefix to use for all keys
* @param min the lower bound (inclusive) to start counting from
* @param max the upper bound (exclusive) to count to
*/
public KeySpace(final String prefix, final int min, final int max) {
if (prefix == null || prefix.contains(";")) {
throw new IllegalArgumentException(
"The prefix must not be null and must not contain the ';' character.");
}
this.prefix = prefix;
this.min = min;
this.max = max;
}
public KeySpace(final String prefix, final int numberOfKeys) {
this(prefix, 0, numberOfKeys - 1);
}
public KeySpace(final int numberOfKeys) {
this("sensor_", 0, numberOfKeys - 1);
}
public String getPrefix() {
return this.prefix;
}
public int getMin() {
return this.min;
}
public int getMax() {
return this.max;
}
}