
Make analysis more flexible

Currently, we can only consider the metric "record lag" as a service level objective (SLO). To make benchmark evaluation more flexible and usable in different scenarios, we need to add support at different levels:

  • Evaluate an experiment by more than one SLO
    • Mark an experiment as successful only if all SLOs are complied with (see the second sketch below)
  • Add support for arbitrary Prometheus (range) queries
    • We probably need to adjust the interface of the (Python) SLO checker components:
      • Change the JSON sent to the SLO checker to allow arbitrary metrics. For example, send the following JSON (the first sketch below illustrates how a checker could consume it):
"metricName" : "sum by(group)(kafka_consumergroup_group_lag >= 0)",
"result" : [
  {
     "labels" : { "consumer-group" : "kstreams-123" },
     "value" : [ 1435781451.781, "0" ]
  }
  "evaluationMetadata" : { // this metada could be SLO checker specifc : 
    "thresholdPercent"  : 55%
  }
]
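
To make the intended interface change more concrete, here is a minimal Python sketch of how an SLO checker could consume the proposed JSON. The function name `check_slo` and the interpretation of `thresholdPercent` (maximum share of samples that may violate the objective) are assumptions for illustration only, not the existing checker implementation:

```python
def check_slo(payload: dict) -> bool:
    """Evaluate the proposed JSON against a checker-specific threshold.

    Assumption for illustration: "thresholdPercent" is the maximum share of
    returned samples that may violate the objective (i.e. have a value > 0).
    """
    metadata = payload.get("evaluationMetadata", {})
    threshold_percent = metadata.get("thresholdPercent", 0)

    # Collect the numeric sample values; Prometheus returns them as strings.
    samples = [float(entry["value"][1]) for entry in payload["result"]]
    if not samples:
        return True  # no data means nothing violated the objective

    violating = sum(1 for value in samples if value > 0)
    return (violating / len(samples)) * 100 <= threshold_percent
```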
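
For the first point, marking an experiment as successful only if all SLOs are complied with could then look roughly as follows. `SloConfig`, `fetch_result`, and the assumption that each checker answers with a JSON boolean are hypothetical and only illustrate the idea:

```python
from dataclasses import dataclass, field
from typing import Callable, List

import requests


@dataclass
class SloConfig:
    """Hypothetical description of one SLO; names are illustrative only."""
    checker_url: str          # endpoint of the (Python) SLO checker component
    prom_query: str           # arbitrary Prometheus (range) query
    evaluation_metadata: dict = field(default_factory=dict)


def evaluate_experiment(slos: List[SloConfig],
                        fetch_result: Callable[[str], list]) -> bool:
    """Return True only if every configured SLO is complied with."""
    outcomes = []
    for slo in slos:
        payload = {
            "metricName": slo.prom_query,
            "result": fetch_result(slo.prom_query),  # Prometheus query result
            "evaluationMetadata": slo.evaluation_metadata,
        }
        response = requests.post(slo.checker_url, json=payload)
        response.raise_for_status()
        # Assumes the checker answers with a JSON boolean.
        outcomes.append(bool(response.json()))
    return all(outcomes)
```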