diff --git a/README b/README
index d11cffa9b700af962197f2ad416f96d8ff69b3a1..e6cc5b1dcb3519afcf61d77376718a0212145a63 100644
--- a/README
+++ b/README
@@ -9,22 +9,14 @@ Thus, the documentation might be outdated.
 The MooBench micro-benchmarks can be used to quantify the performance
 overhead caused by monitoring framework components.
 
-Currenly (directly) supported monitoring frameworks include:
+Currently (directly) supported monitoring frameworks are:
 * Kieker (http://kieker-monitoring.net)
-* inspectIT (http://inspectit.eu/)
+* OpenTelemetry (https://opentelemetry.io/)
 * SPASS-meter (https://github.com/SSEHUB/spassMeter.git)
 
-An ant script (build.xml) is provided to prepare the benchmark for the
-respective monitoring framwork. Corresponding build targets, providing
-preconfigured builds for each supported framework, are available.
-For instance, the target "build-kieker" prepares a jar for Kieker
-benchmarking experiments.
-
-The relevant build targets are:
-* build-all (framework independant benchmark)
-* build-kieker (Kieker)
-* build-inspectit (inspectIT)
-* build-spassmeter (SPASS-meter)
+A Gradle build file is provided to prepare the benchmark. To build the
+monitored application and copy it to the framework you want to benchmark,
+just execute `./gradlew assemble`.
 
 All experiments are started with the provided "External Controller"
 scripts. These scripts are available inside the respective bin/
@@ -38,13 +30,9 @@ The default execution of the benchmark requires a 64Bit JVM!
 However, this behavior can be changed in the respective .sh scripts.
 
 Initially, the following steps are required:
-1. You should check whether you installed ant (http://ant.apache.org/),
-   since the execution of all examples described in this
-   README is based on the run-targets in the ant file build.xml.
-2. Make sure, that you've installed R (http://www.r-project.org/) to
+1. Make sure that you've installed R (http://www.r-project.org/) to
    generate the results.
-3. Compile the application by calling ant with the appropriate build
-   target.
+2. Compile the application by calling `./gradlew assemble`.
 
 Execution of the micro-benchmark:
 All benchmarks are started with calls of .sh scripts in the bin folder.
@@ -68,23 +56,3 @@ Analyzing the data:
 In the folder /bin/r are some R scripts provided to generate graphs
 to visualize the results. In the top the files, one can configure the
 required paths and the configuration used to analyze the data.
-
-
-(Outdated) Documentation of additional experiments:
-===================================================
-
-Different recursion depth (with MAXRECURSIONDEPTH=1 without recursion)
--> bin/run-benchmark-recursive.sh
-
-To check for a linear rise in monitoring overhead, this benchmark
-increases the recursion depth up to 2^MAXRECURSIONDEPTH in logarithmic
-steps
--> bin/run-benchmark-recursive-linear.sh
-
-Benchmarking the JMX-writer
--> bin/run-benchmark-recursive-jmx.sh
-
-The experiments run-cycle*.sh and their used files
-run-benchmark-cycle-*.sh are currently only supporting Solaris
-environments and require pfexec permissions to assign subsets of cores
-to the benchmarking system.
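
Note for readers of this patch: a minimal end-to-end session following the
updated instructions might look like the sketch below. The framework folder,
controller script name, and R script name are placeholders (the diff only
states that controller scripts live in the respective bin/ folder and that
analysis scripts live in bin/r), so adjust them to the files actually present
in the repository.

    # Build the monitored application and copy it to the framework folders
    ./gradlew assemble

    # Run an "External Controller" script from the chosen framework's bin/
    # directory (path and script name are hypothetical examples)
    cd Kieker/bin
    ./run-benchmark.sh

    # Analyze the results with one of the R scripts in bin/r, after setting
    # the paths configured at the top of that script (name is hypothetical)
    Rscript bin/r/analyze-results.r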