    The MooBench Monitoring Overhead Micro-Benchmark
    ------------------------------------------------------------------------
    
    Website: http://kieker-monitoring.net/MooBench
    Contact: moobench@notme.de
    
    The MooBench micro-benchmarks can be used to quantify the performance 
    overhead caused by monitoring framework components. 
    
    Currently, the (directly) supported monitoring frameworks include:
    * Kieker (http://kieker-monitoring.net)
    * inspectIT (http://inspectit.eu/)
    * SPASS-meter (https://github.com/SSEHUB/spassMeter.git)
    
    An ant script (build.xml) is provided to prepare the benchmark for the
    respective monitoring framework. Corresponding build targets, providing
    preconfigured builds for each supported framework, are available.
    For instance, the target "build-kieker" prepares a jar for Kieker 
    benchmarking experiments.
    
    The relevant build targets are:
    * build-all (framework-independent benchmark)
    * build-kieker (Kieker)
    * build-inspectit (inspectIT)
    * build-spassmeter (SPASS-meter)
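
    For instance, assuming ant is available on the PATH and build.xml is in
    the current directory, the jar for Kieker benchmarking experiments can
    be prepared with:
    $ ant build-kieker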
    
    All experiments are started with the provided "External Controller"
    scripts. These scripts are available inside the respective bin/ 
    directory. Currently, only shell (.sh) scripts are provided. These 
    scripts have been developed on Solaris environments. Thus, minor
    adjustments might be required for common Linux operating systems,
    such as Ubuntu. Additionally, several Eclipse launch targets are 
    provided for debugging purposes.
    
    The default execution of the benchmark requires a 64-bit JVM!
    However, this behavior can be changed in the respective .sh scripts.
    
    Initially, the following steps are required:
    1. Check whether ant (http://ant.apache.org/) is installed, since the
       execution of all examples described in this README is based on the
       run targets in the ant file build.xml.
    2. Make sure that you have installed R (http://www.r-project.org/) to
       generate the results (example checks for both tools follow below).
    3. Compile the application by calling ant with the appropriate build 
       target.
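
    For example, the presence of both tools can be verified on the command
    line before building (step 3 then uses one of the build targets listed
    above):
    $ ant -version
    $ R --version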
    
    Execution of the micro-benchmark:
    All benchmarks are started by calling the .sh scripts in the bin folder.
    The top of each script includes some configuration parameters (see the
    sketch after this list), such as
    * SLEEPTIME           between executions (default 30 seconds)
    * NUM_LOOPS           number of repetitions (default 10)
    * THREADS             concurrent benchmarking threads (default 1)
    * MAXRECURSIONDEPTH   recursion up to this depth (default 10)
    * TOTALCALLS          the duration of the benchmark (default 2,000,000 calls)
    * METHODTIME          the time per monitored call (default 0 ns or 500 us)
    
    Furthermore, some JVM arguments can be adjusted:
    * JAVAARGS            JVM Arguments (e.g., available memory)
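
    As a rough sketch (the variable names are taken from this README, while
    the exact layout and the values shown are only illustrative), the head
    of such a script might look like this:
    SLEEPTIME=30              # sleep time between executions (seconds)
    NUM_LOOPS=10              # number of repetitions
    THREADS=1                 # concurrent benchmarking threads
    MAXRECURSIONDEPTH=10      # recursion up to this depth
    TOTALCALLS=2000000        # total number of monitored calls per run
    METHODTIME=0              # time per monitored call (ns)
    JAVAARGS="-Xms1G -Xmx4G"  # JVM arguments, e.g., available memory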
    
    Typical call (using Solaris):
    $ nohup ./benchmark.sh & sleep 1;tail +0cf nohup.out
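
    On a common Linux system, the Solaris-specific tail invocation may need
    a minor adjustment, for instance (a sketch using GNU tail, which follows
    the output from the beginning of the file):
    $ nohup ./benchmark.sh & sleep 1; tail -n +1 -f nohup.out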
    
    
    Analyzing the data:
    ===================
    The folder bin/r provides some R scripts that generate graphs to
    visualize the results. At the top of these files, one can configure the
    required paths and the configuration used to analyze the data.
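
    The analysis scripts can typically be run non-interactively with
    Rscript; the script name below is only a placeholder for one of the
    files in bin/r:
    $ Rscript bin/r/<analysis-script>.r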
    
    
    (Outdated) Documentation of additional experiments:
    ===================================================
    
    Different recursion depths (MAXRECURSIONDEPTH=1 means no recursion)
    -> bin/run-benchmark-recursive.sh
    
    To check for a linear rise in monitoring overhead, this benchmark 
    increases the recursion depth up to 2^MAXRECURSIONDEPTH in logarithmic 
    steps
    -> bin/run-benchmark-recursive-linear.sh
    
    Benchmarking the JMX-writer
    -> bin/run-benchmark-recursive-jmx.sh
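
    These additional experiment scripts are presumably launched in the same
    way as the typical call above, for instance (a sketch):
    $ nohup bin/run-benchmark-recursive.sh & sleep 1; tail -n +1 -f nohup.out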
    
    The experiments run-cycle*.sh and the files they use,
    run-benchmark-cycle-*.sh, currently only support Solaris
    environments and require pfexec permissions to assign subsets of cores
    to the benchmarking system.