    The MooBench Monitoring Overhead Micro-Benchmark 
    ------------------------------------------------------------------------
    
    Website: http://kieker-monitoring.net/MooBench
    Contact: moobench@notme.de
    
    The MooBench micro-benchmarks can be used to quantify the performance 
    overhead caused by monitoring framework components. 
    
    Currently supported monitoring frameworks include:
    * Kieker (http://kieker-monitoring.net)
    * inspectIT (http://inspectit.eu/)
    * SPASSmeter
    
    An ant script (build.xml) is provided to prepare the benchmark for the
    respective monitoring framework. It contains a preconfigured build
    target for each supported framework. For instance, the target
    "build-kieker" prepares a jar for Kieker benchmarking experiments, as
    shown below.
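
    A typical preparation call looks as follows (the target name
    "build-kieker" is the one mentioned above; check build.xml for the
    exact target names of the other frameworks):

    $ ant build-kieker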
    
    All experiments are started with the provided "External Controller"
    scripts, which are available inside the respective bin/ directory.
    Currently, only shell (.sh) scripts are provided.

    By default, the execution of the benchmark requires a 64-bit JVM!
    This can be changed in the respective .sh scripts (see the sketch
    below).
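
    For instance, if the 64-bit requirement is enforced through a JVM flag
    in a script's JAVAARGS variable (an assumption about how the scripts
    are organized; check the actual files), dropping that flag lifts the
    requirement:

    JAVAARGS="-server -d64"    # remove -d64 to allow a 32-bit JVM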
    
    Initially, the following steps are required (see the example calls
    after this list):
    1. Make sure that ant (http://ant.apache.org/) is installed, since the
       execution of all examples described in this README is based on the
       run targets in the ant file build.xml.
    2. Make sure that R (http://www.r-project.org/) is installed; it is
       required to generate the results.
    3. Compile the application by calling ant with the appropriate build
       target.
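
    For example, to verify the installations (step 3 then uses a build
    target such as "build-kieker", as shown earlier):

    $ ant -version
    $ R --version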
    
    Execution of the micro-benchmark:
    All benchmarks are started by calling the .sh scripts in the bin
    folder. The top of each file contains configuration parameters (a
    sketch of such a block follows below), such as:
    * SLEEPTIME         sleep time between executions
    * NUM_LOOPS         number of repetitions
    * THREADS           number of concurrent benchmarking threads
    * MAXRECURSIONDEPTH recursion up to this depth
    * TOTALCALLS        total number of monitored calls (this determines
                        the duration of the benchmark)
    * METHODTIME        the time per monitored call
    Furthermore, some JVM arguments can be adjusted:
    * JAVAARGS          additional JVM arguments
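
    A hypothetical configuration block at the top of such a script (the
    variable names are taken from the list above; the values are
    illustrative only, not the shipped defaults):

    SLEEPTIME=30             # seconds to sleep between executions
    NUM_LOOPS=10             # number of repetitions
    THREADS=1                # concurrent benchmarking threads
    MAXRECURSIONDEPTH=10     # recursion up to this depth
    TOTALCALLS=2000000       # total number of monitored calls
    METHODTIME=500000        # time per monitored call
    JAVAARGS="-server"       # additional JVM arguments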
    
    Typical call:
    $ nohup ./benchmark.sh & sleep 1; tail +0cf nohup.out
    This starts the benchmark in the background, detached from the terminal
    via nohup, waits a second, and then follows the benchmark's output in
    nohup.out. (tail +0cf is the older BSD/Solaris syntax for "follow from
    the beginning of the file"; on GNU systems, tail -c +1 -f nohup.out is
    an equivalent.)
    
    Experiments (outdated):
    Benchmark with different recursion depths (MAXRECURSIONDEPTH=1 means
    no recursion)
    -> bin/run-benchmark-recursive.sh

    To check for a linear rise in monitoring overhead, this benchmark
    increases the recursion depth up to 2^MAXRECURSIONDEPTH in logarithmic
    steps, i.e., depths 1, 2, 4, ..., 2^MAXRECURSIONDEPTH (a sketch of
    this stepping follows below)
    -> bin/run-benchmark-recursive-linear.sh
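
    A minimal sketch of the logarithmic stepping (not the actual script
    contents, just an illustration of which depths are measured):

    MAXRECURSIONDEPTH=6
    DEPTH=1
    while [ "$DEPTH" -le $((1 << MAXRECURSIONDEPTH)) ]; do
        echo "running the benchmark with recursion depth $DEPTH"
        DEPTH=$((DEPTH * 2))
    done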
    
    Benchmarking the JMX-writer
    -> bin/run-benchmark-recursive-jmx.sh
    
    The experiments run-cycle*.sh and the files they use,
    run-benchmark-cycle-*.sh, currently support only Solaris environments
    and require pfexec permissions to assign subsets of cores to the
    benchmarking system (see the hypothetical example below).
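
    On Solaris, such an assignment could be done along the following lines
    (a hypothetical example based on the psrset utility, not commands
    taken from the scripts; the processor ids and the set id are made up):

    $ pfexec psrset -c 0 1                 # create a processor set of cores 0 and 1
    $ pfexec psrset -e 1 ./benchmark.sh    # run the benchmark inside set 1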
    
    Analyzing the data:
    The folder bin/r provides several R scripts to generate graphs that
    visualize the results. At the top of each file, one can configure the
    required paths and the configuration used to analyze the data.
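
    After adjusting the paths, such a script can be run non-interactively,
    for instance via Rscript (the file name below is only a placeholder;
    check bin/r for the actual script names):

    $ Rscript bin/r/<script>.r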