Commit 6f944c31 authored by Reiner Jung's avatar Reiner Jung
Updated READMEs and tooling.

parent 399781bf
......@@ -20,7 +20,7 @@ Directory structure:
- MITgcm
- UVic
- SWM
- SWM (not yet available)
The corresponding subdirectories of the replication package contain detailed
instructions for each step needed to set up and execute the experiments.
......@@ -80,7 +80,8 @@ The envisioned setup is:
- kieker-lang-pack-c (git repo)
- oceandsl-tools (git repo)
- experiments
- oceandsl-tools
- oceandsl-tools (created when following the instructions)
- data
We will refer to the `replication` directory as `${REPLICATION_DIR}` in the
documentation.
......@@ -90,20 +91,21 @@ directories as follows.
```
mkdir replication
cd replication
mkdir install experiments
mkdir install experiments data
```
Then move the `esm-architecture-analysis-replication-package` directory into
the replication package.
Then move the `esm-architecture-analysis-replication-package` (replication package)
directory into the `${REPLICATION_DIR}`.
### Installing Java
You need to install a Java runtime (at least Java 11 when you use the pre
packaged tools or Java 8 when you build the code yourself).
You need to install a Java runtime (at least Java 11). Follow the instructions
for your respective operating system. Most Linux distributions provide suitable
Java runtimes and SDKs.
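To verify that the installed runtime is recent enough, the major version can be parsed out of the `java -version` output. The following is a minimal sketch assuming a POSIX shell with `sed`; the helper name `java_major` is hypothetical:

```shell
# Sketch: extract the major version from a `java -version` style string.
java_major() {
  v=$(printf '%s\n' "$1" | sed -n 's/.*version "\([0-9][0-9]*\)\..*/\1/p')
  # Pre-Java-9 runtimes report as 1.x; map e.g. "1.8" to 8.
  [ "$v" = "1" ] && v=$(printf '%s\n' "$1" | sed -n 's/.*version "1\.\([0-9]\).*/\1/p')
  printf '%s\n' "$v"
}
java_major 'openjdk version "11.0.20" 2023-07-18'
```

In practice you would feed it the real output, e.g. `java_major "$(java -version 2>&1)"`, and check the result is at least 11.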
### OceanDSL-Tools
Installing OceanDSL-Tools from the *archive*:
Installing OceanDSL-Tools from the *archive* in the **replication package**:
```
tar -xvpf oceandsl-tools.tgz
......@@ -124,7 +126,7 @@ cd oceandsl-tools
./gradlew build
./assemble-tools.sh
cd ${REPLICATION_DIR}
tar -xvpf ${REPLICATION_DIR}/install/oceandsl-tools/build/oceandsl-tools.tgz
tar -xzpf ${REPLICATION_DIR}/install/oceandsl-tools/build/oceandsl-tools.tgz
```
### Kieker Monitoring
......@@ -159,8 +161,8 @@ Then compile and install the monitoring probes.
You may also call `make install`. However, this may require admin
privileges.
In case you intend to install the Kieker library in a different location than
/usr/local you must specify a suitable path with the configure call, e.g.,
In case you intend to install the Kieker library in a different location than
`/usr/local`, you must specify a suitable path with the configure call, e.g.,
```
./configure --prefix=${REPLICATION_DIR}/kieker
make
......@@ -180,15 +182,42 @@ Depending on the version this will produce a directory named `collector-2.0.0-SN
Please rename this to `collector`.
Additional information on the collector and how to use it to collect
monitoring data, you may find on the Kieker documentation page.
monitoring data can be found on the Kieker documentation page.
https://kieker-monitoring.readthedocs.io/en/latest/kieker-tools/Collector---Kieker-Data-Bridge.html#kieker-tools-collector
`https://kieker-monitoring.readthedocs.io/en/latest/kieker-tools/Collector---Kieker-Data-Bridge.html#kieker-tools-collector`
In case the *collector* does not work, e.g., due to incompatibilities, you can
download the Kieker tools from
`https://github.com/kieker-monitoring/kieker/releases/` in the tools or binary package.
### Install additional tooling
Install binutils which include `addr2line` with
Install binutils, which includes `addr2line`. On Debian-based Linux distributions, this
is done with
`sudo apt install binutils -y`
On a non-Debian distribution, install the corresponding package instead.
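Before running the analysis scripts, it can be useful to fail early if a required tool is missing from the `PATH`. This is a hypothetical helper, not part of the replication package:

```shell
# Hypothetical helper: check that a required tool (e.g. addr2line) is on PATH.
require_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1 - please install the corresponding package" >&2
    return 1
  fi
}
require_tool sh
```

Call it as `require_tool addr2line` after installing binutils.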
### Install fxtran
You can find `fxtran` in our fork at `https://github.com/OceanDSL/fxtran`, or
a more recent version in the original project.
To install `fxtran` which is required for the static architecture recovery, go to
the `${REPLICATION_DIR}/install` directory and clone the repository
```
cd ${REPLICATION_DIR}/install
git clone https://github.com/OceanDSL/fxtran
```
Compile the tool with
```
cd fxtran
make
cp bin/fxtran "${REPLICATION_DIR}"
```
# General Configuration and Setup
There are some configuration parameters relevant for all models and versions.
These are configured in this directory.
To configure the general parameters create a copy of `config.template`
`cp config.template config`
and edit the file accordingly.
For a simple replication of the experiments, you only need to copy the file
and enter the fully qualified path of the replication directory in the
config file.
```
export REPLICATION_DIR= +++ SETUP HERE +++
```
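The placeholder can also be filled in non-interactively. A sketch, assuming GNU `sed` (the `-i` in-place flag) and demonstrated on a temporary copy rather than the real `config`:

```shell
# Sketch: fill in REPLICATION_DIR without opening an editor (GNU sed assumed).
cfg=$(mktemp)
printf 'export REPLICATION_DIR= +++ SETUP HERE +++\n' > "$cfg"
sed -i "s|^export REPLICATION_DIR=.*|export REPLICATION_DIR=\"$PWD\"|" "$cfg"
cat "$cfg"
```

For the real setup, run the `sed` line against your `config` file from within `${REPLICATION_DIR}`.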
In case you also want to recover the architecture of different revisions,
you have to use a different setup for the `${DATA_PATH}`. The setup
is already in the `config` file.
Comment out the following line
```
export DATA_PATH="${REPLICATION_DIR}/data"
```
Remove the comments from these lines
```
#export DATA_PATH="${REPLICATION_DIR}/data/$REVISION"
#
#if [ ! -d "${DATA_PATH}" ] ; then
# mkdir -p "${DATA_PATH}"
#fi
```
**Note:** Before using this option, make sure you follow the instructions in the `REVISIONS-README.md`.
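With the revision-aware setup, the data path resolves to one subdirectory per revision. A sketch with hypothetical values:

```shell
# Sketch of how the revision-aware DATA_PATH resolves (hypothetical values):
REPLICATION_DIR=/tmp/replication
REVISION=checkpoint60
DATA_PATH="${REPLICATION_DIR}/data/${REVISION}"
echo "${DATA_PATH}"
```

Each revision you analyze thus gets its own, non-conflicting output directory.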
# Recovering multiple revisions of a model architecture
All scripts can be used on different revisions of a model, as they only work
on the content of the model directory. In case your model is stored in a
git repository, you can check out different revisions, releases, branches or
tags and then run the necessary recovery scripts for the respective model.
Later you may want to compare architectures with each other. This can
be done with the oceandsl-tools or their Kieker versions. For the paper, we
used the oceandsl-tools. The Kieker tools are maintained by the Kieker
project and may deviate in features and command-line parameters at some
point in the future.
## Recovering multiple revisions of a model
1. Set up the generic `config` located in the directory of this readme.
2. Follow the instructions for the experiment setup of the model you
want to analyze.
3. Before starting the scripts, go to the model's source directory, check out
   a revision and set the `REVISION` variable to the name of the git tag,
   branch or commit.
```
export REVISION=my-revision
```
4. Run the recovery and analysis scripts as you like. This will produce
   all the necessary output in the `${DATA_PATH}` for the respective model
   and revision.
5. Repeat this with as many revisions as you like.
You can also automate this. Here is an example for MITgcm and the `tutorial_barotropic_gyre`
model:
```
export SCRIPTS_DIR="${REPLICATION_DIR}/esm-architecture-analysis-replication-package/models/mitgcm"
cd "${REPLICATION_DIR}"
for REVISION in checkpoint60 checkpoint61a checkpoint61t ; do
  cd "${REPLICATION_DIR}/experiments/mitgcm/MITgcm"
  git clean -f -d
  git checkout "${REVISION}"
  "${SCRIPTS_DIR}/run-static-code-processing.sh" tutorial_barotropic_gyre
  # Instead of call you can use dataflow or both as parameters
  "${SCRIPTS_DIR}/run-static-analysis.sh" tutorial_barotropic_gyre call
done
```
## Comparing multiple models
Assuming you followed the above tutorial, you have in the
`${REPLICATION_DIR}/data/mitgcm` directory multiple subdirectories
named `checkpoint60`, `checkpoint61a`, and `checkpoint61t`.
They contain models with the same name, but different content. To compare them
and use the coloring features of the Kieker Development Tools, we have to mark modules
in the models, merge them, and tell the Kieker Development Tools how to color them.
The Kieker Development Tools, together with the necessary installation
instructions, can be found here:
`https://kieker-monitoring.readthedocs.io/en/latest/kieker-tools/IRL-Tool.html#kieker-tools-irl`
For the comparison, we have to perform the following steps:
1. To keep the paths to the tools short, you may set this variable or add the path to your `PATH` variable.
```
export TOOLS_DIR="${REPLICATION_DIR}/oceandsl-tools/bin"
```
2. Add labels to the models to mark every content element.
```
${TOOLS_DIR}/relabel -e demo-c60 -i "${REPLICATION_DIR}/data/mitgcm/checkpoint60" -o c60 -r static-call:static-call,c60
${TOOLS_DIR}/relabel -e demo-c61t -i "${REPLICATION_DIR}/data/mitgcm/checkpoint61t" -o c61t -r static-call:static-call,c61t
```
This adds revision information to all elements in the architecture.
3. Merge models
```
${TOOLS_DIR}/mop -e demo-c60-c61t -i c60 c61t -o c60-c61t merge
```
4. Add a coloring profile for the result model, named `color-model.map`, with the
following content:
```
component: c61t=#a0ffa0, #90f090
operation: c61t=#d0ffd0, #c0ffc0
component: c60=#a0a0ff, #9090f0
operation: c60=#d0d0ff, #c0c0ff
component: c61t, c60=#ffff00, #d0d000
operation: c61t, c60=#f0f000, #e0e000
```
This map will color modules and operations of the c61t (checkpoint61t) model in green, of
the c60 (checkpoint60) model in blue, and modules and operations that appear in both models in yellow.
Alternatively, you can generate graphics with the `mvis` tool
`https://kieker-monitoring.readthedocs.io/en/latest/kieker-tools/mvis.html#kieker-tools-mvis`
on the command line using different selection schemes. See the documentation for
details or just try out the different selectors.
"<runtime>","abs",1,fixed
"<runtime>","allocated",1,fixed
"<runtime>","aint",1,fixed
"<runtime>","anint",1,fixed
"<runtime>","acos",1,fixed
"<runtime>","asin",1,fixed
"<runtime>","atan",1,fixed
"<runtime>","atan2",2,fixed
"<runtime>","cbrt",1,fixed
"<runtime>","char",1,fixed
"<runtime>","cloc",1,fixed
"<runtime>","conjg",1,fixed
"<runtime>","cos",1,fixed
"<runtime>","cosh",1,fixed
"<runtime>","dble",1,fixed
"<runtime>","dfloat",1,fixed
"<runtime>","dim",1,fixed
"<runtime>","erf",1,fixed
"<runtime>","etime",2,fixed
"<runtime>","exp",1,fixed
"<runtime>","fdate",1,fixed
"<runtime>","float",1,fixed
"<runtime>","flush",1,fixed
"<runtime>","getenv",2,fixed
"<runtime>","iabs",1,fixed
"<runtime>","imag",1,fixed
"<runtime>","ichar",1,fixed
"<runtime>","ifix",1,fixed
"<runtime>","index",2,variable
"<runtime>","int",1,fixed
"<runtime>","ioerrorcount",1,fixed
"<runtime>","len",1,fixed
"<runtime>","len_trim",1,fixed
"<runtime>","log",1,fixed
"<runtime>","alog",1,fixed
"<runtime>","dlog",1,fixed
"<runtime>","clog",1,fixed
"<runtime>","zlog",1,fixed
"<runtime>","cdlog",1,fixed
"<runtime>","log10",1,fixed
"<runtime>","alog10",1,fixed
"<runtime>","dlog10",1,fixed
"<runtime>","max",2,variable
"<runtime>","max0",2,variable
"<runtime>","amax0",2,variable
"<runtime>","max1",2,variable
"<runtime>","amax1",2,variable
"<runtime>","dmax1",2,variable
"<runtime>","min",2,variable
"<runtime>","min0",2,variable
"<runtime>","amin0",2,variable
"<runtime>","min1",2,variable
"<runtime>","amin1",2,variable
"<runtime>","dmin1",2,variable
"<runtime>","mod",2,fixed
"<runtime>","nint",1,fixed
"<runtime>","real",2,fixed
"<runtime>","setrlstk",1,fixed
"<runtime>","sign",2,fixed
"<runtime>","sigreg",1,fixed
"<runtime>","sin",1,fixed
"<runtime>","sinh",1,fixed
"<runtime>","sqrt",1,fixed
"<runtime>","system",1,variable
"<runtime>","tan",1,fixed
"<runtime>","tanh",1,fixed
"<runtime>","timernames",1,fixed
"<runtime>","trim",1,fixed
\ No newline at end of file
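The map above is plain CSV in the shape `"module","function",arity,arity-kind`. How such an entry decomposes can be sketched with a small, hypothetical shell helper (not part of the package):

```shell
# Sketch: decompose one entry of the builtin-functions map.
parse_builtin() {
  printf '%s\n' "$1" | sed 's/"//g' | awk -F, '{ printf "%s takes %s args (%s)\n", $2, $3, $4 }'
}
parse_builtin '"<runtime>","atan2",2,fixed'
```

The `fixed`/`variable` column states whether the argument count is exact or a minimum.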
......@@ -72,46 +72,48 @@ checkDirectory "Static iface file model" "${IFACE_STATIC_FILE_MODEL}"
checkDirectory "Static iface map model" "${IFACE_STATIC_MAP_MODEL}"
checkDirectory "Static iface 2-level model" "${IFACE_STATIC_2_LEVEL_MODEL}"
checkDirectory "Dynamic file model" "${DYNAMIC_FILE_MODEL}"
checkDirectory "Dynamic map model" "${DYNAMIC_MAP_MODEL}"
checkDirectory "Dynamic 2-level model" "${DYNAMIC_2_LEVEL_MODEL}"
#checkDirectory "Dynamic file model" "${DYNAMIC_FILE_MODEL}"
#checkDirectory "Dynamic map model" "${DYNAMIC_MAP_MODEL}"
#checkDirectory "Dynamic 2-level model" "${DYNAMIC_2_LEVEL_MODEL}"
checkDirectory "Dynamic iface file model" "${IFACE_DYNAMIC_FILE_MODEL}"
checkDirectory "Dynamic iface map model" "${IFACE_DYNAMIC_MAP_MODEL}"
checkDirectory "Dynamic iface 2-level model" "${IFACE_DYNAMIC_2_LEVEL_MODEL}"
#checkDirectory "Dynamic iface file model" "${IFACE_DYNAMIC_FILE_MODEL}"
#checkDirectory "Dynamic iface map model" "${IFACE_DYNAMIC_MAP_MODEL}"
#checkDirectory "Dynamic iface 2-level model" "${IFACE_DYNAMIC_2_LEVEL_MODEL}"
checkDirectory "Combined file model" "${COMBINED_FILE_MODEL}"
checkDirectory "Combined map model" "${COMBINED_MAP_MODEL}"
checkDirectory "Combined 2-level model" "${COMBINED_2_LEVEL_MODEL}"
#checkDirectory "Combined file model" "${COMBINED_FILE_MODEL}"
#checkDirectory "Combined map model" "${COMBINED_MAP_MODEL}"
#checkDirectory "Combined 2-level model" "${COMBINED_2_LEVEL_MODEL}"
checkDirectory "Combined iface file model" "${IFACE_COMBINED_FILE_MODEL}"
checkDirectory "Combined iface map model" "${IFACE_COMBINED_MAP_MODEL}"
checkDirectory "Combined iface 2-level model" "${IFACE_COMBINED_2_LEVEL_MODEL}"
#checkDirectory "Combined iface file model" "${IFACE_COMBINED_FILE_MODEL}"
#checkDirectory "Combined iface map model" "${IFACE_COMBINED_MAP_MODEL}"
#checkDirectory "Combined iface 2-level model" "${IFACE_COMBINED_2_LEVEL_MODEL}"
# check outputs
# ${DYNAMIC_FILE_MODEL}
# ${DYNAMIC_MAP_MODEL}
# ${DYNAMIC_2_LEVEL_MODEL}
# ${IFACE_DYNAMIC_FILE_MODEL}
# ${IFACE_DYNAMIC_MAP_MODEL}
# ${IFACE_DYNAMIC_2_LEVEL_MODEL}
# ${COMBINED_FILE_MODEL}
# ${COMBINED_MAP_MODEL}
# ${COMBINED_2_LEVEL_MODEL}
# ${IFACE_COMBINED_FILE_MODEL}
# ${IFACE_COMBINED_MAP_MODEL}
# ${IFACE_COMBINED_2_LEVEL_MODEL}
# run
TEMPFILE=$(mktemp)
cat << EOF > "${TEMPFILE}"
${DYNAMIC_FILE_MODEL}
${DYNAMIC_MAP_MODEL}
${DYNAMIC_2_LEVEL_MODEL}
${IFACE_DYNAMIC_FILE_MODEL}
${IFACE_DYNAMIC_MAP_MODEL}
${IFACE_DYNAMIC_2_LEVEL_MODEL}
${STATIC_FILE_MODEL}
${STATIC_MAP_MODEL}
${STATIC_2_LEVEL_MODEL}
${IFACE_STATIC_FILE_MODEL}
${IFACE_STATIC_MAP_MODEL}
${IFACE_STATIC_2_LEVEL_MODEL}
${COMBINED_FILE_MODEL}
${COMBINED_MAP_MODEL}
${COMBINED_2_LEVEL_MODEL}
${IFACE_COMBINED_FILE_MODEL}
${IFACE_COMBINED_MAP_MODEL}
${IFACE_COMBINED_2_LEVEL_MODEL}
EOF
information "Compute file level statistics"
......
# Main replication directory
export REPLICATION_DIR="/home/reiner/temp/experiment/experiments"
export REPLICATION_DIR= +++ SETUP HERE +++
# Library path including Kieker libraries
export KIEKER_LIBRARY_PATH="${REPLICATION_DIR}/../install/km/lib/"
export KIEKER_LIBRARY_PATH="${REPLICATION_DIR}/kieker/lib/"
# Location for dynamic and static data
export DATA_PATH="/home/reiner/Projects/OceanDSL/architecture-recovery-and-optimization-data"
export DATA_PATH="${REPLICATION_DIR}/data"
# Alternative setup to analyze different revisions of a model
#export DATA_PATH="${REPLICATION_DIR}/data/$REVISION"
#
#if [ ! -d "${DATA_PATH}" ] ; then
# mkdir -p "${DATA_PATH}"
#fi
# List of external functions
export EXTERNAL_FUNCTIONS_MAP="${REPLICATION_DIR}/builtin-functions.csv"
export STATIC_AUX_MODULE_MAP="${REPLICATION_DIR}/uvic-aux-map-file.csv"
export TOOL_DIR="/home/reiner/temp/experiment/install"
export TOOL_DIR="${REPLICATION_DIR}"
# Data directory for results from the optimization
OPTIMIZATION_DATA="/home/reiner/Projects/OceanDSL/restructuring-results"
......
......@@ -31,6 +31,8 @@ another structure, please adapt directories accordingly.
Please note that the ${REPLICATION_DIR} is the main directory of the whole
setup and ${SCRIPTS_DIR} is the directory of this README.md file, e.g.,
`${REPLICATION_DIR}/esm-architecture-analysis-replication-package/models/mitgcm`
and the ${GLOBAL_SCRIPTS_DIR} is the directory one level up, e.g.,
`${REPLICATION_DIR}/esm-architecture-analysis-replication-package/models`.
Create a workspace directory for the analysis, e.g., `experiments/mitgcm`, and
switch to this directory.
......@@ -89,64 +91,80 @@ cd ../..
## Setting up the Experiments
Let's assume you are in the `${SCRIPTS_DIR}` directory.
Switch to the `${GLOBAL_SCRIPTS_DIR}` directory.
Here you need to create a `config` file. You can use the `config.template` that
resides alongside this `README.md`-file.
Here you need to create a `config` file in the `${GLOBAL_SCRIPTS_DIR}`.
You can use the `config.template` in the same directory as a template.
Then the configuration file should look like this:
```
# PREFIX for the mitgcm model variants
PREFIX="${REPLICATION_DIR}/experiments/mitgcm/MITgcm/verification"
# Main replication directory
export REPLICATION_DIR="/home/user/replication"
# Library path including Kieker libraries
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:${REPLICATION_DIR}/kieker/lib/"
# Compile configuration for Kieker
export CONFIGURATION="${REPLICATION_DIR}/experiments/mitgcm/linux_amd64_gfortran_kieker"
export KIEKER_LIBRARY_PATH="${REPLICATION_DIR}/../kieker/lib/"
# Location for dynamic and static data
export DYNAMIC_DATA_PATH="${REPLICATION_DIR}/experiments/mitgcm/dynamic-data"
export STATIC_DATA_PATH="${REPLICATION_DIR}/experiments/mitgcm/static-data"
export DATA_PATH="${REPLICATION_DIR}/data"
# List of external functions
export EXTERNAL_FUNCTIONS_MAP="${REPLICATION_DIR}/builtin-functions.csv"
export STATIC_AUX_MODULE_MAP="${REPLICATION_DIR}/uvic-aux-map-file.csv"
export TOOL_DIR="${REPLICATION_DIR}"
# Data directory for results from the optimization
OPTIMIZATION_DATA="/home/user/restructuring-results"
DAR="${REPLICATION_DIR}/oceandsl-tools/bin/dar"
SAR="${REPLICATION_DIR}/oceandsl-tools/bin/sar"
MAA="${REPLICATION_DIR}/oceandsl-tools/bin/maa"
MOP="${REPLICATION_DIR}/oceandsl-tools/bin/mop"
MVIS="${REPLICATION_DIR}/oceandsl-tools/bin/mvis"
DAR="${TOOL_DIR}/oceandsl-tools/bin/dar"
SAR="${TOOL_DIR}/oceandsl-tools/bin/sar"
MAA="${TOOL_DIR}/oceandsl-tools/bin/maa"
MOP="${TOOL_DIR}/oceandsl-tools/bin/mop"
MVIS="${TOOL_DIR}/oceandsl-tools/bin/mvis"
RELABEL="${TOOL_DIR}/oceandsl-tools/bin/relabel"
FXCA="${TOOL_DIR}/oceandsl-tools/bin/fxca"
FXTRAN="${TOOL_DIR}/fxtran"
RESTRUCTURING="${TOOL_DIR}/oceandsl-tools/bin/restructuring"
DELTA="${TOOL_DIR}/oceandsl-tools/bin/delta"
MKTABLE="${TOOL_DIR}/oceandsl-tools/bin/mktable"
# collector tool
COLLECTOR="${REPLICATION_DIR}/collector/bin/collector"
COLLECTOR="${TOOL_DIR}/collector/bin/collector"
# addr2line
ADDR2LINE=$(command -v addr2line)
# Path to the executable
EXECUTABLE="${REPLICATION_DIR}/experiments/mitgcm/MITgcm/verification/$NAME/build/mitgcmuv"
# Dynamic and static prefix
DYNAMIC_PREFIX="$PREFIX/$NAME/build/"
STATIC_PREFIX="/home/hschnoor/eclipse-workspace/PlayPython/resources/preprocessed/MITgcm-$NAME/"
# Hostname where the dynamic analysis was executed
HOST=lisboa
HOST=glasgow
```
Of course the HOST variable must be changed to the name of the machine
the experiments are run.
on which the experiments are run. As this is the global configuration file, you
must also set up the second config file for mitgcm.
```
cd "${SCRIPTS_DIR}"
cp config.template config
```
Then edit the config file accordingly.
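The HOST edit can also be scripted. A sketch assuming GNU `sed` (for `-i`) and POSIX `uname -n` for the machine name, demonstrated on a temporary copy rather than the real `config`:

```shell
# Sketch: set HOST to this machine's name (GNU sed assumed; temp copy for demo).
cfg=$(mktemp)
printf 'HOST=lisboa\n' > "$cfg"            # stand-in for the copied config
sed -i "s/^HOST=.*/HOST=$(uname -n)/" "$cfg"
grep '^HOST=' "$cfg"
```

For the real setup, run the `sed` line against the `config` in `${SCRIPTS_DIR}`.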
The next file you have to create is a list of all experiments you want
to run. The file must be named `experiments`. The experiments used in the
MITgcm tutorial are listed in `tutorials`, and all other experiments are
listed in `normal`.
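For a first run, a single-entry `experiments` file is often enough. A minimal sketch:

```shell
# Create an experiments file with a single entry for a quick first run
printf '%s\n' tutorial_barotropic_gyre > experiments
cat experiments
```

Additional experiment names go one per line in the same file.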
## Dynamic Analysis
## Selecting models
**Note:** For test purposes, it is helpful to run the data collection with
one experiment only.
MITgcm comes with multiple prepared experiments located in the `verification`
subdirectory. You can run any number of these experiments following the
instructions below. Certain experiments require additional setup; information
for these experiments can be found in the respective experiment directory.
To run a single experiment you can type
## Dynamic Observations
To run a single experiment, type
```
cd ${SCRIPTS_DIR}
./run-dynamic-observation.sh tutorial_barotropic_gyre
......@@ -156,8 +174,9 @@ where `tutorial_barotropic_gyre` is the experiment to be executed.
**Note:** This will automatically create a new experiments file with
`tutorial_barotropic_gyre` as only entry.
While the script tries to setup all experiments as intended, some will
not run or even compile. These need additional setup instructions which
You can run all experiments from the `verification` directory of MITgcm.
However, some need additional setup and may not run as intended out of the
box. Additional instructions for these experiments
can be found in the respective experiment folder.
You also may want to increase the runtime of certain experiments, to
......@@ -165,18 +184,47 @@ ensure that all parts of the experiment are used. Such instructions
can also be found in the respective experiment directory and online
at `https://mitgcm.readthedocs.io/en/latest/examples/examples.html`.
## Static Analysis
## Static Code Processing
Fortran code may use built-in functions. These have to be registered
in a function map. Copy the prepared map with
```
cp "${REPLICATION_DIR}/esm-architecture-analysis-replication-package/models/builtin-functions.csv" "${REPLICATION_DIR}"
```
Run the code processing with
```
cd "${SCRIPTS_DIR}"
./run-static-code-processing.sh tutorial_barotropic_gyre
```
## Architecture Reconstruction
Let's assume you have collected the dynamic and static data for a MITgcm
experiment, e.g., `tutorial_barotropic_gyre` and you are in the
`${REPLICATION_DIR}/experiments/mitgcm` directory. Now you can run the analysis
experiment, e.g., `tutorial_barotropic_gyre`. Ensure that you are in the
`${SCRIPTS_DIR}` directory.
```
cd "${SCRIPTS_DIR}"
./run-static-analysis.sh tutorial_barotropic_gyre call
./run-dynamic-analysis.sh tutorial_barotropic_gyre
```
Instead of `call`, the static analysis also accepts `dataflow` and `both`
as parameters.
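Running all three modes can be scripted in one loop. Since the script itself is only available inside the replication package, the invocations are only echoed here as a dry run:

```shell
# Dry run: print the three analysis invocations (remove `echo` to execute them)
for MODE in call dataflow both; do
  echo ./run-static-analysis.sh tutorial_barotropic_gyre "$MODE"
done
```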
## Automation of Analysis
Instead of running the scripts above, you can automate this with
```
cd "${GLOBAL_SCRIPTS_DIR}"
./run-architecture-analysis.sh tutorial_barotropic_gyre mitgcm
```
This call runs all dynamic and static analysis steps for the specified
variant, here `tutorial_barotropic_gyre` of the model `mitgcm`.
If everything is set up properly, you will get result files for the
various analyses:
......@@ -198,11 +246,25 @@ various analyses:
- `combined-model` is similar to `dynamic-model`, but these models reflect
  the architecture after combining the dynamic and static analysis.
## Additional Information
## Run all experiments
You can use `./run-all-analysis.sh experiments` to run all experiment
architecture analyses automatically.
You can use `./run-all-analysis.sh` with an experiment list file (`*.lst`)
to run all experiment architecture analyses automatically.
We have prepared two lists
- `all-variants.lst` contains all variants of mitgcm
- `normal-variants.lst` are variants that do not require additional setup
## Visualization
You can use the `dotPic-fileConverter.sh` from the Kieker archive to
convert all dot files. This tool requires `dot` to be installed on your
machine.
Alternatively, you can use the visualization component of the Kieker
Development Tools. These are a bundle of tools used with Kieker and are
implemented as plugins for Eclipse. The Eclipse repository is
`https://maui.se.informatik.uni-kiel.de/repo/kdt/snapshot/`
# Repository prefix for the mitgcm model variants
REPOSITORY_DIR="${REPLICATION_DIR}/MITgcm"
REPOSITORY_DIR="${REPLICATION_DIR}/experiments/mitgcm/MITgcm"
# Compile configuration for Kieker
export CONFIGURATION="${REPOSITORY_DIR}/linux_amd64_gfortran_kieker"
export CONFIGURATION="${REPLICATION_DIR}/esm-architecture-analysis-replication-package/models/mitgcm/linux_amd64_gfortran_kieker"
# Source code root directory
SOURCE_CODE_PATH="${REPOSITORY_DIR}/model:${REPOSITORY_DIR}/pkg:${REPOSITORY_DIR}/optim:${REPOSITORY_DIR}/eesupp"
# Processed source code directory
......
......@@ -27,6 +27,8 @@ fi
export JAVA_OPTS="-Dlogback.configurationFile=${BASE_DIR}/../logback.xml"
export MODEL_DATA_PATH="${DATA_PATH}/mitgcm/${EXPERIMENT_NAME}"
information "MODEL $MODEL_DATA_PATH"
# inputs
checkDirectory "Static data" "${MODEL_DATA_PATH}" create
checkDirectoryList "Source directory" "${SOURCE_CODE_PATH}"
......