Adding documentation and changing website layout
Signed-off-by: Akshaya Jagannadharao <[email protected]>
ajagann authored and Jagaskak committed Aug 10, 2018
1 parent 75cd103 commit c163eaf
Showing 59 changed files with 636 additions and 426 deletions.
2 changes: 0 additions & 2 deletions _config.yml

This file was deleted.

249 changes: 45 additions & 204 deletions docs/README.md
@@ -1,224 +1,65 @@
What is this software?
----------------------

This is the MPI Testing Tool (MTT) software package. It is a
standalone tool for testing the correctness and performance of
arbitrary MPI implementations.

The MTT is an attempt to create a single tool to download and build a
variety of different MPI implementations, and then compile and run any
number of test suites against each of the MPI installations, storing
the results in a back-end database that then becomes available for
historical data mining. The test suites can be for both correctness
and performance analysis (e.g., tests such as nightly snapshot compile
results as well as the latency of MPI_SEND can be historically
archived with this tool).

The MTT provides the glue to obtain and install MPI installations
(e.g., download and compile/build source distributions such as nightly
snapshots, or copy/install binary distributions, or utilize an
already-existing MPI installation), and then obtain, compile, and run
the tests. Results of each phase are submitted to a centralized
PostgreSQL database via HTTP/HTTPS. Simply put, MTT is a common
infrastructure that can be distributed to many different sites in
order to run a common set of tests against a group of MPI
implementations that all feed into a common PostgreSQL database of
results.

The MTT client is written almost entirely in Perl; the MTT server side
is written almost entirely in PHP and relies on a back-end PostgreSQL
database.

The main (loose) requirements that we had for the MTT are:

- Use a back-end database / archival system.
- Ability to obtain arbitrary MPI implementations from a variety of
sources (web/FTP download, filesystem copy, Subversion export,
etc.).
- Ability to install the obtained MPI implementations, regardless of
whether they are source or binary distributions. For source
distributions, include the ability to compile each MPI
implementation in a variety of different ways (e.g., with different
compilers and/or compile flags).
- Ability to obtain arbitrary test suites from a variety of sources
(web/FTP download, filesystem copy, Subversion export, etc.).
- Ability to build each of the obtained test suites against each of
the MPI implementation installations (e.g., for source MPI
distributions, there may be more than one installation).
- Ability to run each of the built test suites in a variety of
different ways (e.g., with a set of different run-time options).
- Ability to record the output from each of the steps above and
securely submit them to a centralized database.
- Ability to run the entire test process in a completely automated
fashion (e.g., via cron).
- Ability to run each of the steps above on physically different
machines. For example, some sites may require running the
obtain/download steps on machines that have general internet access,
running the compile/install steps on dedicated compile servers,
running the MPI tests on dedicated parallel resources, and then
running the final submit steps on machines that have general
internet access.
- Use a component-based system (i.e., plugins) for the above steps so
that extending the system to download (for example) a new MPI
implementation is simply a matter of writing a new module with a
well-defined interface.
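The plugin idea in the last requirement can be sketched as follows. This is a hypothetical interface for illustration only; MTT's actual module API differs, and the class and script names here are invented:

```python
from abc import ABC, abstractmethod

# Hypothetical plugin interface; MTT's real module API is different.
class MpiGet(ABC):
    @abstractmethod
    def fetch(self, dest: str) -> str:
        """Obtain an MPI distribution; return the local path."""

class FilesystemCopy(MpiGet):
    """One concrete 'obtain' strategy: copy from a local path."""
    def __init__(self, src: str):
        self.src = src

    def fetch(self, dest: str) -> str:
        # A real module would copy self.src into dest here.
        return f"{dest}/{self.src.rsplit('/', 1)[-1]}"

# Supporting a new source is just another subclass with the same interface.
module = FilesystemCopy("/archive/openmpi-nightly.tar.gz")
print(module.fetch("/tmp/mtt-scratch"))
```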


How to cite this software
-------------------------
Hursey J., Mallove E., Squyres J.M., Lumsdaine A. (2007) An Extensible
Framework for Distributed Testing of MPI Implementations. In Recent
Advances in Parallel Virtual Machine and Message Passing Interface.
EuroPVM/MPI 2007. Lecture Notes in Computer Science, vol 4757. Springer,
Berlin, Heidelberg.
https://doi.org/10.1007/978-3-540-75416-9_15


Overview
--------

The MTT divides its execution into six phases:

1. MPI get: obtain MPI software package(s) (e.g., download, copy)
2. MPI install: install the MPI software package(s) obtained in phase 1.
This may involve a binary installation or a build from source.
3. Test get: obtain MPI test(s)
4. Test build: build the test(s) against all MPI installations
installed in phase 2.
5. Test run: run all the tests built in phase 4.
6. Report: report the results of phases 2, 4, and 5.

The phases are divided in order to allow a multiplicative effect. For
example, each MPI package obtained in phase 1 may be installed in
multiple different ways in phase 2. Tests that are built in phase 4
may be run multiple different ways in phase 5. And so on.

This multiplicative effect allows testing many different code paths
through MPI even with a small number of actual tests. For example,
the Open MPI Project uses the MTT for nightly regression testing.
Even with only several hundred MPI test source codes, Open MPI is
tested against a variety of different compilers, networks, number of
processes, and other run-time tunable options. A typical night of
testing yields around 150,000 Open MPI tests.
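The multiplicative effect can be sketched in a few lines of Python (illustrative only; the package, compiler, suite, and run-configuration names are hypothetical, not MTT configuration values):

```python
from itertools import product

# Hypothetical inputs for each phase; names are illustrative only.
mpi_packages = ["ompi-nightly", "mpich-stable"]   # phase 1: MPI get
compilers = ["gcc", "intel"]                      # phase 2: one install per compiler
test_suites = ["intel-tests", "ibm-tests"]        # phases 3-4: test get/build
run_configs = ["np=2", "np=16", "np=64"]          # phase 5: run variations

# Each phase multiplies the outputs of the previous one.
installs = list(product(mpi_packages, compilers))
runs = list(product(installs, test_suites, run_configs))

print(len(installs))  # 2 packages x 2 compilers = 4 installations
print(len(runs))      # 4 installs x 2 suites x 3 run configs = 24 test runs
```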


Quick start
-----------

Testers run the MTT client on their systems to do all the work. A
configuration file is used to specify which MPI implementations to use
and which tests to run.

The Open MPI Project uses MTT for nightly regression testing. A
sample Perl client configuration file is included in
samples/perl/ompi-core-template.ini. This template will require
customization for each site's specific requirements. It is also
suitable as an example for organizations outside of the Open MPI
Project.

Open MPI members should visit the MTT wiki for instructions on how to
set up nightly regression testing:

https://github.com/open-mpi/mtt/wiki/OMPITesting

The MTT client requires a few Perl packages to be installed locally,
such as LWP::UserAgent. Currently, the best way to determine whether
you have all the required packages is simply to try running the
client and see if it fails due to any missing packages.

Note that the INI file can be used to specify web proxies if
necessary. See comments in the ompi-core-template.ini file for
details.
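As a rough illustration of the INI format's shape (the section and key names below are hypothetical and do not reproduce the actual schema of ompi-core-template.ini), such files can be inspected with Python's standard configparser:

```python
import configparser

# Hypothetical MTT-style INI fragment; section and key names are
# illustrative assumptions, not the real template's schema.
ini_text = """
[MTT]
trial = 1

[MPI get: ompi-nightly]
module = Download
url = https://example.org/openmpi-nightly.tar.gz
"""

cfg = configparser.ConfigParser()
cfg.read_string(ini_text)

# Phase sections follow a "<phase name>: <label>" naming pattern.
print(cfg.sections())
print(cfg["MPI get: ompi-nightly"]["module"])
```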


Running the MTT Perl client
---------------------------

Having run the MTT client across several organizations within the Open
MPI Project for quite a while, we have learned that even with common
goals (such as Open MPI nightly regression testing), MTT tends to be
used quite differently at each site. The command-line client was
designed to allow a high degree of flexibility for site-specific
requirements.

The MTT client has many command line options; see the following for a
full list:

$ client/mtt --help

Some sites add an upper layer of logic/scripting above the invocation
of the MTT client. For example, some sites run the MTT on
SLURM-maintained clusters. A variety of compilers are tested,
yielding multiple unique (MPI get, MPI install, Test get, Test build)
tuples. Each tuple is run in its own 1-node SLURM allocation,
allowing the many installations/builds to run in parallel. When the
install/build tuple has completed, more SLURM jobs are queued for each
desired number of nodes/processes to test. These jobs all execute in
parallel (pending resource availability) in order to achieve maximum
utilization of the testing cluster.
Other scenarios are also possible; the above is simply one way to use
the MTT.


Current status
--------------

This tool was initially developed by the Open MPI team for nightly and
periodic compile and regression testing. However, enough other
parties have expressed [significant] interest that we have open-sourced
the tool and are eagerly accepting input from others. Indeed, having
a common tool to help objectively evaluate MPI implementations may be
an enormous help to the High Performance Computing (HPC) community at
large.

We have no illusions of MTT becoming the be-all/end-all tool for
testing software -- we do want to keep it somewhat focused on the
needs and requirements of testing MPI implementations. As such, the
usage flow is somewhat structured toward that bias.

It should be noted that the software has been developed mostly
internally to the Open MPI project and will likely experience some
growing pains while adjusting to a larger community.


License
-------

Because we want MTT to be a valuable resource to the entire HPC
community, the MTT uses the new BSD license -- see the LICENSE file in
the MTT distribution for details.


Get involved
------------

We *want* your feedback. We *want* you to get involved.

The main web site for the MTT is:

    http://www.open-mpi.org/projects/mtt/

User-level questions and comments should generally be sent to the
users' mailing list ([email protected]). Because of spam, only
subscribers are allowed to post to this list (ensure that you
subscribe with and post from *exactly* the same e-mail address --
[email protected] is considered different than
[email protected]!). Visit this page to subscribe to the
users' list:

    https://lists.open-mpi.org/mailman/listinfo/mtt-users

Developer-level bug reports, questions, and comments should generally
be sent to the developers' mailing list ([email protected]).
Please do not post the same question to both lists. As with the
users' list, only subscribers are allowed to post to the developers'
list. Visit the following web page to subscribe:

    https://lists.open-mpi.org/mailman/listinfo/mtt-devel

When submitting bug reports to either list, be sure to include as much
extra information as possible.

Thanks for your time.


# What is this software?

This is the Middleware Testing Tool (MTT) software package. It is a
standalone tool for testing the correctness and performance of
arbitrary MPI implementations.

This website focuses on documenting the Python client. For more
documentation on the Perl client, please refer to the
[wiki pages](https://github.com/open-mpi/mtt/wiki/MTTOverview).

MTT is a single tool created to download and build a variety of
different middleware implementations, and then compile and run any
number of test suites against each of the installations, storing the
results in a back-end database that then becomes available for
historical data mining. The test suites can cover both correctness
and performance analysis (e.g., tests such as nightly snapshot
compile results as well as the latency of MPI_SEND can be
historically archived with this tool).

MTT provides the glue to obtain and install middleware installations
(e.g., download and compile/build source distributions such as
nightly snapshots, copy/install binary distributions, or use an
already-existing middleware installation), and then obtain, compile,
and run the tests.

Results of each phase are submitted to a centralized PostgreSQL
database via HTTP/HTTPS. Simply put, MTT is a common infrastructure
that can be distributed to many different sites in order to run a
common set of tests against a group of middleware implementations
that all feed into a common PostgreSQL database of results.

# Overview

MTT is divided into multiple phases of execution that split up
obtaining content, building content, running content, and reporting
results (please refer to the
[INI documentation](/mtt/docs/ini_docs.html) to learn more).

The phases are divided to allow a multiplicative effect. For example,
each middleware package obtained may be installed in multiple
different ways, and the built tests may be executed in multiple
different ways. And so on.

Phases are effectively templated to allow multiple executions of each
phase based on parameterization. For example, you can specify a
single middleware implementation but have MTT compile it against both
the GNU and Intel compilers. MTT will automatically track that there
is one middleware source but two installations of it. Every test
suite that is specified will therefore be compiled and run against
_both_ middleware installations, and their results filed accordingly.
Hence, MTT gives a multiplicative effect. A simplistic view:

- M middleware implementations are specified
- I installations of each middleware implementation are specified
- A total of (M * I) installations are created (assuming all are successful)
- T test suites are specified, each of which is compiled against each of the (M * I) middleware installations
- R different run parameters are specified for each test suite
- A total of (T * R * M * I) tests are run

Hence, be careful not to specify too much work for MTT -- it will
happily do all of it, but it may take a long, long time!

*Note:* MTT takes care of all PATH and LD_LIBRARY_PATH issues when
building and installing both middleware implementations and test
suites. There is no need for the user to set up anything special in
their shell startup files.

The following graphic is a decent representation of the relationships
of the phases to each other, and the general sequence of phases. It
shows two example middleware implementations (Open MPI and MPICH),
but any middleware implementation could be used (even multiple
versions of the same middleware implementation):

![](/mtt/assets/images/mtt-functional.png)

# Quick start

Testers run the MTT client on their systems to do all the work. A
configuration file is used to specify which middleware
implementations to use and which tests to run.

The Open MPI Project uses MTT for nightly regression testing. A
sample Python client configuration file is included in
samples/python/ompi_hello_world.ini. It is also suitable as an
example for organizations outside of the Open MPI Project.

# Nightly Regression Testing

Open MPI members should visit the
[MTT wiki](https://github.com/open-mpi/mtt/wiki/OMPITesting) for
instructions on how to set up nightly regression testing.

To configure nightly testing with Travis CI, please refer to the
[Travis CI documentation](/mtt/pages/travis.html).

# Running the MTT Python client

Having run the MTT client across several organizations within the
Open MPI Project for quite a while, we have learned that even with
common goals (such as Open MPI nightly regression testing), MTT tends
to be used quite differently at each site. The command-line client
was designed to allow a high degree of flexibility for site-specific
requirements.

The MTT client has many command-line options; run the following
command to see the full list:

```
mtt/pyclient/pymtt.py --help
```

Some sites add an upper layer of logic/scripting above the invocation
of the MTT client. For example, some sites run MTT on
SLURM-maintained clusters. A variety of compilers are tested,
yielding multiple unique (MiddlewareGet, MiddlewareBuild, TestGet,
TestBuild) tuples. Each tuple is run in its own 1-node SLURM
allocation, allowing the many installations/builds to run in
parallel. When the install/build tuple has completed, more SLURM jobs
are queued for each desired number of nodes/processes to test. These
jobs all execute in parallel (pending resource availability) in order
to achieve maximum utilization of the testing cluster.

Other scenarios are also possible; the above is simply one way to use
MTT.

# How to cite this software

Hursey J., Mallove E., Squyres J.M., Lumsdaine A. (2007) An
Extensible Framework for Distributed Testing of MPI Implementations.
In Recent Advances in Parallel Virtual Machine and Message Passing
Interface. EuroPVM/MPI 2007. Lecture Notes in Computer Science, vol
4757. Springer, Berlin, Heidelberg.
https://doi.org/10.1007/978-3-540-75416-9_15

# License

Because we want MTT to be a valuable resource to the entire HPC
community, MTT uses the new BSD license -- see the LICENSE file in
the MTT distribution for details.
2 changes: 1 addition & 1 deletion docs/_config.yml
@@ -1,4 +1,4 @@
title: MTT
description: MPI Testing Tool
description: Middleware Testing Tool
show_downloads: true
theme: jekyll-theme-leap-day
