##########################################################################
Copyright (c) 2009, ETH Zurich.
All rights reserved.

This file is distributed under the terms in the attached LICENSE file.
If you do not find this file, copies can be found by writing to:
ETH Zurich D-INFK, Haldeneggsteig 4, CH-8092 Zurich. Attn: Systems Group.
##########################################################################

Barrelfish test/benchmarking harness README

RUNNING TESTS

This set of Python modules is designed to automate the process of
building, booting and collecting/analysing the output of various benchmarks
on Barrelfish. There are currently two top-level programs:

scalebench.py -- the (poorly-named) main script to run tests and collect results
reprocess.py -- a utility built on the same backend code that allows the
  results of one or more previous runs to be re-analysed without rerunning
  the benchmarks

scalebench.py is essentially a nested loop that runs one or more tests
for one or more builds on one or more victim machines. The available
builds, machines and tests are determined by the local site and the
configured modules -- use scalebench.py -L to see a list.

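The nested-loop structure can be sketched roughly as follows. This is an illustrative outline only: the function names (run_one, scalebench) are stand-ins for the harness's real build/boot/analyse steps, not its actual API.

```python
# Illustrative sketch of scalebench.py's overall control flow; run_one is a
# hypothetical stand-in for the real build/boot/run/collect machinery.

def run_one(build, machine, test):
    # placeholder: the real harness builds, boots the machine, runs the
    # test and captures its console output here
    return (build, machine, test, "raw output")

def scalebench(builds, machines, tests):
    """Run every selected test for every build on every machine."""
    results = []
    for build in builds:            # e.g. the default build, "release"
        for machine in machines:    # e.g. "qemu1", "qemu4", real hosts
            for test in tests:      # e.g. "memtest"
                results.append(run_one(build, machine, test))
    return results

runs = scalebench(["default", "release"], ["qemu1", "qemu4"], ["memtest"])
print(len(runs))  # one result per (build, machine, test) combination
```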
Specifying builds
-----------------

Build types may be specified with the -b argument (which may be passed
multiple times). The presently-supported builds are hake's default
configuration and a "release" configuration (without assertions or debug
info). For a given build type, a build directory is automatically created
under the "build base" path, specified with -B; this allows the results of
previous builds of the same type to be reused. Alternatively, rather than
passing -b, the -e argument may be used to specify the path to an existing
(configured) build directory; this allows the user to quickly run benchmarks
against a specific set of compiled binaries with arbitrary build options.

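The per-build-type directory layout described above can be pictured with a short sketch. The naming scheme shown here (one subdirectory of the build base per build type) is an assumption for illustration; it is not necessarily what scalebench.py actually does.

```python
# Sketch of a per-build-type directory under the "build base" path (-B).
# The build_dir helper and the base/<type> layout are assumptions for
# illustration, not the harness's real implementation.
import os
import tempfile

def build_dir(build_base, build_type):
    """Return (creating if necessary) the directory for one build type."""
    path = os.path.join(build_base, build_type)
    os.makedirs(path, exist_ok=True)  # reuse an existing build of this type
    return path

base = tempfile.mkdtemp()  # stands in for the -B build base
release = build_dir(base, "release")
print(os.path.basename(release))  # release
```

Because the directory is reused when it already exists, a second run with the same build type picks up the previous build's artefacts, which is the reuse behaviour the text describes.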
Specifying machines
-------------------

One or more victim 'machines' must be specified with the -m argument.
These include, at a minimum, the machines 'qemu1' and 'qemu4', which are
simulated 1-way and 4-way CPUs. Depending on your site, various real
hosts will also be available.

Specifying tests
----------------

A large number of tests are available, and at least one must be specified
with -t. See scalebench.py -L for a list of currently-defined tests and a
short description of each. Not all tests are expected to work at any one
time, and some tests won't work on all machines (in particular qemu). You'll
have to use common sense here, or ask for help.

Note that the -b, -m and -t arguments accept shell-style glob wildcards;
this can be useful to run a set of similarly-named tests, or to try all
build types.

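The effect of these wildcards can be demonstrated with Python's fnmatch module, which implements exactly this style of shell glob; whether the harness uses fnmatch internally is an assumption, and the test names below are made up for illustration.

```python
# Shell-style glob matching of the kind the -b/-m/-t arguments accept.
# The test names are illustrative; fnmatch implements the same glob rules
# ('*' matches any run of characters, '?' matches one character).
import fnmatch

available_tests = ["memtest", "memtest_multicore", "phases", "phases_scale"]

# "-t 'memtest*'" would select every test whose name starts with "memtest":
selected = [t for t in available_tests if fnmatch.fnmatch(t, "memtest*")]
print(selected)  # ['memtest', 'memtest_multicore']
```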
Results
-------

Each test run, successful or not, produces a set of files in a result
directory, which is currently created with a unique name under a global
results directory that you must pass as the final argument to scalebench.
This directory contains some metadata describing the test run
(description.txt), the full console output from running the test (raw.txt),
and any other test-specific files produced by running the test or processing
its results. These are hopefully self-explanatory; if not, see the Python
module that defines the test for more information.

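As a sketch of that layout, a result directory could be inspected along the following lines; the directory here is a stand-in populated with the two files the text names (description.txt, raw.txt), and their contents are invented for the example.

```python
# Inspect a result directory of the layout described above: description.txt
# holds run metadata and raw.txt the console output; any further files are
# test-specific. The directory and contents below are illustrative only.
import os
import tempfile

# Create a stand-in result directory for demonstration purposes.
resultdir = tempfile.mkdtemp()
with open(os.path.join(resultdir, "description.txt"), "w") as f:
    f.write("test: memtest\nmachine: qemu1\n")
with open(os.path.join(resultdir, "raw.txt"), "w") as f:
    f.write("console output...\n")

# Print the first line of each file in the result directory.
for name in sorted(os.listdir(resultdir)):
    with open(os.path.join(resultdir, name)) as f:
        print(name, "->", f.readline().strip())
```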

INVOCATION EXAMPLES

For a quick x86_64 smoke test, try something like:

python scalebench.py -m qemu1 -t memtest -v SOURCEDIR /tmp/results


DEFINING NEW MACHINES, BUILDS, AND TESTS

This is presently undocumented :(
Please see the existing examples, or ask Andrew for help.


TODOs

 * Better support for multiple architectures.
 * Better support for processing results, plot scripts, etc.
 * Better error handling (don't blow up in a backtrace when subprograms fail)
 * Parallel tests/builds
94