regression test suite for security components.
by Michael Brouwer


GOALS
=====

The goals of this test setup are to have something that requires
zero configuration and setup and allows developers to quickly write
new standalone test cases.


USAGE
=====

The tests are runnable from the top level Makefile by typing:
    make test
or individually from the command line or with gdb.  Tests will be
built into a directory called build by default, or into LOCAL_BUILD_DIR
if you set that in your environment.


DIRECTORY LAYOUT
================

Currently there are subdirectories for a number of different parts
of the security stack.  Each directory contains some of the unit
tests I've managed to find from radar and other places.

The test programs output their results in a format called TAP.  This
is described here:
    http://search.cpan.org/~petdance/Test-Harness-2.46/lib/Test/Harness/TAP.pod
Because of this we can use Perl's Test::Harness to run the tests
and produce some nice looking output without the need to write an
entire test harness.
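
As a rough illustration, the TAP stream for a run of three test
cases where the second one fails looks like this (any extra
diagnostics appear on "#" comment lines):

    1..3
    ok 1 - first thing works
    not ok 2 - second thing works
    ok 3 - third thing works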

Tests can be written in C, C++, Objective-C, or Perl (using
Test::More).


WRITING TESTS
=============

To add a new test, simply copy one of the existing ones and hack away.
Any file with a main() function in it will be built into a test
automatically by the top level Makefile (no configuration required).

To use the testmore C "library" all you need to do is #include
"testmore.h" in your test program.

Then in your main function you must call:

plan_tests(NUMTESTS), where NUMTESTS is the number of test cases your
test program will run.  After that you can start writing tests.
There are a couple of macros to help you get started; a complete
example program follows the list below.

The following are ways to run an actual test case (as in they count
towards the NUMTESTS number above):

ok(EXPR, NAME)
    Evaluates EXPR; if it's true the test passes, if false it fails.
    The second argument is a descriptive name of the test for debugging
    purposes.

is(EXPR, VALUE, NAME)
    Evaluates EXPR; if it's equal to VALUE the test passes, otherwise
    it fails.  This is equivalent to ok(EXPR == VALUE, NAME) except
    that it produces nicer output in the failure case.
    CAVEAT: Currently EXPR and VALUE must both be of type int.

isnt(EXPR, VALUE, NAME)
    Opposite of is() above.
    CAVEAT: Currently EXPR and VALUE must both be of type int.

cmp_ok(EXPR, OP, VALUE, NAME)
    Succeeds if EXPR OP VALUE is true.  Produces a diagnostic if not.
    CAVEAT: Currently EXPR and VALUE must both be of type int.

ok_status(EXPR, NAME)
    Evaluates EXPR; if it's 0 the test passes, otherwise it prints a
    diagnostic with the name and number of the error returned.

is_status(EXPR, VALUE, NAME)
    Evaluates EXPR; if the error returned equals VALUE the test
    passes, otherwise it prints a diagnostic with the expected and
    actual error returned.

ok_unix(EXPR, NAME)
    Evaluates EXPR; if it's >= 0 the test passes, otherwise it prints
    a diagnostic with the name and number of the errno.

is_unix(EXPR, VALUE, NAME)
    Evaluates EXPR; if the errno it sets equals VALUE the test
    passes, otherwise it prints a diagnostic with the expected and
    actual errno.

Finally, if you somehow can't express the success or failure of a
test using the macros above, you can use pass(NAME) or fail(NAME)
explicitly.  These are equivalent to ok(1, NAME) and ok(0, NAME)
respectively.
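
Putting it together, here is a minimal sketch of a complete test
program built from the macros above (the checks themselves are just
placeholders):

    #include "testmore.h"

    int
    main(void)
    {
        /* Declare up front how many test cases will run. */
        plan_tests(3);

        ok(1 + 1 == 2, "addition works");

        /* is() compares two ints and shows both values on failure. */
        is(2 * 2, 4, "two times two");

        /* cmp_ok() takes the comparison operator as an argument. */
        cmp_ok(10, >, 5, "ten is greater than five");

        return 0;
    }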


LEAKS
=====

If you want to check for leaks in your test you can #include
"testleaks.h" in your program and call:

ok_leaks(NAME)
    Passes if there are no leaks in your program.

is_leaks(VALUE, NAME)
    Passes if there are exactly VALUE leaks in your program.  Useful
    if you are calling code that is known to leak and you can't fix
    it, but you still want to make sure there are no new leaks in
    your code.
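
A minimal sketch, assuming ok_leaks() counts as one test case like
the other macros:

    #include <stdlib.h>

    #include "testmore.h"
    #include "testleaks.h"

    int
    main(void)
    {
        plan_tests(1);

        /* Allocate and free, so no new leak should be reported. */
        char *buf = malloc(32);
        free(buf);

        ok_leaks("no leaks");

        return 0;
    }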


C++
===

For C++ programs you can #include "testcpp.h" which defines these
additional macros:

no_throw(EXPR, NAME)
    Succeeds if EXPR doesn't throw.

does_throw(EXPR, NAME)
    Succeeds if EXPR does throw.

is_throw(EXPR, CLASS, FUNC, VALUE, NAME)
    Succeeds if EXPR throws an exception of type CLASS and CLASS.FUNC == VALUE.
    Example usage:
    is_throw(CssmError::throwMe(42), CssmError, osStatus(), 42, "throwMe(42)");


TODO and SKIP
=============

Sometimes you write a test case that is known to fail (because you
found a bug).  Rather than commenting out that test case you should
put it inside a TODO block.  This will cause the test to run, but
the failure will not be reported as an error.  When the test starts
passing (presumably because someone fixed the bug) you can comment
out the TODO block and leave the test in place.

The syntax for doing this looks like so:

    TODO: {
        todo("<rdar://problem/4000000> ER: AAPL target: $4,000,000/share");

        cmp_ok(apple_stock_price(), >=, 4000000, "stock over 4M");
    }

Sometimes you don't want to run a particular test case or set of
test cases because something in the environment is missing or you
are running on a different version of the OS than the test was
designed for.  To achieve this you can use a SKIP block.

The syntax for a SKIP block looks like so:

    SKIP: {
        skip("only runs on Tiger and later", 4, os_version() >= os_tiger);

        ok(tiger_test1(), "test1");
        ok(tiger_test2(), "test2");
        ok(tiger_test3(), "test3");
        ok(tiger_test4(), "test4");
    }

How it works is like so: if the third argument to skip evaluates
to false, it breaks out of the SKIP block and reports N tests as
being skipped (where N is the second argument to skip).  The reason
for the test being skipped is given as the first argument to skip.


Utility Functions
=================

Anyone writing tests can add new utility functions.  Currently there
is a pair called tests_begin and tests_end.  To get them,
#include "testenv.h".  Calling them doesn't count as running a test
case, unless you wrap them in an ok() macro.  tests_begin creates
a unique dir in /tmp and sets HOME in the environment to that dir;
tests_end cleans up the mess that tests_begin made.
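
Here is a sketch of how they might be used; tests_begin's exact
signature isn't shown above, so the zero-argument call below is an
assumption:

    #include <stdlib.h>
    #include <unistd.h>

    #include "testmore.h"
    #include "testenv.h"

    int
    main(void)
    {
        plan_tests(1);

        /* Assumed zero-argument form: makes a scratch dir in /tmp
           and points HOME at it. */
        tests_begin();

        ok(access(getenv("HOME"), W_OK) == 0, "scratch dir is writable");

        /* Restores the previous cwd and removes the scratch dir. */
        tests_end(0);

        return 0;
    }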

When writing your own utility functions you will probably want to use
the setup("task") macro so that any uses of ok() and other macros
don't count as actual test cases run, but do report errors when they
fail.  Here is an example of how tests_end() does this:

int
tests_end(int result)
{
        setup("tests_end");
        /* Restore previous cwd and remove scratch dir. */
        return (ok_unix(fchdir(current_dir), "fchdir") &&
                ok_unix(close(current_dir), "close") &&
                ok_unix(rmdir_recursive(scratch_dir), "rmdir_recursive"));
}

setup() causes all tests until the end of the current function to not
count against your test case count, and they output nothing if they
succeed.
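
For instance, a hypothetical helper modeled on tests_end() might look
like this (the helper name and the checks it makes are illustrative,
not part of the suite):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #include "testmore.h"

    static int
    write_fixture(const char *path, const char *data)
    {
        int fd;

        /* setup() keeps the ok_unix() calls below from counting
           toward plan_tests() while still reporting any failures. */
        setup("write_fixture");
        return (ok_unix(fd = open(path, O_CREAT | O_WRONLY, 0600), "open") &&
                ok_unix(write(fd, data, strlen(data)), "write") &&
                ok_unix(close(fd), "close"));
    }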

There is also a simple utility header called "testcssm.h" which
currently defines cssm_attach and cssm_detach functions for loading
and initializing cssm and loading a module.