Revision as of 16:30, 23 January 2015
This page describes the API one should use to write a new test in ZDTM.
Overview
The ZDTM test-suite is an automated suite that launches individual subtests as processes, waits for them to get prepared, dumps the started process(es), restores them, and asks each test to check whether the restored state is what the test considers proper. To write more subtests, a developer should use the API described below. The tests themselves live in one of the test/zdtm/live/ sub-directories. The full-cycle suite is the test/zdtm.sh script.
API for test itself
Test body
The test should clearly describe 3 stages:
- Preparations
- These are the actions the test does before the process(es) get checkpointed
- Actions
- During this stage the process(es) will be interrupted at an arbitrary place and get dumped
- Checks
- After restore the test should verify whether everything is OK and report the result
This is achieved by using the following API calls:
test_init(argc, argv)
- Just initializes the test subsystem. After this call the preparations stage starts.
test_daemon()
- This one says that we have finished preparations and are ready to get dumped at any time, i.e. the actions begin.
test_go()
- This routine checks whether the process has been dumped and restored yet, i.e. whether the actions stage is still going on.
test_waitsig()
- Calling this blocks the task till it's restored. After this the checks stage starts.
pass()
- Calling this marks the test as PASSED
fail(message)
- Calling this reports a test failure. The message argument will be written to the logs
err(message)
- Use this function to mark the test as being unable to work at all.
test_msg(message)
- This is for logging.
From the code perspective this looks like this:
int main(int argc, char **argv)
{
	test_init(argc, argv);

	/* Preparations */

	test_daemon();

#if 1 /* test wants to act and get C/R-ed unexpectedly */
	while (test_go()) {
		/* Actions go here */
	}
#else /* test just wants to wait for C/R to happen */
	/* Actions go here. */
	test_waitsig();
#endif

	/* Checks */
	if (test_passed())
		pass();
	else
		fail("Something went wrong");

	return 0;
}
Options
Sometimes a test might need an option to work, e.g. a file or directory to work with, or some string to operate on. This can be done by declaring options.
TEST_OPTION(name, type, description);
The name is the name of the option. A variable of the same name should be declared, and the --$name CLI argument will be passed upon test start (the option is parsed by the test_init() call, so nothing more needs to be done to have it available).
The type is the data type. Currently supported types are string, bool, int, uint, long and ulong.
The description is just arbitrary text.
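Put together, an option declaration in a test source might look like the rough sketch below. The variable name and strings here are hypothetical; the TEST_OPTION() macro and test_init() come from the ZDTM harness (zdtmtst.h), so this fragment only builds inside the suite:

#include "zdtmtst.h"

char *filename;
TEST_OPTION(filename, string, "file to operate on");

int main(int argc, char **argv)
{
	/* test_init() parses --filename into the `filename` variable */
	test_init(argc, argv);

	/* ... use filename during the preparations and actions stages ... */
}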
Building and launching
Then the test should be added to the Makefile. There are several sections simple tests can be added to.
TST_NOFILE
- Add your test to this list if it accepts no options.
TST_FILE
- Tests here will be started with the --filename $some_name option. The file will not be created by the launcher; the test should take care of that itself.
TST_DIR
- These will be started with the --dirname $some_name option. The directory will not be created by the launcher; the test should take care of that itself.
TST
- These are tests with custom options. For those, one should also declare the starting rule in the Makefile, like this:
test.pid: test
	$(<D)/$(<F) --pidfile=$@ --outfile=$<.out --OPTIONS-HERE
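For the simpler lists, the change is just appending the test name. A hypothetical Makefile edit could look like this (the test names below are made up for illustration; the real lists are longer):

```make
# Hypothetical: mytest00 accepts no options
TST_NOFILE =	\
	env00	\
	mytest00

# Hypothetical: myfiletest00 will be given --filename by the launcher
TST_FILE =	\
	write_read00	\
	myfiletest00
```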
Checking test works
To make sure the test itself works, do these steps:
make cleanout     # to clean the directory from old stuff
make $test        # to build the test
make $test.pid    # to start the test
make $test.out    # to stop the test and ask it for the "checking" stage and results
All logs from the test, including the pass/fail status, will be in the $test.out file.
Adding test to automatic suite
The above part of the article describes what should be done to make the test work by itself. Now, to check how the test is dumped and restored, one should add it to the suite. This is done simply by editing the test/zdtm.sh script and adding the test name to the TEST_LIST variable. Some tricks apply, though.
Caring about namespaces
If your test behaves very differently when run inside a sub-namespace, or tests namespace-related stuff, then it should not be added to the very first definition of the TEST_LIST list, but instead to the one that goes after the # These ones are not in ns comment, optionally with the ns/ prefix.
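A rough sketch of how the two TEST_LIST definitions in test/zdtm.sh relate (the test names here are hypothetical):

```shell
# First definition: ordinary tests
TEST_LIST="
static/pipe00
static/env00"

# These ones are not in ns
TEST_LIST="$TEST_LIST
ns/static/session00"
```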
Requiring special features
If C/R of the test requires special support from the kernel, then CRIU itself should be taught to check for this support with the check --feature=$name option (see the cr-check.c file for details), and the test should be conditionally added to the TEST_LIST variable. Check how the static/tun test is added for reference.
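The conditional addition could look roughly like this sketch (the feature name and test names are only illustrative; see how static/tun is actually wired up in test/zdtm.sh):

```shell
TEST_LIST="static/env00"

# Hypothetical: include the test only when CRIU reports kernel support
if criu check --feature=tun > /dev/null 2>&1; then
	TEST_LIST="$TEST_LIST ns/static/tun"
fi
```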