Eiffeltest
Revision as of 09:51, 21 December 2005
This page specifies a tool that has not been written yet.
eiffeltest
is a tool that runs a suite of tests. It validates the compiler and the libraries against the provided test-suite. Users can also run a test-suite of their own, which is useful for the robustness of their projects.
Synopsis
eiffeltest directory
eiffeltest source_file.e [source_file2.e...]
File handling
The tool recursively iterates over the directory given as a command-line parameter, looking for test files, or iterates over the test files given by name.
Test files are Eiffel source files with special names:
- test_*.e: valid source file that should be compilable and runnable without causing an error
- bad_*.e: invalid source file that should trigger a given compiler error message
For each test file, there can be companion files that have the same name but a different extension. These files can be used to provide:
- output that the compiler is expected to produce when compiling bad_*.e files. This file, mandatory for invalid tests, has the file name extension .msg
- input data to be fed to the program. This optional file (allowed only for valid tests) has the file name extension .in
- output that the program is expected to produce when run. This optional file (allowed only for valid tests) has the file name extension .out
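The discovery step described above could be sketched as follows. This is a hypothetical illustration, not part of any existing tool; the function name collect_tests and the dictionary layout are assumptions.

```python
import os

def collect_tests(directory):
    # Walk the directory tree, keeping only test_*.e and bad_*.e files,
    # and record which companion files (.msg, .in, .out) exist for each.
    tests = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            base, ext = os.path.splitext(name)
            if ext == ".e" and (base.startswith("test_") or base.startswith("bad_")):
                path = os.path.join(root, base)
                tests[path + ".e"] = {
                    "kind": "valid" if base.startswith("test_") else "invalid",
                    "msg": path + ".msg" if os.path.exists(path + ".msg") else None,
                    "in": path + ".in" if os.path.exists(path + ".in") else None,
                    "out": path + ".out" if os.path.exists(path + ".out") else None,
                }
    return tests
```

A real harness would then reject a bad_*.e file without its mandatory .msg companion.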
Why not remove the 'test_' and 'bad_' substrings from the test names? That reduces clutter and leaves more space for readable, self-documenting filenames.
The test harness can look for response files that match the test prefix. You already have two suffixes: *.out (for output) and *.msg (for a compiler error message). Another useful one is *.match (for a regular expression match on the output, because the output might contain things like date/time/version-number that are not relevant to the test).
That is how I'm doing testing for the Amber project, and it seems to work well. --Roger Browne 2005-11-26
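The *.match idea above could be sketched like this (a hypothetical helper, not taken from the Amber project): each line of the .match file is a regular expression that the corresponding line of the actual output must satisfy in full, so volatile parts like dates or version numbers can be written as patterns.

```python
import re

def output_matches(output_lines, match_lines):
    # The output matches when it has the same number of lines and each
    # line fully matches the regular expression on the same line of
    # the .match file.
    if len(output_lines) != len(match_lines):
        return False
    return all(re.fullmatch(pattern, line) is not None
               for pattern, line in zip(match_lines, output_lines))
```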
That's a good point to raise, if only to mention that the current test suite (pre-eiffeltest) does a "fuzzy" matching of error messages (using SmartEiffel/tools/commands/same_msg.e).--FM 21:42, 19 Dec 2005 (CET)
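For illustration only, a "fuzzy" comparison might normalize letter case and whitespace before comparing. This is a guess at the idea; the actual logic lives in SmartEiffel/tools/commands/same_msg.e and may differ.

```python
def same_msg(expected, actual):
    # Compare two compiler messages while ignoring differences in
    # letter case, line breaks, and runs of whitespace.
    def normalize(s):
        return " ".join(s.lower().split())
    return normalize(expected) == normalize(actual)
```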
Open issues
- How can we use .in and .out files on platforms that do not allow redirecting the standard input and output of programs?
Easy: just finish the "exec" cluster ;-) --Cyril 08:16, 10 nov 2005 (CET)
Sounds good. Can you give a brief listing of what is to be done? --pini 23:36, 25 nov 2005 (CET)
The win32 port (at least) is to be written. Ah, and testing ;-) --Cyril 15:39, 28 Nov 2005 (CET)
- Which compilation modes are used for testing? There are many options and running all possibilities is probably too much.
Flags I can remember: -no_gc, -flat_check, -no_split, plus one of (-debug_check | -all_check | -loop_check | -invariant_check | -ensure_check | -require_check | -no_check | -boost). This means compiling and running about 64 times (minus the incompatibilities between -flat_check and -boost/-no_check).
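The combination count above can be checked by enumeration: three independent on/off flags give 8 subsets, times 8 assertion levels, is 64 runs; dropping the (assumed) incompatible pairings of -flat_check with -boost or -no_check leaves 56. A sketch, with the incompatibility rule being my assumption:

```python
import itertools

TOGGLES = ["-no_gc", "-flat_check", "-no_split"]
LEVELS = ["-debug_check", "-all_check", "-loop_check", "-invariant_check",
          "-ensure_check", "-require_check", "-no_check", "-boost"]

def mode_combinations():
    # Every subset of the independent toggles, combined with exactly one
    # assertion level, skipping combinations assumed to be incompatible.
    combos = []
    for n in range(len(TOGGLES) + 1):
        for picked in itertools.combinations(TOGGLES, n):
            for level in LEVELS:
                if "-flat_check" in picked and level in ("-no_check", "-boost"):
                    continue  # flat checking makes no sense without assertions
                combos.append(list(picked) + [level])
    return combos
```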
- How to test some specific capabilities: options -profile, -no_main, -c_mode, -sedb, -cecil, or testing with c2j?
- What about tests specific to some compiler mode? For example GC tests should not be run without GC, optimizer tests are invalid in modes other than -boost...
Perhaps we could use indexing/notes clauses with tags indicating which modes/options are relevant for the test? --Dmoisset 22:44, 26 Nov 2005 (CET)
Nice idea. I would improve it by proposing the opposite: list the modes/options to exclude. This means that all modes will be checked by default. If a test is incompatible with some mode, then that has to be specified. If a new mode is added, it will be tested with all tests; tests that are not compatible with it have to be excluded. Otherwise, you would have to edit about every test to add this mode. -- Philippe Ribet
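The exclusion idea could be sketched as follows. The indexing tag name exclude_modes and its quoted-list syntax are hypothetical, invented here for illustration; no such convention exists yet.

```python
import re

def excluded_modes(eiffel_source):
    # Look for a hypothetical indexing tag such as
    #    exclude_modes: "-boost", "-no_check"
    # and return the set of quoted mode names it lists.
    m = re.search(r'exclude_modes:\s*(.*)', eiffel_source)
    if not m:
        return set()
    return set(re.findall(r'"([^"]+)"', m.group(1)))

def should_run(eiffel_source, mode):
    # By default every mode runs the test; only listed modes are skipped,
    # so newly added modes are automatically covered.
    return mode not in excluded_modes(eiffel_source)
```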