Saving and Using Test Runtimes
When vvtest knows how long each test takes to run, then filtering by runtime is possible and batch performance can be greatly improved. Runtimes are also used to determine test timeouts.
The filters that need and use test runtimes include --tmin, --tmax, and --tsum.
For batching, tests are collected into groups, and each group becomes a batch job. Running multiple tests in a single batch job improves throughput, because batch overhead is reduced and because submitting fewer batch jobs is more favorable to queue priority policies. But of course, test runtimes are needed in order to collect the tests into groups and to know how much time to request for each job.
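To make the role of runtimes concrete, here is a minimal sketch of grouping tests under a per-job time limit. This is not vvtest's actual batching algorithm, and the test names and runtimes are made up; it only illustrates why an expected runtime is needed for each test.

```python
def group_tests_by_runtime(runtimes, job_limit):
    """Greedily pack tests into groups whose total runtime stays under job_limit.

    'runtimes' maps a test name to its expected runtime in seconds.
    Returns a list of (tests, total_runtime) pairs, one per batch job.
    """
    groups = []
    current, total = [], 0.0
    # place the longest tests first so that large tests anchor their own groups
    for test, rt in sorted(runtimes.items(), key=lambda kv: kv[1], reverse=True):
        if current and total + rt > job_limit:
            groups.append((current, total))
            current, total = [], 0.0
        current.append(test)
        total += rt
    if current:
        groups.append((current, total))
    return groups

# hypothetical runtimes (in seconds) and a 30 minute per-job limit
example = {'atest': 1200.0, 'btest': 300.0, 'ctest': 90.0, 'dtest': 1500.0}
for tests, secs in group_tests_by_runtime(example, 1800.0):
    print(tests, '-> request about', int(secs), 'seconds')
```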
Providing runtimes to vvtest implies access to previously run test results.
A simple way to do this is to use the --save-results option to save test results into
a persistent directory location. That location can then be read in by vvtest during
subsequent invocations.
Here is a simple method for maintaining and using previous test results:
- Define a directory location for the project by augmenting the vvtest_user_plugin.py file or by defining the VVTEST_RESULTS_DIR environment variable.
- Add the --save-results option to the project's automated testing processes. A project will commonly run on multiple platforms nightly, with possibly longer-running tests run twice per week, for example.
- Also add the --results-update=10 option to the project's automated testing processes. This will remove test results older than 10 days (the number 10 is arbitrary), in order to avoid a buildup of test results in the persistent directory.
- As long as the plugin file or environment variable is defined for vvtest users, the saved results and runtimes will be available for all vvtest invocations.
An important underlying operation is test identification, which is needed to find the right runtime for any given test. If you always operate out of a single Git repository, then the default mechanism should just work. However, read the section below on Test Identification to handle projects with multiple test repositories.
The directory used to save results just has to be readable and writable by the automated processes and readable by project users. It can be shared across platforms and vvtest -o options, or it can be specific to each.
Since the directory will be used so often, the project will want a persistent way to specify
it. If the project maintains environment variables (through environment modules, for example)
then the VVTEST_RESULTS_DIR can be defined with the path to the persistent results directory.
Another way is to augment the vvtest configuration plugin file. See
Project specific configuration and plugins.
Add a function named results_directory to the vvtest_user_plugin.py file.
For example,
```python
import os

def results_directory( platname, options ):
    """
    The 'platname' is the vvtest platform name for this invocation.
    The 'options' is a list of `-o` option values, if given on the command line.
    """
    if os.path.exists( '/projects/coolproj/testresults' ):
        return '/projects/coolproj/testresults'
    elif os.path.exists( '/gapps/coolproj/testresults' ):
        return '/gapps/coolproj/testresults'
    return None
```

It is not an error to return None from this function - it just means test results are not available to vvtest. The same directory can be used regardless of platform and options, but they are provided just in case.
After a vvtest run completes, the test results and runtimes can be saved to the
results directory by adding the --save-results command line option. This option
can be given on the same command line that runs the tests, or a separate vvtest
invocation can be performed with -i --save-results.
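As a minimal sketch of the two patterns above, and assuming vvtest is available on PATH and is invoked from the test results working area, an automation script could drive them like this (plain shell commands would work just as well; Python's subprocess is used only for illustration):

```python
import subprocess

# Run the tests and save the results in the same invocation.
subprocess.run(['vvtest', '--save-results'], check=True)

# Or, after a normal run has already completed, save the results from the
# existing test results area with a separate invocation.
subprocess.run(['vvtest', '-i', '--save-results'], check=True)
```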
Usually, a convenient strategy to run and save results is just to modify the normal
scheduled/automated testing runs by adding the --save-results option. The test
results and runtimes will be saved in the results directory in separate files whose
names take the form
vvtresults.YYYY_MM_DD_HHh_MMm_SSs.<platform name>.<options>-<arbitrary tag>
The platform name always appears, but the options are included only if given on the command line, and likewise the arbitrary tag (provided with the --results-tag option). The file format is line-by-line JSON, where each line is a dictionary of attributes for a single test.
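Because each line is a self-contained JSON dictionary, the files are easy to post-process. Here is a minimal sketch of collecting runtimes from one of the saved files; the attribute names 'testid' and 'runtime' are hypothetical placeholders, since the actual keys are not documented here:

```python
import json

def read_runtimes(results_filename):
    """Build a mapping of test id to runtime from a line-by-line JSON results file."""
    runtimes = {}
    with open(results_filename, 'r') as fp:
        for line in fp:
            line = line.strip()
            if not line:
                continue
            attrs = json.loads(line)  # one dictionary of test attributes per line
            # 'testid' and 'runtime' are assumed key names, used only for illustration
            if 'testid' in attrs and 'runtime' in attrs:
                runtimes[attrs['testid']] = attrs['runtime']
    return runtimes
```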
Since the files are meant to be shared with others on the project, it is suggested
to use the --perms option to set the permissions of each new file (the tail directory
will be created too, if it does not already exist). For example,
--perms "wg-sable-dev,g=rX,o=".
Since each vvtest run that adds --save-results will result in a new file in the
results directory, the number of files can grow without bound. This growth can be
prevented by adding the --results-update=<value> command line option to the same
vvtest invocation that includes the --save-results option. The value is
the number of days to keep, such as 10 days' worth of files.
The file deletion only considers for removal files that have the same platform, options, and tag value. This avoids file removal race conditions and provides more control, but it does mean that each automated process that adds --save-results would also have to add --results-update to avoid unbounded file growth.
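The pruning behavior can be pictured with the following sketch. This is not vvtest's implementation; it simply parses the date stamp out of the vvtresults file names and removes files older than the requested number of days, restricted to a single platform/options/tag combination:

```python
import os, re, time

DATE_RE = re.compile(r'vvtresults\.(\d{4})_(\d{2})_(\d{2})_')

def prune_results(results_dir, suffix, days_to_keep):
    """Remove vvtresults files older than 'days_to_keep' whose names end with 'suffix'.

    The 'suffix' is the '<platform name>.<options>-<tag>' part of the file name, so
    only files from the same platform/options/tag combination are considered.
    """
    cutoff = time.time() - days_to_keep * 24 * 60 * 60
    for fname in os.listdir(results_dir):
        match = DATE_RE.match(fname)
        if match and fname.endswith(suffix):
            year, month, day = (int(v) for v in match.groups())
            file_time = time.mktime((year, month, day, 0, 0, 0, 0, 0, -1))
            if file_time < cutoff:
                os.remove(os.path.join(results_dir, fname))
```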
In general, tests are identified in vvtest using the path to the test source file
(a *.vvt file) relative to the top of the repository (the repository root), plus the
test name and its parameters. And the existence of a .git directory is used to identify
the root level of the repository. For example,
repo
|-- .git
|   |-- config
|-- adir
    |-- atest.vvt
Test "atest" could be identified as a tuple of strings (adir/atest.vvt, atest).
However, suppose more tests are contained in another repository with the same
directory structure. For example,
.
|-- repo1
|   |-- .git
|   |   |-- config
|   |-- adir
|       |-- atest.vvt
|-- repo2
    |-- .git
    |   |-- config
    |-- adir
        |-- atest.vvt
In this case, the default identification mechanism could not distinguish between "atest" from repo1 or from repo2.
To solve this problem, a special marker file can be added to the repositories to name the root level. For example, add a file named ".vvtest.txt" to the top level of repo1 with contents
ROOTPATH=repo1
and add the same filename to repo2 with contents ROOTPATH=repo2. So we have
.
|-- repo1
|   |-- .vvtest.txt
|   |-- .git
|   |   |-- config
|   |-- adir
|       |-- atest.vvt
|-- repo2
    |-- .vvtest.txt
    |-- .git
    |   |-- config
    |-- adir
        |-- atest.vvt
With the addition of the .vvtest.txt files, the "atest" from repo1 would be identified
with tuple (repo1/adir/atest.vvt, atest) and from repo2 would be
(repo2/adir/atest.vvt, atest).
Some details about the .vvtest.txt marker file:
- The root directory is determined by first looking for a .vvtest.txt file in the test tree. If that file is not found, then a .git directory is looked for.
- The contents can be empty, which marks the root level but does not provide a name. The effect is the same as if a .git directory existed at that location.
- The value of ROOTPATH can be an arbitrary directory path, such as ROOTPATH=arbitrary/path/segments. This can be useful for naming test repositories that are laid out using Git submodules; the names can be made to mimic the layout.
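Putting these rules together, here is a rough sketch, not vvtest's actual code, of how an identification tuple could be computed: walk up from the test file looking for a .vvtest.txt marker (falling back to a .git directory), read an optional ROOTPATH value, and join it with the test file path relative to that root.

```python
import os

def find_test_root(test_file):
    """Walk up from the test file looking for .vvtest.txt, falling back to .git."""
    d = os.path.dirname(os.path.abspath(test_file))
    fallback = None
    while True:
        if os.path.isfile(os.path.join(d, '.vvtest.txt')):
            return d
        if fallback is None and os.path.isdir(os.path.join(d, '.git')):
            fallback = d
        parent = os.path.dirname(d)
        if parent == d:
            return fallback
        d = parent

def read_rootpath(rootdir):
    """Return the ROOTPATH value from the .vvtest.txt marker, or an empty string."""
    marker = os.path.join(rootdir, '.vvtest.txt')
    if os.path.isfile(marker):
        with open(marker, 'r') as fp:
            for line in fp:
                if line.strip().startswith('ROOTPATH='):
                    return line.strip().split('=', 1)[1]
    return ''

def test_id(test_file, test_name):
    """Return the (relative path, test name) identification tuple described above."""
    root = find_test_root(test_file)
    if root is None:
        return (test_file, test_name)  # no root marker found; sketch fallback
    relpath = os.path.relpath(os.path.abspath(test_file), root)
    rootpath = read_rootpath(root)
    if rootpath:
        relpath = os.path.join(rootpath, relpath)
    return (relpath, test_name)
```

For the layout above, test_id('repo1/adir/atest.vvt', 'atest') would produce ('repo1/adir/atest.vvt', 'atest'), and the repo2 test would be distinguished as ('repo2/adir/atest.vvt', 'atest').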