There may be times when you wish to influence the building and testing carried out on your change. You might, for example, be fixing an issue that affects a particular distribution, CPU architecture, or interoperability with a particular Lustre version, or one that fails only intermittently and needs multiple test runs to confirm it is fixed. To address these needs, you can change or add the tests carried out by indicating the changes you require in the commit message. Test-Parameters sessions normally run in addition to the normal test sessions that would be run against a patch. This allows a patch with specific or unusual testing requirements to receive enough additional testing to gain confidence in the change being made. For patches that are experimental in nature (i.e. the developer is not sure of the functionality, or only wants a limited set of tests run just to try something), it is also possible to submit a patch with the fortestonly parameter.
To run additional sessions for your patch, add a Test-Parameters: line with space-separated name=value pairs to your commit message (order within the line does not matter).
For example:
Test-Parameters: testlist=conf-sanity env=ONLY=61,ONLY_REPEAT=20
This will cause Autotest to run the normal test sessions plus one additional session in which only conf-sanity.sh test_61 will run, repeated 20 times. The parameters specified in Test-Parameters do not affect the existing sessions that are run by default.
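For context, here is a hedged sketch of where the line sits in a full commit message; the ticket number, summary, body, and sign-off below are all hypothetical:

```text
LU-00000 tests: example change summary

Hypothetical body text describing the change.

Test-Parameters: testlist=conf-sanity env=ONLY=61,ONLY_REPEAT=20
Signed-off-by: Jane Developer <jane@example.com>
```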
Multiple 'Test-Parameters:' lines can be defined, with each Test-Parameters line triggering a separate test session that runs in parallel with the default test sessions (parameters do not carry over from one line to the next):
Test-Parameters: testgroup=review-ldiskfs clientdistro=sles12sp3 serverdistro=el9.3
Test-Parameters: testgroup=review-ldiskfs clientdistro=ubuntu1804 serverdistro=el7.9
Long lists can be accommodated by escaping the newline with a backslash; Test-Parameters lines are otherwise not subject to the 70-character limit recommended for commit message lines:
Test-Parameters: ostcount=2 clients=1 ostsizegb=2 mdssizegb=2 env=SLOW=yes \
testlist=sanity,liblustre
Quotation marks can be used when spaces are necessary in a value:
Test-Parameters: testlist=sanity env=SANITY_EXCEPT="101g 102i"
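The name=value format above is simple enough to sketch in a few lines. The following is an illustrative parser only, not Autotest's actual implementation (the function name and exact behavior are assumptions):

```python
import shlex

def parse_test_parameters(line):
    """Split a 'Test-Parameters:' line into a name -> value dict.

    Illustrative sketch only, not Autotest's real parser. shlex.split
    honors the quoting shown above, so env=SANITY_EXCEPT="101g 102i"
    stays one token with the quotes stripped.
    """
    prefix = "Test-Parameters:"
    if line.startswith(prefix):
        line = line[len(prefix):]
    params = {}
    for token in shlex.split(line):
        name, _, value = token.partition("=")
        params[name] = value  # order within the line does not matter
    return params

print(parse_test_parameters(
    'Test-Parameters: testlist=sanity env=SANITY_EXCEPT="101g 102i"'))
# → {'testlist': 'sanity', 'env': 'SANITY_EXCEPT=101g 102i'}
```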
The test parameter sessions can be influenced in many ways; see the General Parameters and Node Parameters sections below for all of the options.
Below are the build parameters read by Jenkins and the Lustre Janitor.
Signals to all components that the patch is intended for Lustre Janitor testing only. Jenkins will not build the patch (its status will be FAILURE) and therefore Autotest will not test it. Your patch will not receive a +/-1 from Maloo.
Signals to all components that the patch should be completely ignored. Jenkins will not build the patch (its status will be FAILURE) and, as a result, it will not be tested by Autotest or the Lustre Janitor. Your patch will not receive a +/-1 from Maloo.
Below is the list of general test parameters that can be used to run custom test sessions. These parameters differ from the node parameters in that they do not need to be specified with a node type prefix.
For all of the examples below, the 'Test-Parameters:' marker has been omitted for simplicity.
Pass-through options for auster. Valid values: see the "Auster usage help" section on the Setting up a Lustre Test Environment wiki page.
When true, the MGT will share a partition with an MDT. When false, Autotest will create an additional partition to be used by just the MGT. NOTE: mdtfilesystemtype will override mgtfilesystemtype when combinedmdsmgs is true. If standalonemgs is set to true, this option will be ignored. Valid values: true, false (no value is the same as true)
Comma-separated environment definitions passed to the test environment. For definitions requiring spaces, enclose them in quotation marks. NOTE: Be very careful setting environment variables directly (for example OSTFSTYPE=zfs), because Autotest creates a config file based on the environment it builds. If you ask for something at odds with Autotest's expectations, you will see failure instead of success. In this case, for example, you should use the filesystemtype keyword described on this page; Autotest will then create the appropriate environment variables. The same is true for other settings, such as ostcount instead of OSTCOUNT and ostsizegb instead of OSTSIZE. Only use the env parameter directly when no equivalent keyword exists.
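As an illustration, keywords and direct environment definitions can be mixed on one line; the exact combination below is hypothetical, built from keywords documented on this page:

```text
Test-Parameters: testlist=sanity ostcount=4 env=SLOW=yes,SANITY_EXCEPT="101g 102i"
```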
Used to specify which node should run the test framework. Default is client1. Valid values: mds1, oss1, client2, etc.
Sets up the cluster in a failover configuration. Valid values: true, false (no value is the same as true)
Patch will be built by Jenkins, but Autotest will not run any testing. Valid values: true, false (no value is the same as true)
Valid values: true, false (no value is the same as true)
Sets the file system type for all server nodes (MDS, MGS and OSS). NOTE: fstype should not be used in combination with <mdt/mgt/ost>filesystemtype. Valid values: ldiskfs, zfs
Use iSCSI for failover testing. Valid values: 0 (no iSCSI), 1 (iSCSI)
Sets the cluster up to use Kerberos. Valid values: true, false (no value is the same as true)
For DDN employees only: How to: livedebug
The total number of MDS nodes in the cluster. Valid values: 1 - 8 MDS nodes
Sets the size of the partitions on the MDS in GB. Valid values: > 0 and must be an integer. When setting this value, take into consideration your mdtcount value and that most test nodes have ~90GB of disk space.
The total number of MDTs spread across all of the MDSs. Valid values: 1 - 4 per MDS
Configure the file system type to use on the MDTs. NOTE: <mdt/mgt/ost>filesystemtype should not be used in combination with fstype. Valid values: ldiskfs, zfs
Sets the size of the partition on the MGS in GB. This value is only applicable when the session is using a standalone MGS. Valid values: > 0 and must be an integer. When setting this value, take into consideration that most test nodes have ~90GB of disk space.
Configure the file system type to use on the MGTs. NOTE: The default value for combinedmdsmgs is true, therefore it must be set to false in order to configure a different file system type for the MGT. Also, <mdt/mgt/ost>filesystemtype should not be used in combination with fstype. Valid values: ldiskfs, zfs
Marks the test session as optional: does not impact the verified value from Maloo and is only run if resources are immediately available. Valid values: true, false (no value is the same as true)
The number of OSTs per OSS. Valid values: 1 - 8
Configure the file system type to use on the OSTs. NOTE: <mdt/mgt/ost>filesystemtype should not be used in combination with fstype. Valid values: ldiskfs, zfs
Sets the size of the partitions on the OSS in GB. Valid values: > 0 and must be an integer. When setting this value, take into consideration your ostcount value and that most test nodes have ~90GB of disk space.
When true, Autotest will continue the session after a suite crashes. The session will continue with the suite following the one that crashed. When false, Autotest will stop the session and upload the results to Maloo immediately. Valid values: true, false (no value is the same as true)
Marks the session as enforcing: is used to determine the verified value from Maloo. Valid values: true, false (no value is the same as true)
Provisions a separate test node as the MGS. The MGS will be set up with the same parameters as the MDS unless they are overridden. Valid values: true, false (no value is the same as true)
Specifies the subproject of the patch. Subprojects must be added by an Autotest admin; currently the only configured subproject is lnet. Valid values: lnet
Configures the test session to run a specific test group. Specifying a test group makes it easy to run a typical test grouping with small modifications, since the session will inherit all of the values from the base test with any overrides applied. NOTE: testgroup can be combined with testlist to run a test group plus additional suites. Valid values: see the Test Groups section for a complete list
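For example, a session can inherit a test group while overriding a node parameter, or extend a group with an extra suite; the specific values below are illustrative, not recommendations:

```text
Test-Parameters: testgroup=review-ldiskfs serverdistro=el9.3
Test-Parameters: testgroup=review-dne-part-1 testlist=sanity-lnet
```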
Configures the test session to run a specific list of suites. testlist can be combined with testgroup to run a test group plus additional suites. Valid values: sanity, sanityn, conf-sanity, mmp, replay-single, replay-dual, lnet-selftest, etc.
Configures the session to re-provision all of the nodes instead of simply rebooting them. Valid values: true, false (no value is the same as true)
Valid values: true, false (no value is the same as true)
Node parameters are used to change how specific node types are configured. They must be prefixed with the node type being changed. Valid node types are client, mds, mgs, oss, and server. Server is an alias that allows users to modify a value for all server node types (mds, mgs, and oss). For example, instead of writing
mdsdistro=el7 mgsdistro=el7 ossdistro=el7
users can simply write
serverdistro=el7
Sets the architecture for the specified node type. Valid values: x86_64, ppc64, aarch64 (the architecture must have been built for the patch)
Used in conjunction with job to install a specific build on the specified node. job must be specified with buildno, and version cannot be specified with buildno. Valid values: any valid Jenkins build number for the specified job
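As a hedged sketch, pinning the client nodes to a specific Jenkins build would combine the two parameters; the job name and build number are taken from the Versions table below, and the clientjob/clientbuildno spelling assumes the node-prefix convention described in this section:

```text
Test-Parameters: clientjob=lustre-b2_15 clientbuildno=81
```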
The number of nodes to use for the specified node type. NOTE: For MDS and OSS nodes, it's best to also set mdtcount / ostcount to ensure you have the expected number of targets. Valid values:
Distribution to use for the specified node type. Valid values: el7, el7.5, el7.6, ubuntu1604, ubuntu1804, sles12sp3, etc. (The distribution specified must have been built for the patch being tested. The distros built can be seen on the patch's build page in Jenkins.)
The IB stack to use for the specified node type. Valid values: inkernel, ofa (The IB stack type must have been built for the build being tested. The stack types can be seen on the patch's build page in Jenkins.)
Used with buildno to install a specific build on the specified node. buildno must be specified with job. version cannot be specified with job. Valid values: Any valid Jenkins job, such as lustre-reviews, lustre-master, etc.
Enables SELinux on the specified node type. Valid values: true, false (no value is the same as true)
The version of Lustre to use for the specified node type. version cannot be specified with job or buildno. If a distro is not specified in the test parameters, Autotest will use the highest el version available for the specified Lustre version. Valid values: for a list of valid versions, see the Versions section.
Versions are pointers to job/build combinations and simplify using a specific Lustre version on a test node. Versions can be specified in the test parameters using the version node parameter.
The versions in parentheses auto-update as new versions are added. For example, 1.9 will always point to the highest 1.9.x version, and EXA1 will always point to the highest EXA1.x.x version.
If a version is missing, send an email to charlie@whamcloud.com.
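For interoperability testing, the version parameter can point a node type at one of the releases below. A hedged sketch (the serverversion spelling assumes the node-prefix convention from the Node Parameters section, and per the note above 2.12 resolves to the highest 2.12.x release):

```text
Test-Parameters: serverversion=2.12 testgroup=review-dne-part-1
```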
Version | Build |
---|---|
2.7.0 (2.7) | |
2.8.0 (2.8) | |
2.9.0 (2.9) | |
2.10.0 | |
2.10.1 | |
2.10.2 | |
2.10.3 | |
2.10.4 | |
2.10.5 | |
2.10.6 | |
2.10.7 | |
2.10.8 (2.10) | |
2.11.0 (2.11) | |
2.12.0 | |
2.12.1 | |
2.12.2 | |
2.12.3 | |
2.12.4 | |
2.12.5 | |
2.12.6 | |
2.12.7 | |
2.12.8 | |
2.12.9 (2.12) | |
2.13.0 (2.13) | |
2.14.0 (2.14) | |
2.15.0 | |
2.15.1 | https://build.whamcloud.com/job/lustre-b2_15/28/ |
2.15.2 | https://build.whamcloud.com/job/lustre-b2_15/48/ |
2.15.3 | https://build.whamcloud.com/job/lustre-b2_15/65/ |
2.15.4 (2.15) | https://build.whamcloud.com/job/lustre-b2_15/81 |
EXAScaler Versions
Test groups are set lists of Lustre test suites managed by Autotest.
Name | Suites |
---|---|
basic | conf-sanity |
failover-part-1 | replay-vbr, replay-dual, replay-single, mmp, replay-ost-single, recovery-small, recovery-double-scale |
failover-part-2 | recovery-random-scale |
failover-part-3 | recovery-mds-scale |
failover-zfs-part-1 | replay-vbr, replay-dual, replay-single, mmp, replay-ost-single, recovery-small, recovery-double-scale |
failover-zfs-part-2 | recovery-random-scale |
failover-zfs-part-3 | recovery-mds-scale |
full-dne-part-1 | sanity-scrub, replay-single, obdfilter-survey, replay-ost-single, large-scale, insanity, parallel-scale, runtests, replay-dual, sanity-flr, sanity-lsnapshot, mmp, sanity-dom, mds-survey, sanity-lfsck, sanity-lnet, pjdfstest, ost-pools, recovery-small |
full-dne-part-2 | lnet-selftest, sanity, sanity-hsm, lustre-rsync-test, sanity-sec, replay-vbr, parallel-scale-nfsv3, sanity-quota, sanity-pcc, sanity-lipe-find3, racer |
full-dne-part-3 | sanity-pfl, performance-sanity, sanity-benchmark, conf-sanity, sanityn, parallel-scale-nfsv4, hot-pools, sanity-lipe, sanity-lipe-scan3 |
full-dne-zfs-part-1 | sanity-scrub, replay-single, obdfilter-survey, replay-ost-single, large-scale, insanity, parallel-scale, runtests, replay-dual, sanity-flr, sanity-lsnapshot, mmp, sanity-dom, mds-survey, sanity-lfsck, sanity-lnet, pjdfstest, ost-pools, recovery-small |
full-dne-zfs-part-2 | lnet-selftest, sanity, sanity-hsm, lustre-rsync-test, sanity-sec, replay-vbr, parallel-scale-nfsv3, sanity-quota, sanity-pcc, sanity-lipe-find3, racer |
full-dne-zfs-part-3 | sanity-pfl, performance-sanity, sanity-benchmark, conf-sanity, sanityn, parallel-scale-nfsv4, hot-pools, sanity-lipe, sanity-lipe-scan3 |
full-part-1 | sanity-scrub, replay-single, obdfilter-survey, replay-ost-single, large-scale, insanity, parallel-scale, runtests, replay-dual, sanity-flr, sanity-lsnapshot, mmp, sanity-dom, mds-survey, sanity-lfsck, sanity-lnet, pjdfstest, ost-pools, recovery-small |
full-part-2 | lnet-selftest, sanity, sanity-hsm, lustre-rsync-test, sanity-sec, replay-vbr, parallel-scale-nfsv3, sanity-quota, sanity-pcc, sanity-lipe-find3, racer |
full-part-3 | sanity-pfl, performance-sanity, sanity-benchmark, conf-sanity, sanityn, parallel-scale-nfsv4, hot-pools, sanity-lipe, sanity-lipe-scan3 |
full-patchless | lnet-selftest, runtests, sanity, sanity-scrub, sanity-benchmark, sanity-lfsck, sanityn, sanity-hsm, sanity-flr, sanity-dom, sanity-lsnapshot, insanity, sanity-quota, sanity-sec, sanity-pfl, lustre-rsync-test, ost-pools, mds-survey, mmp, performance-sanity, parallel-scale, large-scale, obdfilter-survey, parallel-scale-nfsv3, parallel-scale-nfsv4, pjdfstest, sanity-lnet, racer |
full-zfs-dkms | sanity-scrub, replay-single, obdfilter-survey, replay-ost-single, large-scale, insanity, parallel-scale, runtests, replay-dual, sanity-flr, sanity-lsnapshot, mmp, sanity-dom, mds-survey, sanity-lfsck, sanity-lnet, pjdfstest, ost-pools, recovery-small |
full-zfs-part-1 | sanity-scrub, replay-single, obdfilter-survey, replay-ost-single, large-scale, insanity, parallel-scale, runtests, replay-dual, sanity-flr, sanity-lsnapshot, mmp, sanity-dom, mds-survey, sanity-lfsck, sanity-lnet, pjdfstest, ost-pools, recovery-small |
full-zfs-part-2 | lnet-selftest, sanity, sanity-hsm, lustre-rsync-test, sanity-sec, replay-vbr, parallel-scale-nfsv3, sanity-quota, sanity-pcc, sanity-lipe-find3, racer |
full-zfs-part-3 | sanity-pfl, performance-sanity, sanity-benchmark, conf-sanity, sanityn, parallel-scale-nfsv4, hot-pools, sanity-lipe, sanity-lipe-scan3 |
lnet-review-ldiskfs-dne | lnet-selftest, sanity, sanity-lnet, sanity-sec |
review | lnet-selftest, runtests, sanity, sanityn, replay-single, conf-sanity, recovery-small, replay-ost-single, insanity, sanity-quota, lustre-rsync-test, ost-pools, sanity-lfsck, sanity-hsm, sanity-lnet |
review-dne-part-1 | sanity, sanity-pfl |
review-dne-part-2 | mds-survey, replay-dual, runtests, sanity-lfsck, sanity-sec |
review-dne-part-3 | conf-sanity |
review-dne-part-4 | insanity, mmp, replay-ost-single, sanity-dom, sanity-flr, sanity-hsm, sanity-quota |
review-dne-part-5 | lustre-rsync-test, recovery-small, sanity-scrub, sanityn |
review-dne-part-6 | replay-single, ost-pools |
review-dne-part-7 | large-scale, hot-pools, sanity-pcc, sanity-lipe |
review-dne-part-8 | parallel-scale, replay-vbr, replay-dual, racer |
review-dne-part-9 | obdfilter-survey, performance-sanity, sanity-benchmark, parallel-scale-nfsv4, parallel-scale-nfsv3 |
review-dne-selinux-ssk-part-1 | sanity |
review-dne-selinux-ssk-part-2 | recovery-small, sanity-sec, sanity-selinux |
review-dne-zfs | runtests, sanity, sanityn, replay-single, conf-sanity, recovery-small, replay-ost-single, insanity, sanity-quota, lustre-rsync-test, ost-pools, sanity-lfsck, sanity-hsm |
review-dne-zfs-part-1 | sanity, sanity-pfl |
review-dne-zfs-part-2 | mds-survey, replay-dual, runtests, sanity-lfsck, sanity-sec |
review-dne-zfs-part-3 | conf-sanity |
review-dne-zfs-part-4 | insanity, mmp, replay-ost-single, sanity-dom, sanity-flr, sanity-hsm, sanity-quota |
review-dne-zfs-part-5 | lustre-rsync-test, recovery-small, sanity-scrub, sanityn |
review-dne-zfs-part-6 | replay-single, ost-pools |
review-dne-zfs-part-7 | large-scale, hot-pools, sanity-pcc, sanity-lipe |
review-exa6 | sanity-pcc, sanity-sec |
review-ldiskfs | lnet-selftest, sanity, sanity-lnet |
review-ldiskfs-arm | lnet-selftest, sanity, sanity-lnet |
review-ldiskfs-dne | lnet-selftest, sanity, sanity-lnet |
review-ldiskfs-dne-arm | lnet-selftest, sanity, sanity-lnet |
review-ldiskfs-ubuntu | lnet-selftest, sanity, sanity-lnet |
review-zfs | sanity-quota, sanity-flr, replay-single, replay-ost-single, insanity |
review-zfs-part-1 | runtests, sanity, sanityn, sanity-quota, ost-pools, sanity-lfsck, sanity-hsm, sanity-flr |
review-zfs-part-2 | replay-single, conf-sanity, recovery-small, replay-ost-single, insanity, lustre-rsync-test, large-scale, mds-survey |
tiny | sanity, mmp |