The plan for Buffalo 3 [PUB:probably to be called Maloo (Aboriginal for thunder)] is that it will become a complete set of tools for creating, managing, running, and analysing the results of Lustre validation efforts. We require the validation tools to form a closed loop, with the results of the process feeding back to enable greater productivity in the future.
Because no individual can possibly know what each of the many hundreds of tests does, anyone who changes a line of code in Lustre has to run all the tests just to be sure. I'm therefore thinking about how to create a test resource that can capture not only the history of the testing of Lustre, but also the history of each test. This will allow analysis of the value of the tests and of the test environment.
When a developer is developing, the test environment should deliver a test set that is customised to the particular development activity. This might mean, for example, that the environment can produce a directed, minimalist test set that automatically runs those tests most likely to fail first.
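One way the "most likely to fail first" ordering could work is to rank tests by their historical failure rate. The sketch below is a hypothetical illustration, not the planned implementation: the `history` record shape and the test names are invented, and the real ranking would presumably also weigh recency and the code being changed.

```python
from collections import defaultdict

def rank_tests(history):
    """Order tests so those most likely to fail run first.

    `history` is a hypothetical list of (test_name, passed) records
    drawn from past runs; the real schema is still to be decided.
    """
    runs = defaultdict(int)
    failures = defaultdict(int)
    for name, passed in history:
        runs[name] += 1
        if not passed:
            failures[name] += 1
    # Sort by observed failure rate, highest first.
    return sorted(runs, key=lambda t: failures[t] / runs[t], reverse=True)

# Invented example records for illustration only.
history = [
    ("sanity/0a", True), ("sanity/0a", True),
    ("recovery/17", False), ("recovery/17", True),
    ("replay/3", False), ("replay/3", False),
]
print(rank_tests(history))  # ['replay/3', 'recovery/17', 'sanity/0a']
```

A developer's directed test set would then be a prefix of this ranking, trimmed to the time budget at hand.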
Lots more to follow; my first step is to capture the test results into a database and then work forwards from there.
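As a first rough cut, that capture step might look something like the following. This is a minimal sketch assuming an SQLite store and an invented `test_result` table; the actual schema, field names, and status values are all still to be worked out.

```python
import sqlite3

# Hypothetical schema for capturing test results; the real tables
# and columns are yet to be designed.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_result (
        id        INTEGER PRIMARY KEY,
        run_id    TEXT NOT NULL,   -- identifies one validation run
        test_name TEXT NOT NULL,   -- e.g. 'sanity/0a'
        status    TEXT NOT NULL,   -- 'PASS', 'FAIL', 'SKIP'
        duration  REAL,            -- seconds
        logged_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO test_result (run_id, test_name, status, duration)"
    " VALUES (?, ?, ?, ?)",
    ("run-001", "sanity/0a", "PASS", 1.3),
)
conn.commit()

# Per-test history then falls out of a simple query.
rows = conn.execute(
    "SELECT test_name, status FROM test_result WHERE run_id = ?",
    ("run-001",),
).fetchall()
print(rows)  # [('sanity/0a', 'PASS')]
```

Once results accumulate per run and per test, the analysis of test value described above becomes a matter of querying this history.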