Hello, this is Amir Shehata with another quick tip on the LUTF.
Today I'd like to share my thoughts on the software development process.
We all learnt the waterfall model at school. I don't know about you, but in practice it's never that clean.
It's a lot more iterative.
As we write the HLD, we find we need to roll back and adjust the requirements. The detailed design is often passed over in favour of going directly to implementation, which I can't really disagree with.
A compromise is to develop the HLD in enough detail that we can jump directly into implementation.
In my experience, what often happens is that the test plan is left to the very end. When we get to it, we sometimes discover a missing requirement or a missing element of the design.
Under deadline pressure, what inevitably ends up happening is that the documentation goes stale.
When the product is later maintained by another developer, the documentation (if it exists at all) is sorely out of date, making that developer's life hard. Sound like a familiar story? It does to me.
A solution that definitely doesn't work is introducing more red tape and documentation. That just makes the developer's life even harder.
I contend that there is no zero-effort solution. Any solution will require adjusting the development workflow, which will look like extra effort. The trick is to minimize the cost of the solution and maximize the benefit.
Before I get to my proposal, a disclaimer: I'm a strong proponent of formalized documentation, but I'm flexible about the form that documentation takes.
One of the big problems with maintaining a test plan is that developers need to create a test plan document that is separate from the test scripts.
What inevitably ends up happening is that as the product evolves and bugs get fixed, more test cases are added, but the test plan is not updated.
Even more problematic is that the design diverges and the HLD and/or requirements are not updated.
The LUTF provides a way to resolve this issue.
Each test script should include a documentation block formatted as follows:
""" @PRIMARY: sample_01 @SECONDARY: sample_01 @DESCRIPTION: Simple Hello Lustre test """
It's enclosed in """
.
- @PRIMARY: is the primary requirement this script fulfills
- @SECONDARY: is the secondary requirement, if any, this script fulfills
- @DESCRIPTION: is a detailed description of the test case
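
For context, here is what a complete skeleton script carrying this block might look like. This is a minimal sketch: the run() entry point and its body are illustrative assumptions on my part, not necessarily what the LUTF expects.

    """
    @PRIMARY: sample_01
    @SECONDARY: sample_01
    @DESCRIPTION: Simple Hello Lustre test
    """

    def run():
        # run() is a hypothetical entry point; substitute whatever the
        # LUTF actually invokes. The real test logic is filled in later,
        # once the test plan has been reviewed and approved.
        print("Hello Lustre")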
By specifying these, the LUTF can generate automatic documentation for the test scripts. We no longer need to maintain a separate test plan. Our test scripts become our test plan.
This method provides the glue between the code and the documentation. As bugs are fixed or the feature is updated, the developer should create a test case to cover the changes made.
Each test case should have a documentation block.
The LUTF then provides a method to extract the documentation block and create a table which can then be imported to the wiki. In this way it becomes much easier for the developer to maintain appropriate documentation without much overhead.
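
The extraction itself is built into the LUTF, but the idea is simple enough to sketch. The following standalone Python approximation is only an illustration; the function names, regular expressions, and CSV columns here are my own assumptions, not the LUTF's actual implementation.

    import csv
    import glob
    import re

    # The tags expected in each script's documentation block.
    TAGS = ("PRIMARY", "SECONDARY", "DESCRIPTION")

    def extract_doc_block(path):
        """Pull the @TAG: entries out of a script's leading docstring."""
        with open(path) as f:
            text = f.read()
        entry = dict.fromkeys(TAGS, "")
        # Grab the first triple-quoted block in the file.
        block = re.search(r'"""(.*?)"""', text, re.DOTALL)
        if block:
            for tag in TAGS:
                # Capture everything after the tag up to the next tag (or
                # the end of the block), so descriptions may span lines.
                m = re.search(r"@%s:\s*(.*?)(?=@\w+:|\Z)" % tag,
                              block.group(1), re.DOTALL)
                if m:
                    entry[tag] = m.group(1).strip()
        return entry

    def create_docs(script_dir, csv_path):
        """Collect the documentation blocks of a suite into one CSV table."""
        with open(csv_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["script"] + [t.lower() for t in TAGS])
            for script in sorted(glob.glob(script_dir + "/*.py")):
                entry = extract_doc_block(script)
                writer.writerow([script] + [entry[t] for t in TAGS])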
One way of working would be to create skeletons of the scripts that include just the documentation block above, then run the create_docs() command on the suite:

    suites['samples'].create_docs("samples.csv")
This will generate a CSV file with all the documentation.
This can then be imported directly into the Confluence wiki.
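Given the sample documentation block above, one row of that CSV might look something like the following; the exact column layout is up to the LUTF, so treat this purely as an illustration (sample_01.py is a hypothetical file name):

    script,primary,secondary,description
    sample_01.py,sample_01,sample_01,Simple Hello Lustre test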
Once the test plan is reviewed, updated and approved, the script logic can be written in a separate phase.
Let's run through this process to show how easy it is.