
Video Transcript

Hello, this is Amir Shehata with another quick tip on the LUTF.

Today I'd like to express my thoughts on the software process.

Problems with the Traditional Software Development Model

We all learnt the waterfall model at school. I don't know about you, but in practice it is never that clean.

It's a lot more iterative.

As we write the HLD, we find we need to roll back and adjust the requirements. The detailed design is often passed over in favour of going directly to implementation, which I can't really disagree with.

The Test Plan is written at the very end.

Bug fixes happen during implementation and testing, and often result in design changes and even requirement changes. We end up with a bit of a messy development process.

Under deadlines what inevitably ends up happening is the documentation goes stale.

When the product is then maintained by another developer, the documentation (if it exists at all) is sorely out of date, making the developer's life hard. Sound like a familiar story? It does to me.

A solution that definitely doesn't work is introducing more red-tape and documentation. It'll just make the developer's life even harder.

I contend that there is no zero-effort solution. Any solution will require adjusting the development workflow, which will look like extra effort.

The trick is to reduce the cost of the solution and maximize the benefit.

Proposed Solution

My proposed solution is to bring the test plan development to the centre of the Software Development Process.

A well formed requirement must be testable. Therefore we can represent each requirement as a test case.

Each requirement must be designed. The details of the design can also be added to the test case.

We write the HLD (as we would normally) and add extra test cases as needed. The test cases would test existing requirements or capture newly discovered requirements we didn't think of before.

We move to implementation (either after the HLD is completed or in parallel) and add extra test cases as needed. Again, these test cases can test existing requirements or add new ones.

Once the feature is complete, we should have a set of test scripts which represent the requirements, the test plan and parts of the design.

The LUTF Involvement

Let's now imagine that all these test scripts, which include:

  • the requirements
  • the details of how the requirements will be designed, and
  • the test case description

are LUTF test scripts.

We can use the LUTF to automatically extract all this information from the test scripts and generate our documentation.

As you can see we no longer need to keep 3 different documents: A requirements document, an HLD document and a Test plan document.

With this workflow, the requirements, the design, the test plan and the test plan implementation are collapsed into one form: the LUTF test scripts. From the test scripts we can generate our documents.

I believe this reduces the overhead and increases our chances of keeping documentation consistent with the implementation.

The LUTF Documentation Block

Each test script should include a documentation block. Not all elements shown here are needed, but the more complete the block is, the more complete the generated document will be.

"""
@PRIMARY: Primary Requirement ID
@PRIMARY_DESC: Primary Requirement ID description
@SECONDARY: Secondary Requirement ID if it exists
@SECONDARY_DESC: Secondary Requirement ID description if it exists
@DESIGN: Design notes and description
@TESTCASE: Test case description
"""

The block is enclosed in """.
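For illustration, here is what a minimal LUTF test script carrying such a block might look like. The requirement IDs and descriptions are invented for the example, and the `run()` entry point is a placeholder standing in for the real test logic:

```python
"""
@PRIMARY: REQ-001
@PRIMARY_DESC: Messages larger than the negotiated MTU must be rejected
@SECONDARY: REQ-002
@SECONDARY_DESC: Rejections must increment the drop counter
@DESIGN: Compare the message size against the negotiated MTU before sending;
    bump the drop counter on rejection
@TESTCASE: Send an oversized message and verify it is rejected and counted
"""

def run():
    # Placeholder for the actual test logic exercising the feature.
    return True
```

Because the block is an ordinary module docstring, it costs nothing at runtime and travels with the code through every review and refactor.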

The LUTF can automatically generate the bulk of the requirements, HLD and test plan documentation from the test scripts.
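The extraction step is conceptually simple. This is a sketch of how it could work, not the actual LUTF implementation: find the triple-quoted block, then collect each `@TAG:` field, folding indented continuation lines into the preceding tag:

```python
import re

# The tags recognized in a documentation block.
TAGS = ("PRIMARY", "PRIMARY_DESC", "SECONDARY", "SECONDARY_DESC",
        "DESIGN", "TESTCASE")

def parse_doc_block(text):
    """Return a dict mapping tag name -> value for one test script."""
    # Grab the first triple-quoted block in the script.
    m = re.search(r'"""(.*?)"""', text, re.DOTALL)
    if not m:
        return {}
    fields, current = {}, None
    for line in m.group(1).splitlines():
        tag = re.match(r'@(\w+):\s*(.*)', line.strip())
        if tag and tag.group(1) in TAGS:
            current = tag.group(1)
            fields[current] = tag.group(2)
        elif current and line.strip():
            # Continuation line: append to the previous tag's value.
            fields[current] += " " + line.strip()
    return fields
```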

This method provides the glue between the code and the documentation. As bugs are fixed or the feature updated, the developer should create new test cases to cover the changes made and regenerate the documentation.

The LUTF can generate documentation with the create_docs() command.

suites['samples'].create_docs("samples.csv")

This will generate two CSV files: one is the requirement document with the design, and the other is the test plan.
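The generation step might look something like the sketch below (a hypothetical shape, not the LUTF's internal code): given the parsed documentation blocks, one dict per test script, emit the two CSV files:

```python
import csv

def write_docs(blocks, req_path, plan_path):
    """Write the requirements/design CSV and the test plan CSV."""
    with open(req_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["Requirement ID", "Requirement Description",
                    "Design Notes"])
        for b in blocks:
            w.writerow([b.get("PRIMARY", ""), b.get("PRIMARY_DESC", ""),
                        b.get("DESIGN", "")])
    with open(plan_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["Primary Requirement ID", "Secondary Requirement ID",
                    "Test Case Description"])
        for b in blocks:
            w.writerow([b.get("PRIMARY", ""), b.get("SECONDARY", ""),
                        b.get("TESTCASE", "")])
```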

Requirement Document

Requirement ID | Requirement Description


HLD

Requirement ID | Design Notes


Test Plan

Primary Requirement ID | Secondary Requirement ID | Test Case Description



NOTE: Each secondary requirement in a test case should be the primary requirement of another test case. The idea here is that some test cases might touch on more than one requirement.
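That cross-referencing rule is easy to check mechanically. A small sketch (a hypothetical helper, not part of the LUTF API) that flags secondary requirements with no test case of their own:

```python
def missing_secondaries(blocks):
    """Return secondary requirement IDs that are never a primary ID.

    Each element of blocks is a parsed documentation block (a dict).
    """
    primaries = {b["PRIMARY"] for b in blocks if b.get("PRIMARY")}
    return sorted({b["SECONDARY"] for b in blocks
                   if b.get("SECONDARY") and b["SECONDARY"] not in primaries})
```

Running a check like this before generating the documents keeps the test plan's cross-references honest.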

The CSV files can then be imported directly into the Confluence wiki or embedded in a document.

Once the test plan is reviewed, updated and approved, the script logic can be written in a separate phase.

Caveat

Of course, life is always more complicated than it appears on paper.

Diagrams and other media might be needed to explain the requirements and the design, and these cannot be included in a text-only test script.

However, my argument is that the above process can alleviate much of the maintenance of the documents required.

Let's take the Requirements document as an example.

The first section can have the overview and diagrammatic explanations required. The second section is the table detailing all the requirements.

Let's look at how all this would look in the LUTF.

Demonstration

Let's run through this process to show how easy it is.
