Lean Quality Assurance: Preventing Defects Down The Line

15th April 2012 | Quality Assurance

Last week, I had the fantastic opportunity to travel down to London to attend a two-day course on Lean Quality Assurance at the British Computer Society headquarters, just off Covent Garden. The course itself was run by Tom and Kai Gilb, who are pre-eminent in their field. Tom has been developing and applying his methods with a great deal of success in major multinationals for over forty years. His son, Kai, joined him around twenty years ago and quickly gained an equal measure of respect. Indeed, Tom is so well thought of that he was unanimously appointed an honorary fellow of the BCS, an honour last bestowed two years ago. I was lucky to be there when he was told, and to have a few celebratory drinks with him.

So what did I learn? A great deal, certainly too much to go into here, but amid numerous anecdotes we delved into some key processes and methods. I started off purely testing software and gradually moved into QA, almost without realising it. My main gripe with testing has always been that it’s a job that traditionally gets left to the end and squeezed for time, so that, typically, a 60-to-80-hour week would be needed to ensure the software was fit for purpose.

With that in mind, I started to look up the chain to see where improvements could be made at an earlier stage, so that there would be less pain further down the line. I won’t take sole credit for that: I’m sure a lot of us have done the same thing. Tom and Kai have taken this a step further than my tentative forays, developing and improving processes and methods that are designed to build quality in right at the start, so much so that the traditional role of testing becomes almost unnecessary.

Quantification

‘Quantification is an ABSOLUTE PREREQUISITE for quality control’ (Tom Gilb)

In a recent project, it was an uphill struggle to persuade business analysts and the client that their requirements had to be quantified. All too often, the requirements were vague or aspirational, more in line with business objectives or goals. Consequently, there was a lot of interpretation down the line of what was required, and even confusion over whether the final requirement was actually what the client had originally asked for.

The Gilb policy is simple. Everything, but everything, can be quantified. It may not be measurable, but you can certainly quantify what it is that you want. The day after the course, I lay awake in bed, running through an entirely hypothetical (though no doubt recognisable) scenario:

‘The new system must run faster or better than the old one.’

With careful questioning and prompting, the goal of the requirement can be derived.

‘We need to have the data processed by the start of work, every weekday morning.’
‘The start of work is 8 a.m.’
‘We get the data at midnight at the latest.’
‘The most data we get is 200 MB.’

The quantified attributes of any quality requirement can then be broken down and presented using a variation of the outline below:

{name tag of the objective}

Ambition: {give overall real ambition level in 5-20 words}
Version: {version number} {dd-mm-yy}
Owner: {the person or instance allowed to make official changes to this requirement}
Type: {quality | objective | constraint}
Stakeholder: {those who can influence your profit, success or failure, e.g. Client, Programme Manager, Finance Director}
Scale: {a defined unit of measure, with [parameters]}
Meter [{for what test level?}]

Benchmarks {The Past}

Past [ ] {estimate of past level} {source}
Record [{where}, {when}] {estimate of record level} {source of record data}
Trend [{future date}, {where?}] {prediction of level} {source of prediction}

Targets {The future needs}

Wish [ ] {source of wish}
Goal […] {target level} {source}
Value [Goal] {what this impacts, or how much value it creates}

© www.gilb.com

Once a quality requirement is quantified in this way, it can be examined to see whether it can be measured. Irrespective of this, quality can, and must, be quantified, so that tangible benefits can be seen in the delivered end product. Thus, my original example could be broken down to:

Data.Processing

Ambition: To process all incoming data before the beginning of each working day
Version: 1.0 14/04/2012
Owner: Matthew Cunliffe
Type: Quality
Stakeholder: Client, Operations Manager
Scale: That [MB] of data can be processed and stored in the database between [start time] and [end time]
Meter: 10 iterations, 200 MB, 8 hours

Benchmarks

Past: 150 MB, midnight to 8:00. Source: Operational Audit Data
Record: 180 MB, midnight to 8:30, on 13/01/2011. Source: Operational Audit Data
Trend: 200 MB, midnight to 9:30, by 01/01/2013. Source: Operational Audit Data

Targets

Wish: 300 MB, midnight to 8:00. Source: Operations Manager, Client
Goal: 200 MB, midnight to 8:00. Source: Operations Manager, Client

In the example above, the Wish is where the stakeholders would like to be, while the Goal is the acceptable pass level. We now have a clearly defined requirement, with a scale and a method of measurement that are clear and unambiguous.
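To make the structure concrete, here is a minimal sketch of how such a quantified requirement might be captured in code. This is my own illustration rather than anything from the Gilb material: the QuantifiedRequirement class, its field names and the meets_goal check are all assumptions, loosely mirroring the template above.

```python
from dataclasses import dataclass
from typing import List, Optional

# A minimal sketch of a Planguage-style quantified requirement.
# Field names loosely follow the template above; the class itself
# and the meets_goal check are my own assumptions.

@dataclass
class QuantifiedRequirement:
    tag: str
    ambition: str
    owner: str
    req_type: str
    stakeholders: List[str]
    scale: str                     # defined unit of measure and parameters
    past: Optional[float] = None   # benchmark: past level
    record: Optional[float] = None # benchmark: best level ever recorded
    trend: Optional[float] = None  # benchmark: predicted future level
    goal: float = 0.0              # target: acceptable pass level
    wish: Optional[float] = None   # target: aspirational level

    def meets_goal(self, measured: float) -> bool:
        """True if a measured level reaches the Goal (the pass threshold)."""
        return measured >= self.goal


data_processing = QuantifiedRequirement(
    tag="Data.Processing",
    ambition="Process all incoming data before the beginning of each working day",
    owner="Matthew Cunliffe",
    req_type="Quality",
    stakeholders=["Client", "Operations Manager"],
    scale="MB of data processed and stored between midnight and 08:00",
    past=150, record=180, trend=200,
    goal=200, wish=300,
)

print(data_processing.meets_goal(210))  # True: 210 MB by 08:00 meets the 200 MB Goal
```

The point of recording the requirement this way, rather than as free prose, is that the pass check at the end becomes a mechanical comparison instead of a debate.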

Lean QA Inspection

Throughout the System Development Lifecycle, artefacts are created, whether contracts, requirements, specifications, code, plans or the like. In each case there is the probability (or indeed, likelihood) that errors will be introduced, either through interpretation of sources or directly. Typically, in any reasonable project, documentation will be reviewed to ensure that it conforms to templates and standards and that it doesn’t contradict itself or its sources. That can take time, with each document taking each reviewer several hours to review, resulting in many man-hours of effort. Tom and Kai pointed out that even a few hours on each document wasn’t sufficient to capture all the potential errors. Enter lean QA inspection. For each set of documents, rules are established by the relevant stakeholders (for example, the lead BA would identify the rules necessary for use cases). At a high level, the rules aim for:

Clarity
Unambiguity
Consistency
Traceability
A clear separation between requirement and solution
A clear separation between performance, functionality and design

A target is also set for the number of major errors that are acceptable in a sample of the document.

An inspection meeting is then held where, during a forty-minute period, each reviewer concentrates on validating the same single page (or less) against the rules, taking into account related documentation and sources. If a document exceeds the number of acceptable major errors, it is rewritten in its entirety by the author. The Gilbs suggested that the identified errors did not need to be supplied to the author, but to my mind they would help the author identify what needed rectification.

During the course, Tom showed us real-life examples of this process applied to an author’s documentation. The improvement in an author’s quality was quick, with a rapid reduction in errors after three or four documents (bearing in mind that a sample of the first document might contain fifty identified errors, dropping to four or five by the fourth document). Tom also pointed out that even though the sampling could return a high number of errors in a document, the method was still only 33 to 50 per cent effective at catching them.

The upshot of this method is that it saves a huge amount of time for the reviewers; places the responsibility for delivering quality on the author, rather than relying on the reviewers to capture the errors; and increases the error capture rate. By driving in quality at each stage in this manner, the likelihood of faults manifesting in the delivered software is greatly reduced.
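To illustrate the arithmetic behind that exit decision, here is a rough sketch of how an estimated defect density might be computed from a sample. Only the 33-to-50-per-cent effectiveness range comes from the course; the threshold value and function names are my own illustrative assumptions.

```python
# A rough sketch of the inspection exit decision described above.
# Assumptions: max_majors_per_page is an illustrative threshold only;
# the 33-50% checking effectiveness range is the figure quoted on the course.

def estimated_majors_per_page(majors_found: int, pages_sampled: float,
                              effectiveness: float = 0.33) -> float:
    """Scale up the majors found in a sample, assuming reviewers catch
    only a fraction (33-50%) of the defects actually present."""
    return majors_found / pages_sampled / effectiveness

def exit_decision(majors_found: int, pages_sampled: float,
                  max_majors_per_page: float = 10.0) -> str:
    """Accept the document, or send it back to the author for a rewrite."""
    density = estimated_majors_per_page(majors_found, pages_sampled)
    if density > max_majors_per_page:
        return f"~{density:.0f} majors/page estimated: rewrite the document"
    return f"~{density:.0f} majors/page estimated: accept and continue"

# 15 majors found in a one-page sample, at a pessimistic 33% effectiveness:
print(exit_decision(majors_found=15, pages_sampled=1.0))
# ~45 majors/page estimated: rewrite the document
```

The striking part is the scaling step: because checkers miss half to two-thirds of what is really there, fifteen majors found on a single sampled page implies dozens more lurking in it.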

A key case study was presented by a regular practitioner of Gilb techniques, Dick Holland, who demonstrated how a product was turned from being discontinued to cornering the entire UK market for that type of product within four years. By promising clients that the faults per client would be reduced year on year, and by giving demonstrable targets, confidence was built in the end product. In the end, they actually exceeded their fault-reduction target, going from roughly one fault per client per week to one every eight months.

Defect Prevention Process

Testing software isn’t just about validating that a system conforms to the design documents (be they requirements, functional or technical), but about finding out why faults have arisen. This is carried out by root cause analysis (RCA), to determine where each fault originated.

The Defect Prevention process (DPP) takes this one step further, as root cause analysis can only give a high level indication of where the faults may have originated. RCA is also typically confined to the test execution phase, whereas DPP can be applied at any point in the process.

A half-hour meeting of stakeholders is held, including the author responsible for the errors (be they in a document or in code), at which a sample of ten errors is quickly discussed to identify why each occurred in the first place. This could be for any reason, even that the author has a sick child and hasn’t been able to concentrate on the document.

A further ten minutes is then spent coming up with ideas about how these errors can be prevented in future. These solutions are then trialled within the group and, if proven successful, can be rolled out to a wider audience.
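As a hypothetical illustration of the meeting’s first step, the sketch below tallies a ten-error sample by root cause, so that the ten-minute prevention brainstorm can focus on whichever causes dominate. The error IDs and cause labels are invented for the example.

```python
from collections import Counter

# A hypothetical sketch of the first step of a DPP meeting: tally a
# ten-error sample by root cause so the prevention brainstorm can focus
# on the causes that dominate. Error IDs and cause labels are invented.

sampled_errors = [
    ("REQ-014", "ambiguous source requirement"),
    ("REQ-022", "ambiguous source requirement"),
    ("REQ-031", "ambiguous source requirement"),
    ("UC-003", "template not followed"),
    ("UC-007", "template not followed"),
    ("UC-011", "source document out of date"),
    ("DES-002", "solution mixed into requirement"),
    ("DES-005", "solution mixed into requirement"),
    ("TST-001", "unclear scale of measure"),
    ("TST-004", "author under time pressure"),
]

cause_counts = Counter(cause for _, cause in sampled_errors)
for cause, count in cause_counts.most_common():
    print(f"{count:2d} x {cause}")
```

Even on a sample this small, one or two causes usually account for most of the errors, which is what makes the short, focused meeting worthwhile.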

Conversion

Have I been converted to the Gilb method? As Tom and Kai are the first to admit, their ideas are just one approach among many, and can easily complement the Capability Maturity Model or agile development (I was nodding vigorously as Tom told us that agile development is about getting the job done quickly, not necessarily about building in quality). All methods must be assessed to see if they are appropriate for the business model in which you are working. However, I can see that there are huge benefits to be gained, provided you can get the buy-in of key stakeholders. With that in mind, I’m looking forward to applying these processes as soon as I can.

With thanks to Tom and Kai Gilb, Dick Holland, and the BCS Quality Specialist Group.

Matthew Cunliffe


Matthew is an IT specialist with more than 24 years’ experience in software development and project management. He has a wide range of interests, including international political theory; playing guitar; music; hiking, kayaking and bouldering; and data privacy and ethics in IT.
