5 QUALITY

LEARNING OUTCOMES

When you have completed this chapter you should be able to demonstrate an understanding of the following:

definitions of ‘quality’;

quality control and quality assurance;

measurement of quality;

detection of defects during the project life cycle;

quality procedures: entry, process and exit requirements;

defect removal processes, including testing and reviews;

types of testing (including unit, integration, user acceptance and regression testing);

the inspection process and peer reviews;

the principles of ISO 9000 quality management systems;

evaluation of suppliers.

5.1 INTRODUCTION

The key quality concern for IT projects is providing customers with the systems they need and which meet their requirements at a price they can afford. In order to achieve this, quality needs to be built into a product. It cannot be added to a product after it has been created – except with great difficulty and cost. It is like trying to alter the foundations of a building once it is complete. To build in quality requires a commitment from all parties to the project, from the project sponsor through all the levels of management to the technical staff, customers, users and the clerical support staff. Philip Crosby’s seminal book on quality opened:

Quality is free. It’s not a gift, but it is free. What costs money are the unquality things – all the actions that involve not doing the job right in the first place.

(Philip B. Crosby, Quality is Free, New York: Mentor, 1980)

Crosby was not arguing that increasing the quality of a product did not cost money; rather, that the costs of remedying lapses in quality would be even greater. There is a relationship between quality, cost and time. A higher level of quality can increase the duration of a project and/or its cost. This is called the cost of conformance. But the more effort goes into quality, the less expenditure is needed on correcting and compensating for faults at the end of the project – the cost of non-conformance. The project manager must balance the two types of cost.

It is important to distinguish between quality control and quality assurance. Quality control is concerned with the practical activities that check the quality of a deliverable or intermediate product, for example that manufactured light bulbs actually work. Assurance activities do not check product quality directly but instead check that the required steps that establish product quality are in place and have been carried out. This might check, for example, that the process standards belonging to the activity have been followed, such as a procedure for checking the purity of raw materials. This could involve examining evidence such as testing plans and sign-off documents. Section 5.5 explores this in more detail.

5.2 DEFINITIONS OF QUALITY

A definition of quality is a degree or level of excellence, as in the phrase ‘high-quality goods’. This definition is subjective: for example, when comparing cars, people argue about the quality of different makes.

Another definition is conformance to standard. Within a project process there will be certain standards to which developers are expected to conform. These standards help reassure the project’s clients that they will get value for money for the project’s products. However, the generally accepted definition which should apply to all projects is that the deliverables should be ‘fit for purpose’. Again referring to cars: if you need to get to work on time, a Rolls-Royce may not be the best vehicle to navigate through heavy traffic – a scooter might be more effective. The original international standard on quality, ISO 8402:1994, formally defined quality as ‘the totality of features and characteristics of a product or service which bear on its ability to satisfy stated or implied needs’.

One aspect of this ‘fit for purpose’ definition, reliability, shows how the concept applies. If the Water Holiday Company’s booking system is out of action for a time, it would be annoying but not life threatening. However, if the control systems in an aircraft in flight fail, that would be disastrous. Hence, the effort devoted to making sure that the aircraft system does not fail would be greater than that required for the Water Holiday Company integration project. The required quality varies depending on the type of system under development and the money the customer is prepared to pay.

Those responsible technically for a project must advise the customer on the benefits of a well-engineered system, but the person paying usually calls the tune. However, suppliers also have a professional and legal commitment to the general public to ensure that the systems they produce are safe. The possibility of an aircraft crashing in an urban area makes us all stakeholders in aircraft safety.

As IT systems become more complex, it is impossible to ensure that they will never fail. Thus, it is important to examine the ways in which the systems under development will behave in the event of various types of failure. This will directly influence the development process and how quality is measured. For example, a control system for a nuclear power plant will be designed to ‘fail safe’ – that is, in the event of a systems failure it will revert to a safe state, for instance by closing down. Obviously, such a requirement must be specified in the quality criteria for the system. Where something is inherently dangerous, as in this example of the nuclear power plant, the developers would have a duty of care, not just to the project sponsor, but to the world at large. It is also likely that governments, either individually or collectively, will enact legal requirements in such circumstances with which there must be compliance.

The testing of deliverables for required qualities needs the deliverable to actually exist. This is rather late in the development life cycle. It is therefore useful to identify process standards governing how products are created that can be applied at an earlier stage.

5.3 QUALITY CHARACTERISTICS

The definition of ‘fitness for purpose’ needs elaborating so that it can be applied practically. Another international standard, SQuaRE (standing for Software Quality Requirements and Evaluation), is documented in the ISO 25000 series of standards and defines a set of standard characteristics by which software quality can be measured. (This has replaced a previous version, ISO 9126.) It specifies the following high-level quality characteristics:

Functional suitability: Does the system as delivered meet the functional requirements of the user? Meeting user expectations is more than just meeting a specification.

Performance efficiency: For example, how fast does the system execute functions? What hardware resources does it need? How many users can access the system at one time?

Compatibility: Can the new system operate with existing systems? In particular, can it share information with other applications?

Usability: Is the system straightforward to use? How much training is required for someone to use it?

Reliability: How often does the system break down? How long does it take to put right? In general, software that has been in use by a large number of people over a long period of time is likely to be reliable. This is because over time most of the faults will have been detected and corrected. Hence the use of the term ‘software maturity’.

Maintainability: Can this software application be modified easily and without introducing unexpected errors?

Portability: How easy is it to move the software from the particular platform on which it was developed to another environment?

The standard itself goes into much more detail for each heading. The top-level qualities are broken into sub-qualities that contribute to the higher-level quality.

Not all qualities are relevant for all projects and project managers need to select those useful for their projects. For example, in the Water Holiday Company booking system, response time in answering queries about the availability of boats is an important part of usability. For the application, engineering measurements have to be mapped to values on a scale reflecting user satisfaction – for example, a response time under five seconds might correspond to ‘acceptable’.
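A minimal sketch of such a mapping is shown below. The five-second ‘acceptable’ threshold follows the example in the text; the other thresholds and the function name are purely illustrative:

```python
def satisfaction(response_seconds: float) -> str:
    """Map a measured response time to a user-satisfaction rating.

    The five-second 'acceptable' threshold follows the example in the
    text; the other values are hypothetical and would be agreed with
    the users as part of the quality criteria.
    """
    if response_seconds < 2:
        return "good"
    if response_seconds < 5:
        return "acceptable"
    return "unacceptable"

print(satisfaction(3.2))  # prints "acceptable"
```

In practice the thresholds themselves become quality criteria: specific, measurable and agreed with the users in advance.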

5.4 QUALITY CRITERIA

The level of product quality required has to be specified at the beginning, before the product is developed. In Chapter 2, the concept of a product definition or description was introduced. Each product definition included a section headed quality criteria. These criteria enable checks that a product is fit for its purpose, whether as a final deliverable or as an input to the creation of other products. The product definition refers to types of product, not specific instances. For example, for a software module, processing test data correctly could be a quality criterion, but the actual test data will be based on the particular module’s specification. Product definitions are produced at the start of projects and specifications will elaborate and extend the initial criteria.

Effective quality criteria must be specific, measurable and achievable.

As an example, the aircraft control software could have a specific requirement that the product must never fail. It is measurable by running the product continuously until it fails. Unfortunately, it can only be demonstrated that the requirement has not been met. We can never prove that it will not fail at some point in the future. Hence the ‘never’ requirement is not achievable. We can, however, provide an estimate of the probability of failure which will get smaller as testing intensifies. As noted with the nuclear power station, we can also plan for the possibility of failure, for example, by allowing a reversion to manual control.
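The probability estimate mentioned above can be sketched under a simple constant-failure-rate (exponential) assumption. This is a simplification – real reliability engineering uses far more sophisticated models – and the figures below are invented:

```python
import math

def estimated_failure_probability(failures: int, test_hours: float,
                                  mission_hours: float) -> float:
    """Estimate the probability of at least one failure during a mission.

    Assumes a constant failure rate (exponential model) estimated from
    testing -- a deliberate simplification for illustration only.
    """
    rate = failures / test_hours          # failures per hour of testing
    return 1 - math.exp(-rate * mission_hours)

# 2 failures observed in 10,000 hours of testing; a 10-hour flight
p = estimated_failure_probability(2, 10_000, 10)
print(f"{p:.4%}")  # prints 0.1998%
```

As testing intensifies without further failures, the estimated rate (and hence the estimated probability of failure) falls, but it never reaches zero – which is exactly why the ‘never fails’ criterion is unachievable.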

ACTIVITY 5.1

Refer back to the discussion about product definitions (Section 2.2.1). Draw up quality criteria that can be used to assess a product definition (not the product it defines). Add any other headings that you consider should be standard for all product definitions/descriptions. Specify how the criteria can be measured.

5.5 QUALITY CONTROL VERSUS QUALITY ASSURANCE

We now discuss how the quality criteria of a product created by a project are checked. Ideally, the project takes place in an organisation committed to quality with standards already in place for certain activities. If this is not so, part of project set-up will be the creation of the framework for managing quality.

ACTIVITY 5.2

The following are examples of good and bad quality criteria:

All screen layouts should have similar layouts and use the same terminology.

Screens should be user friendly.

The system should be able to handle 50 transactions.

The system should allow for 20 users at any one time without degradation.

The response time should not be longer than three seconds.

Comment on the effectiveness of each of these quality criteria.

ACTIVITY 5.3

Maintainability is defined as a quality characteristic. How can it be measured?

ACTIVITY 5.4

How can the reliability of a system be measured?

An organisation that carries out many similar projects can usefully develop organisational standards for product definitions and quality practices applicable, with appropriate adjustments, to all its project work. This quality framework is called the quality management system (QMS). It may be based on the ISO 9000 series of standards (see Section 5.10). Within the QMS, there will be a quality strategy and quality assurance processes. They are reviewed and, if necessary, modified to meet the specific requirements of each project.

A quality strategy defines the QMS and includes:

procedures and standards for creating a project quality plan;

definitions of quality criteria;

quality control procedures;

quality assurance procedures;

a statement of compliance with or allowed deviation from industry standards;

acceptance criteria;

an allocation of responsibilities for defining or undertaking quality-related activities.

The methods for exercising quality control are discussed later, but generally these are either practical tests or a review. As seen in Section 5.1, quality assurance stands alongside quality control but is external to it. Quality assurance is an audit to confirm that proper procedures are in place and applied correctly. The correctness of a software component can be verified by running tests and checking the results. That is control. Assurance would involve checking that the testing has actually been done by looking at evidence such as copies of the expected and actual results of testing.

Quality control would normally be an integral part of the project team’s work. The quality assurance activities, on the other hand, are usually carried out by people outside the project team who report directly to the steering committee/project board. This separation of responsibilities helps to ensure that the process is transparent and reduces possible conflicts of interest.

Quality criteria are specified for each component. As each component is completed, a control process is undertaken to ensure that the criteria have been met. This is followed, at an appropriate time, by an assurance process which confirms that the agreed procedures have been followed and that all products have undergone the necessary checks.

You will recall from Chapter 1 that the systems development life cycle has a number of stages – for example, requirements analysis is followed by systems design, followed by construction and testing. Each stage contains quality control processes, usually reviews, and a quality assurance process takes place at the end of each stage. If proper quality control is found to be lacking, corrective action may be mandated. The quality control and, if necessary, the quality assurance processes are repeated until the quality criteria are met.

5.6 QUALITY PLANNING

The creation of a quality plan is vital to the success of a project. It specifies the particular standards that apply to the project. Ideally, they should be taken from existing organisational standards. These in turn may derive from industry standards. However, modifications to organisational standards may be needed because of the special characteristics of a project.

The plan also specifies how, when and by whom the quality control activities should be undertaken, the quality assurance processes to be followed and who will carry them out. It may also include configuration management and change control procedures (see Chapter 4).

The quality plan itself is subject to quality control and quality assurance processes.

It is common for the quality plan to be integrated with the project management plan described in Section 1.8.1.

5.7 DETECTING DEFECTS

Quality control of software products tends to use testing to establish the quality of deliverables to the client, and reviews for the intermediate products such as requirements, specifications and design documents. Software can be subject to both testing and review.

5.7.1 The V model

The V model is a useful model of the systems development process (see Figure 5.1), in which the solid lines represent the forward progress of the project and the dashed lines represent the way in which quality control is exercised.

There are two quality control processes at work: one between stages and the other across the V. For example, the requirements specification describes the functions and quality attributes required in the system. This should include an acceptance test plan showing how the requirements are going to be assessed in the final system.

Using the acceptance test plan, testing can be undertaken to demonstrate that the final system to be delivered meets the requirements. This link between the requirements specification and user acceptance testing is shown by the dashed arrow between the two, identified as the ‘acceptance test plan’.

Figure 5.1 A simplified V model


Systems design follows requirements specification as shown by an arrow. Systems design has two dashed arrows: one across to systems testing and the other back to requirements specification. The horizontal arrow shows the systems testing that needs to be carried out to validate the design after it has been implemented. The arrow to the requirements specification indicates that the design process may discover errors in the systems requirements. For example, gaps may be found in the definition of the requirements, or two requirements may be found to be inconsistent. The same kind of links to the previous source document and to the appropriate type of testing are applied to each stage of development.

5.7.2 Process requirements

The importance of product quality criteria has been stressed. However, the qualities required in a product are achieved by following the correct processes when creating it. This makes it necessary to specify the process requirements for each stage and activity.

Entry requirements state what must exist before the stage or activity can begin. For example, before design can begin, the requirements documentation must be agreed and signed off and any design techniques to be used must be specified. If design begins before requirements are agreed, this will lead to problems if the requirements are changed during design. The design may then have to be reworked at additional cost; worse still, the need to amend the design in line with changes in the requirements may be forgotten.

Implementation requirements define how the process should be done. For example, in design, implementation requirements may specify the use of techniques such as entity modelling or logical process modelling. Implementation requirements also specify the software tools that are to be used.

Exit requirements indicate what should be in place for a successful sign-off of the activity. The exit requirements for design documentation are that it:

is complete;

has been reviewed;

meets the standards agreed;

covers all the requirements for this component;

leaves no outstanding issues.

5.7.3 Defect removal process

Before a defect can be removed, it must be identified. This is relatively straightforward – though more costly – in the later stages of development, when test cases can be run through the system to see if they give the expected results (dynamic testing). In the earlier stages of a project, quality checking could involve the following:

desk checking;

document review;

walkthrough;

peer review;

inspection;

pair programming;

static testing.

When specifying quality criteria that apply to, say, a software specification, the technique for checking if those criteria are met should be specified, along with the type of staff and tools needed.

Desk checking

Desk checking is the most basic of the review activities. When you produce something, you yourself should be the first person to check it. After all, you are the person who knows most about what you were trying to do. This may mean reading a document through and checking for mistakes and ambiguities, or working through the logic of software to identify slips in the code. It will, hopefully, remove many trivial errors before the product is subjected to more rigorous scrutiny.

ACTIVITY 5.5

Read through the following table and identify any errors:

Data item       Range   Cross-checks
Time: hours     0−23
Time: minutes   0−60
Time: seconds   0−60
Day             1−31    Cannot be greater than 30 if month = 2, 4, 6, 7, 9 or 11
Month           1−12
Year            >2010

Document review

This too is desk checking, but by people other than the author to ensure that the document meets specified quality criteria, such as appropriateness and clarity. The readers may have particular concerns: a user could be concerned that a requirement satisfies the daily needs of their job; a coder might be concerned that it gives a clear indication of what the software must do. Products created in a project are not self-contained, and must be compatible with other products.

Concerns might include: is it complete, unambiguous, self-consistent and consistent with related documents? Is it clear? Are all technical terms properly used and understood? Does it conform to the agreed layout?

Walkthrough

A walkthrough is a particular technique where a scenario is created of a real-world situation (called ‘user stories’ in Agile). How the new system will deal with the situation is worked through and discussed, step by step. The participants could be drawn from both technical and user groups, each of which may have a different view on the proposed transaction. This is particularly useful in assessing early proposals for interface designs.

Peer review

A peer review is a type of document review that is often carried out on a design document or actual code. The author’s co-workers (or peers) examine copies of the document and make comments about it. The issues considered include:

Is the proposed design technically feasible?

Is there an easier or better way to achieve the same objectives?

Does the design conform to company standards for such processes?

Can the design communicate with other parts of the system?

Does the design cover all the requirements that should be included?

Are there any ambiguities?

The peer reviewers of software often carry out a version of a walkthrough where they dry-run the code – that is, take some test input data and manipulate it on paper according to the instructions in the code.

Peer reviews can be done relatively informally within the project team. However, the time needed by the reviewers needs to be officially scheduled – after all, they have their own software on which to work as well.

Inspection

An inspection is a formal review of a product. Its purpose is to review the product in order to identify defects in a planned, independent, controlled and documented manner. It is a process with the following structure:

Preparation:

setting up the review meeting – including the time, place and who is to attend;

distributing documentation – for example, the product and its description;

annotation of the product by reviewers, if it is a document, and recording defects.

Meeting:

discussion of potential defects identified by reviewers, which should confirm whether they are defects or not, but not seek to produce a solution;

agreement of follow-up appropriate to each defect.

Recording:

follow-up actions and responsibilities;

agreement of outcome and sign-off if appropriate.

Follow-up:

advising the project manager of the outcome;

planning remedial work;

signing off when complete.

There are four roles within this process:

1. The moderator sets the agenda and controls the review process, ensures actions and required results are agreed and, once the process is complete, signs off the review.

2. The author provides the reviewers with relevant product documentation, answers questions about the product during the review and agrees actions to resolve defects.

3. Reviewers undertake the review, assessing the product against the quality criteria, identifying potential defects and ensuring that the nature of any defect is clearly understood by the author.

4. The scribe takes notes of the agreed actions, who is to carry them out and who is to check corrections. He or she confirms these details at the end of the meeting.

With peer reviews, inspections and walkthroughs, no attempt should be made during the meeting to solve the problems identified. Problems should be recorded. The author will then go away and seek to come up with solutions. The review can then be repeated if it is felt that the changes required were of sufficient significance. An alternative is for one person to be given responsibility for ensuring that the necessary alterations are made by the author.

Pair programming

The review techniques described above depend on copies of documents, including code, being printed and additional reviewing activities being scheduled. To avoid this, in Agile development environments, code developers sometimes work in pairs. One developer types in code at the workstation while the other advises and checks on what is being entered, and the two swap roles regularly. This is rather like a real-time version of a peer review.

Static testing

Some software tools carry out static testing by analysing the structure of the code. Among other analyses, such tools look at the branches and loops in a program and calculate a measure of complexity. The more complex a software component is, the more difficult it will be to maintain.
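A crude sketch of the kind of measure such tools compute is given below – a McCabe-style cyclomatic complexity, counted as one plus the number of decision points in Python source. Real static-analysis tools count many more constructs and analyse whole systems:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Crude cyclomatic complexity: 1 plus the number of decision points.

    A rough sketch for illustration; industrial tools use richer rules.
    """
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                 ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # prints 3 (two branches plus one)
```

A component scoring well above its neighbours would be a candidate for redesign before it becomes a maintenance burden.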

All the above processes take place during the activities on the left-hand side of the V model. The quality control processes on the right-hand side are dominated by dynamic testing.

5.8 DYNAMIC TESTING

Dynamic testing is divided into various levels:

unit;

integration;

systems;

user acceptance;

regression.

For each type of testing, sets of test data and expected results must be produced. Referring back to the V model, a test plan should have been produced at the appropriate stage in the development process on the left-hand side of the V. The plan should contain the necessary guidance for producing the test data, if not the test data itself.

5.8.1 Unit testing

Unit testing is the first testing carried out on a software component. It is often done by the coder. In the V model this checks that the coder has faithfully implemented the detailed design. The detailed test plan for the component should cover the range of inputs expected and how they are handled by each function in the component. The tests should cover various input combinations and sequences. Testing is then extended to cover, for example, numbers just inside and just outside any specified limits. All tests are designed to ensure that this particular unit will not fail because of bad data or unusual combinations but will handle them in a predefined way.

A good practice is for test data and expected results to be drafted before coding is commenced – the ‘test-first’ approach. Thinking about the detail of how inputs will be converted into outputs can clarify requirements early on.
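As an illustration of boundary-value, test-first unit testing: the tests below would notionally be drafted before the validator they exercise, checking values just inside and just outside its limits, plus its handling of bad data. The function and its limits are hypothetical:

```python
import unittest

def valid_hour(value) -> bool:
    """Hypothetical unit under test: accept whole hours 0-23 only."""
    return isinstance(value, int) and 0 <= value <= 23

class TestValidHour(unittest.TestCase):
    """Boundary-value tests: just inside and just outside the limits."""

    def test_inside_limits(self):
        for hour in (0, 1, 22, 23):
            self.assertTrue(valid_hour(hour))

    def test_outside_limits(self):
        for hour in (-1, 24):
            self.assertFalse(valid_hour(hour))

    def test_bad_data(self):
        # the unit must handle bad data in a predefined way, not fail
        self.assertFalse(valid_hour("12"))

unittest.main(argv=["valid_hour_tests"], exit=False)
```

Writing the test cases first forced a decision about what ‘handle bad data’ means for this unit (here, rejecting a string rather than raising an error).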

ACTIVITY 5.6

Assume that the table of time/date checks drawn up (and corrected) in Activity 5.5 has now been implemented in an input screen in an IT system. Draw up a set of test data and expected results that could be used to test that the checks on data are being carried out correctly.

Test results should be carefully checked against the expected results. There are automated tools that can save human effort here after initial set-up. Discrepancies are recorded so that the software can be amended and retested. Sometimes, however, it is the expected results that are wrong! Resulting amendments are also recorded. If problems arise later, there is then a trail that can be followed to establish how they were introduced.

There are tools – commonly called test harnesses – which can simulate the software or hardware that supply data to or use data from the module under test. The test harness can record data input and output and sometimes the routes through the program exercised. Automated tools can also simulate keyboard input and capture and compare screen output with expected results. In addition, dynamic analysis can identify code not executed by the test data used. This is useful as it may show up a shortcoming in the test data or unneeded code in the software.
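A minimal sketch of the harness idea, using Python’s standard `unittest.mock` to stand in for a component that supplies data; the booking function and its interface are hypothetical:

```python
from unittest.mock import Mock

# Hypothetical unit under test: it asks a supplier component for a
# boat's daily rate and computes the cost of a booking.
def booking_cost(rate_supplier, boat_id: str, days: int) -> float:
    rate = rate_supplier.daily_rate(boat_id)
    return rate * days

# The harness replaces the real supplier with a stub that returns
# canned data and records how it was called.
stub = Mock()
stub.daily_rate.return_value = 120.0

cost = booking_cost(stub, "WHC-042", 7)
print(cost)                                # prints 840.0
stub.daily_rate.assert_called_once_with("WHC-042")
```

The stub both isolates the unit from components that may not yet exist and records the data passed across the interface, just as the text describes.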

Once unit testing has been successfully completed, the unit can be signed off and registered as a configuration item (see Chapter 4).

5.8.2 Integration testing

Integration testing links a number of system components and runs them as a whole. This checks that the units communicate properly with each other. Discrepancies include data items being given different formats in different units, and conditions being set in one component that another cannot cope with. Some of these can be avoided by the use of shared databases and database management systems (DBMS) and by careful modular design that keeps interactions between components to a minimum.
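A toy illustration of the kind of format discrepancy an integration test catches – two hypothetical components that disagree about the date format used across their shared interface:

```python
from datetime import datetime

# Component A (hypothetical) emits booking dates as ISO strings...
def export_booking_date(d: datetime) -> str:
    return d.strftime("%Y-%m-%d")

# ...while component B (hypothetical) was written to parse UK-style
# day/month/year strings. An integration test links the two and runs
# real data across the interface.
def import_booking_date(text: str) -> datetime:
    return datetime.strptime(text, "%d/%m/%Y")

def integration_test() -> bool:
    original = datetime(2024, 3, 1)
    try:
        round_tripped = import_booking_date(export_booking_date(original))
        return round_tripped == original
    except ValueError:
        return False  # the components disagree about the format

print(integration_test())  # prints False: the discrepancy is caught
```

Each unit passes its own tests in isolation; only running them together exposes the mismatch.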

As errors are found, they will be recorded and corrected. The integration test will then need to be repeated. The repetition of testing makes this a fruitful area for automation.

Agile approaches stress the need for early and frequent integration testing. In some environments this can even be done while software components are still under development. The idea is to catch discrepancies between components as early as possible.

5.8.3 Systems testing

Systems testing is the final stage in testing by the development professionals. It involves running the whole system on the infrastructure that will be used when the system is operational. It may sound just like a step up from integration testing, but there are many additional issues which must be addressed; for example:

Does the system run correctly on the infrastructure to be used for the final system?

Are the response times within the tolerances set in the requirements specification?

Can the system cope with the planned workloads?

What is the effect of high loading on the system?

Again, there are tools to help. They can be used to simulate large numbers of users and high volumes of data.
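A very small-scale sketch of what such tools do: firing simulated concurrent users at a stand-in for the availability check and recording the worst response time. The function names and the three-second criterion (echoing Activity 5.2) are illustrative:

```python
import threading
import time

def query_availability(boat_id: str) -> str:
    """Stand-in for the booking system's availability check."""
    time.sleep(0.01)  # simulate processing time
    return "available"

def simulate_users(n_users: int) -> float:
    """Fire n_users concurrent requests and return the longest observed
    response time in seconds -- a crude sketch of what load-testing
    tools do on a much larger scale."""
    worst = [0.0]
    lock = threading.Lock()

    def one_user():
        start = time.perf_counter()
        query_availability("WHC-042")
        elapsed = time.perf_counter() - start
        with lock:
            worst[0] = max(worst[0], elapsed)

    threads = [threading.Thread(target=one_user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return worst[0]

worst_response = simulate_users(20)
print(worst_response < 3.0)  # within the three-second criterion?
```

Real tools scale this to thousands of simulated users and also ramp up data volumes to expose the effects of high loading.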

5.8.4 User acceptance testing

User acceptance testing is the crucial test. Can the users operate the system? Does it meet their expectations, not just their requirements? Users should be involved in the development process from the beginning and have had sight of how the system works before this point so that there are few surprises. While it is helpful to involve users in earlier testing activities, seeing the faults that arise during testing may make them anxious.

Acceptance testing underlines the importance of having clear requirements from the start. Users should have a well-defined acceptance test plan to guide them as they test whether the system meets their expressed needs. Where discrepancies are found, the issue must be recorded, the source of the problem established and, if need be, the system must be modified. Sometimes, the system modification may have to be to the users’ way of working.

5.8.5 Regression testing

Regression testing is quite different from the earlier testing processes. Regression testing needs to take place at each stage of testing. Whenever a fault is found and the offending piece of software identified, it has to be corrected. Unfortunately, the correction itself could introduce further errors or uncover ones that were masked by the first error. Regression testing involves running an agreed set of test data through the system again to confirm that not only has the original error been corrected but no further errors have been introduced or uncovered. Regression testing can largely be automated by using a standard set of test data and expected results.
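The mechanics can be sketched as a stored pack of agreed inputs and expected results that is rerun after every correction; the discount rule and the figures below are invented for illustration:

```python
# A regression pack: an agreed set of inputs and expected results,
# rerun after every correction to confirm nothing else has broken.
def discount(days: int) -> float:
    """Percentage discount for longer hires (hypothetical rule)."""
    if days >= 14:
        return 10.0
    if days >= 7:
        return 5.0
    return 0.0

REGRESSION_PACK = [
    (1, 0.0),
    (6, 0.0),
    (7, 5.0),
    (13, 5.0),
    (14, 10.0),
]

def run_regression() -> list:
    """Return a list of (input, expected, actual) for any failures."""
    return [(days, expected, discount(days))
            for days, expected in REGRESSION_PACK
            if discount(days) != expected]

failures = run_regression()
print("regression passed" if not failures else failures)  # prints "regression passed"
```

Because the pack is fixed data plus a mechanical comparison, the whole run can be automated and repeated cheaply after every change.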

5.8.6 Management of testing

The project manager has certain key concerns in relation to testing:

The way the system has been designed has an impact on how easy it is to test. Developing components that are as self-contained as possible, with external outputs that indicate the internal state of the unit, makes testing easier. In some cases, special platforms have to be set up to support testing. Thus, the way the testing is to be done needs to be taken into account at an early stage of the project planning.

The number of defects hidden in the software is unknown, so the time needed to complete testing can be uncertain. Hence the need for quality checking at the earlier stages of development, which can reduce defects before testing is started.

If testing is finished too early, the reliability of the resulting operational system may be jeopardised. In most systems there are critical functions that are heavily used. For example, in the Water Holiday Company scenario, the function that checks the availability of vessels will, if successful, be the precursor to an income-generating booking and should therefore be subjected to very careful testing.

5.9 EVALUATING SUPPLIERS

With the increase in specialisation, it is quite usual for a new system to be developed by an external supplier rather than in-house. Where significant development responsibilities are devolved, a rigorous selection process should take place, with invitations to tender being sent to potential suppliers. The organisation would then have to evaluate proposals from suppliers based on the scope of what they can undertake and their price. There are many articles and books on contract negotiation and management; the focus here is on the quality aspects.


COMPLEMENTARY READING

Tate, M. (2015) Off-The-Shelf IT Solutions: A practitioner’s guide to selection and procurement. Swindon: BCS.

Where new functionality is being created by the supplier rather than an existing application being installed, the project manager will be particularly concerned with the implications for the quality of deliverables. Care about specifying the quality of deliverables will be important, but the final quality of these products will only be known when they are delivered towards the end of the project. This is rather late, so the alternative is to focus on process quality and how well the supplier carries out their activities.

The types of question that could be asked of the supplier include:

How are their projects organised? (For example, do they follow a framework like PRINCE2, the standard for project management procedures sponsored by the UK government?)

Do they use a defined development methodology, such as the Dynamic Systems Development Method (DSDM)?

How is quality control exercised?

At what points are quality reviews held?

Is there a quality assurance process?

Do practitioners have appropriate professional certification?

Is there a configuration management system in place?

How are change requests handled?

This list is by no means exhaustive. The response to each question should be supported by evidence. For example, if it is claimed that software quality is assessed by reviews, then examples of the outputs from the reviews can be examined. Observing some of their reviews can increase confidence in their quality processes.

As well as the quality of a potential supplier’s proposal, the standing of the supplier itself needs to be assessed. There might be an important post-project relationship with the supplier for ongoing maintenance, and their continuing financial stability would need consideration. These wider procurement issues are beyond this book’s scope – but see the complementary reading box above.

Such detailed questioning would be time-consuming for the organisation, particularly if there is a large number of potential suppliers. It may be easier to assess suppliers on the basis of their accreditation by professional and standards organisations. Most of the concerns raised in the questions above are covered by ISO 9001 quality management system (QMS) certification, which is discussed next.

5.10 ISO 9001:2015

ISO 9001:2015 is the international standard for quality management systems (which we introduced in Section 5.5) and is aimed at producers and suppliers of any products and services, not necessarily software. However, a subsequent document, ISO 90003, describes the way ISO 9001 can be applied to software development projects.

Organisations can be inspected and awarded ISO 9001 certification by accredited auditors. This means that, as a potential client, you can assume a particular standard of quality management without having to carry out detailed checks yourself.

However, while ISO 9001 states that a quality level should be specified, it does not say what that level should be. For example, for the Water Holiday Company project, it could be stated that a performance quality requirement is for an average response time of 3 seconds for a boat availability query, but equally it could state 6 seconds. Thus, having a set of ISO 9001 procedures only guarantees that a level of quality has been specified, not that this level is accepted universally as appropriate. This allows the client of an ISO 9001:2015 supplier to negotiate the quality criteria that they personally need.

ISO 9001:2015 is based on the following principles:

Customer focus: The supplier focuses on what the customer really needs.

Leadership: Top supplier management must have a genuine concern for quality, which is communicated through word and deed across all levels of the organisation.

Engagement of people: For quality to be delivered, there must be ‘competent, empowered and engaged’ staff. This is more likely where there is an ethos of teamwork, open discussion and knowledge-sharing.

Process approach: The processes used to produce quality products and services must be defined, understood and managed.

Improvement: Effective management of processes is a foundation for finding ways of improving them. Improvements will be supported by evaluation of current practices, education and innovation.

Evidence-based decision-making: Decision-making is based on the analysis of data and information. This enhances the understanding of cause and effect when delivering products and services that meet customer needs.

Relationship management: Most supplier organisations themselves depend on other suppliers of products and services in their supply chains. This will have a significant influence on the quality of their outputs. Thus, effective relationships with other members of their supply chains are essential.

It was stressed above that ISO 9001:2015 is relevant to a broad range of organisation types. There is an expanding number of supplementary standards written to show how the generic ISO 9001 principles can be interpreted and implemented in different business contexts. An updated version of ISO 90003 has been published that applies ISO 9001 to software development: ISO 90003:2018.

The UK-based TickITplus scheme is designed to enable organisations involved with software development, and also with the delivery of other IT-related products and services, to be accredited as compliant with ISO 9001:2015. As it relates to IT, the scheme can also take account of other IT standards, such as ISO/IEC 15504, which enables the assessment of IT process quality.

TickITplus also goes beyond a simple yes/no model of compliance. It can allocate an organisation to a particular capability level, that is: foundation, bronze, silver, gold and platinum. The concept of capability level is discussed further below.


Capability maturity models

In the discussion of ISO 9001:2015, we assumed that the motivation was to allow customers to assess potential suppliers. Sometimes, however, the managers of a supplier organisation want to assess their own quality processes to find ways of improving them. They want to gauge their current level of effectiveness, but also identify what needs to be done to get it to a higher level. This brings in the concept of maturity models, where the organisation is assessed as being at a particular level of process maturity.

One example of this is the capability maturity model (CMM) originally developed by the Software Engineering Institute at Carnegie Mellon University for the US Department of Defense. This comprises five levels:

1. Initial: Any organisation would be at this level by default. Good quality work may be done, but customers cannot be sure that this is always the case.

2. Managed: Some basic project management and other systems are in place.

3. Defined: The way each task in software development is done is defined to enable consistent good practice.

4. Quantitatively managed: Processes and their products are measured and controlled – through, for example, the number of errors created in each process.

5. Optimising: The measurement data collected is analysed to find ways of improving processes.

The assessment of an organisation’s maturity level can be done internally, or external auditors could be employed so that the maturity level can be published externally, as in the case of ISO 9001.

SAMPLE QUESTIONS

1. ‘Fitness for purpose’ defines which of the following?

a. The quality of the project deliverables

b. The usability of the delivered IT application

c. The capability of the staff who will implement the IT application

d. The capability of the staff who will use the system when it is operational

2. User acceptance testing is an example of which of the following?

a. Project control

b. Project monitoring

c. Quality control

d. Quality assurance

3. Which of the following is NOT a defined role in inspections?

a. The moderator

b. The project manager

c. The scribe

d. The author

4. ISO 9001:2015 is a standard that defines which of the following?

a. Project management standards

b. IT project deliverables

c. Quality management systems

d. Software testing procedures

POINTERS FOR ACTIVITIES

Activity 5.1

Quality criteria could include:

1. A product definition or description must contain the following headings or sections (from Chapter 2):

identity;

description;

derived from;

components;

format;

quality criteria.

Additional headings could include the following:

author;

owner;

date first compiled;

date of last amendment;

version number.

The ‘derived from’ section must refer to valid product types.

2. The ‘format’ section must provide enough information to allow someone to create an instance of the product in the correct form.

One can easily identify other criteria.

These criteria meet the requirement of being specific (for example, a list of obligatory section headings), measurable (the sections are either there or not there) and achievable (it is possible, without difficulty, to ensure that each heading is present).

The quality of a product description will most likely be measured through a review process, such as inspection by a fellow developer. What this process does not do is to ensure fully that the correct information is entered for each heading. This may require a further set of quality criteria, which again should be subject to review.

Activity 5.2

Criterion: All screen layouts should have similar layout and use the same terminology.
Assessment: This is relatively clear, and measurements could be devised, but more detailed checklists of things to look for (as might be found in a style guide) would be helpful.

Criterion: Screens should be user friendly.
Assessment: This is subjective and therefore not measurable.

Criterion: The system should be able to handle 50 transactions.
Assessment: This is not measurable as there is no indication of the period of time within which the transactions have to be handled.

Criterion: The system should allow for 20 users at any one time without degradation.
Assessment: This is better than the previous criterion, but clarification of what each user might be doing could be requested.

Criterion: The response time should not be longer than three seconds.
Assessment: This is a mixture. It is clear that a response time of three seconds or less is required, but it does not specify under what conditions.

Ideally the last two criteria should be combined so that a baseline of three seconds is given with a loading of 20 simultaneous users. This can still be improved upon. For example, it could be stated that the response time should be less than four seconds for at least 95 per cent of the time with a loading of 20 simultaneous users and should never exceed 10 seconds. Such a statement does allow for the odd occasion when the three seconds might be exceeded. These examples show how difficult, yet important, it is to get the quality criteria correctly specified for each product in the development process.
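A criterion phrased this way can be checked mechanically against measured data. The sketch below (the sample timings are invented) tests whether a set of response times meets the example criterion of under four seconds at least 95 per cent of the time and never more than ten seconds:

```python
def meets_response_criterion(times, limit=4.0, proportion=0.95, hard_limit=10.0):
    """Return True if at least `proportion` of the measured response
    times are below `limit` seconds and none exceeds `hard_limit`."""
    if not times:
        return False
    within = sum(1 for t in times if t < limit)
    return within / len(times) >= proportion and max(times) <= hard_limit

# Invented sample of 20 measured response times (seconds).
sample = [2.1, 2.8, 3.0, 3.2, 2.5, 3.9, 2.2, 2.7, 3.1, 2.9,
          3.5, 2.4, 2.6, 3.3, 2.0, 3.8, 2.3, 3.6, 2.8, 5.5]

# 19 of 20 (95%) are under 4 seconds and none exceeds 10 seconds.
print(meets_response_criterion(sample))  # True
```

This is exactly why measurable criteria matter: once the threshold, the proportion and the hard limit are stated, acceptance becomes a calculation rather than a judgement.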

Activity 5.3

As can be seen in the main text of this chapter, there is a difference between measuring the quality of an existing software component – where actual performance can be measured – and assessing the likely quality of an application as it is being built. For an existing software component, we could collect statistics about the amount of effort that has been needed to implement actual changes.

If the software is being created, we could examine the code to see if it has characteristics that are likely to lead to maintainability. A system would be maintainable if it satisfied the following criteria (among others):

The structure of the software is clear.

The names used for items of data and procedures are indicative of the nature and purpose of each of these items.

The purpose and method of data manipulations in the code are clear and unambiguous.

Documentation is present to support code.

There are other possible software engineering criteria that could be discussed, such as loose coupling of components (minimal cross-references) and cohesion (all code for a function being together).

The measurement for these criteria would probably be a peer review process.

Activity 5.4

There are two ways in which the reliability of a system has been traditionally measured:

1. Mean time between failures (MTBF) specifies how long, on average, the system runs without failing. These days, this would be specified in weeks or even months. Something that fails every day would not be very popular with those operating the system.

2. Mean time to repair (MTTR) specifies how long, on average, it takes to repair the system when it fails. It is no good if a system cannot be restored to a working state in a reasonable length of time. That length of time needs to be specified as part of the acceptance criteria. Note that this measure is also related to the attribute of maintainability.

Other valid measurements could be considered, such as the percentage availability of the system.
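Percentage availability can be derived from the two measures above: availability = MTBF / (MTBF + MTTR). A short sketch, using invented figures, shows the calculation:

```python
def availability(mtbf_hours, mttr_hours):
    """Percentage of time the system is operational:
    availability = MTBF / (MTBF + MTTR)."""
    return 100 * mtbf_hours / (mtbf_hours + mttr_hours)

# Invented figures: a failure on average every 720 hours (about 30 days),
# each failure taking 2 hours to repair.
print(round(availability(720, 2), 2))  # 99.72
```

The formula makes the trade-off visible: availability can be improved either by failing less often (raising MTBF) or by recovering faster (lowering MTTR).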

Activity 5.5

The errors include:

Time: minutes should be in the range 0−59.

Time: seconds should be in the range 0−59.

Day: July (month 7) has 31 days, February never has more than 29 days and there is no leap year check. The cross-check should be:

‘If month = 2 and year is leap, up to 29;

if month = 2 and year is not leap, up to 28;

if month = 4, 6, 9 or 11, up to 30.’
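The corrected cross-check can be expressed directly in code. This sketch covers the day-of-month validation only, including the leap-year rule the original check was missing:

```python
def is_leap(year):
    """Gregorian leap year rule: divisible by 4, except
    century years, which must be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def valid_day(day, month, year):
    """Apply the cross-check: the maximum day depends on the month
    and, for February, on whether the year is a leap year."""
    if month == 2:
        max_day = 29 if is_leap(year) else 28
    elif month in (4, 6, 9, 11):
        max_day = 30
    else:
        max_day = 31
    return 1 <= day <= max_day

print(valid_day(29, 2, 2024))  # True: 2024 is a leap year
print(valid_day(29, 2, 2023))  # False: 2023 is not
print(valid_day(31, 7, 2023))  # True: July has 31 days
```

In practice a standard library routine (such as Python's `calendar.monthrange`) would be used, but writing the rule out makes each branch of the cross-check explicit and testable.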

Activity 5.6

You may find some ‘holes’ in the above test data. It should illustrate that although devising test data is not the most glamorous job, creating effective test cases does require the kind of attention to detail that we normally expect of software developers. Devising test data will also trigger questions about the precise nature of the requirements – for example, is there really no upper limit on year?
