Chapter 4

Meeting IEC 61508 Part 3

Abstract

This chapter covers Part 3 of IEC 61508, addressing the overall software requirements and the development of software. The Annexes of Part 3 offer appropriate techniques, by SIL, in the form of tables followed by more detailed tables with cross-references. In the 2010 version there is an additional Annex giving guidance on the properties that the software techniques should achieve, which is intended to provide a framework for justifying alternative techniques to those given in the Standard. This chapter attempts to provide a simple and usable interpretation by summarizing the main requirements.

Keywords

Coding; Formal methods; Integration; Life-cycle models; Metrics; Safety manuals; Semi-formal methods; Testing
 
IEC 61508 Part 3 covers the development of software. This chapter summarizes the main requirements. However, the following points should be noted first.

Whereas the reliability prediction of hardware failures, addressed in Section 3.3.3 of the last chapter, predicts a failure rate to be anticipated, the application and demonstration of qualitative measures DO NOT imply a failure rate for the systematic failures. All that can be reasonably claimed is that, given the state of the art, we believe the measures specified are appropriate for the integrity level in question and that therefore the systematic failures will credibly be similar to, and not exceed, the hardware failure rate of that SIL.

The Annexes of Part 3 offer appropriate techniques, by SIL, in the form of tables followed by more detailed tables with cross-references. In the 2010 version there is an additional Annex giving guidance on the properties that the software techniques should achieve, which is intended to provide a framework for justifying alternative techniques to those given in the Standard.

This chapter attempts to provide a simple and usable interpretation. At the end of this chapter a “conformance demonstration template” is suggested which, when completed for a specific product or system assessment, will offer evidence of conformance to the SIL in question.

The approach to the assessment will differ substantially between:
Embedded software design
and
Applications software
The demonstration template tables at the end of this chapter cater for the latter case. Chapter 8, which will cover the restricted subset of IEC 61511, also caters for applications software.

4.1. Organizing and Managing the Software Engineering

4.1.1. Section 7.1 and Annex G of the Standard: Table “1”

Section 3.1 of the previous chapter applies here in exactly the same way and therefore we do not repeat it.
In addition, the Standard recommends the use of the “V” model approach to software design, with the number of phases in the “V” model being adapted according to the target safety-integrity level and the complexity of the project. The principle of the “V” model is a top-down design approach starting with the “overall software safety specification” and ending, at the bottom, with the actual software code. Progressive testing of the system starts with the lowest level of software module, followed by integrating modules, and working up to testing the complete safety system. Normally, a level of testing for each level of design would be required.
The life cycle should be described in writing (and backed up by graphical figures such as are shown in Figures 4.1–4.3). System and hardware interfaces should be addressed and it should reflect the architectural design. The “V” model is frequently quoted and is illustrated in Figure 4.1. However, this is somewhat simplistic and Figures 4.2 and 4.3 show typical interpretations of this model as they might apply to the two types of development mentioned in the box at the beginning of this chapter. Beneath each of the figures is a statement describing how they meet the activities specified in the Standard.
Figure 4.1 A typical “V” model.
Figure 4.2 describes a simple proven PLC platform with ladder logic code providing an application such as process control or shut down. Figure 4.3 describes a more complex development where the software has been developed in a high-level language (for example a C subset or Ada) and where there is an element of assembler code.
Other life-cycle models, like the “Waterfall,” are acceptable provided they incorporate the same type of properties as the V model. At SIL 2 and above, there needs to be evidence of positive justifications and reviews of departures from the life-cycle activities listed in the Standard.
Figure 4.2 A software development life cycle for a simple PLC system at the application level. The above life-cycle model addresses the architectural design in the functional specification and the module design by virtue of cause and effect charts. Integration is a part of the functional test and validation is achieved by means of acceptance test and other activities listed in the quality and safety plan.
Annex G provides guidance on tailoring the life-cycle for “data-driven systems.” Some systems are designed in two parts:
• A basic system with operating functions
• A data part which defines/imposes an application onto the basic system.
The amount of rigor needed will depend on the complexity of the behavior called for by the design. This complexity can be classified as follows:
• Variability allowed by the language:
- fixed program
- limited variability
- full variability
• Ability to configure the application:
- limited
- full
Figure 4.3 A software development life cycle for a system with embedded software. The above life-cycle model addresses the architectural design in the functional specification. Validation is achieved by means of acceptance test and other activities listed in the quality and safety plan.
A brief description of these is provided in Annex G and they are summarized at the end of this chapter.
The software configuration management process needs to be clear and should specify:
• levels where configuration control commences;
• where baselines will be defined and how they will be established;
• methods of traceability of requirements;
• change control;
• impact assessment;
• rules for release and disposal.
At SIL 2 and above, configuration control must apply to the smallest compiled module or unit.

4.2. Requirements Involving the Specification

4.2.1. Section 7.2 of the Standard: Table A1

(a) The software safety requirements, in terms of both the safety functions and the safety integrity, should be stated in the software safety requirements specification. Items to be covered include:
• Capacities and response times
• Equipment and operator interfaces including misuse
• Software self-monitoring
• Functions which force a safe state
• Overflow and underflow of data storage
• Corruption
• Out of range values
• Periodic testing of safety functions whilst system is running
(b) The specification should include all the modes of operation, the capacity and response time performance requirements, maintenance and operator requirements, self-monitoring of the software and hardware as appropriate, enabling the safety function to be testable while the equipment under control (EUC) is operational, and details of all internal/external interfaces. The specification should extend down to the configuration control level.
(c) The specification should be written in a clear and precise manner, traceable back to the safety specification and other relevant documents. The document should be free from ambiguity and clear to those for whom it is intended.
For SIL 1 and SIL 2 systems, this specification should use semiformal methods to describe the critical parts of the requirement (e.g., safety-related control logic). For SIL 3 and SIL 4, semiformal methods should be used for all the requirements and, in addition, at SIL 4 there should be the use of computer support tools for the critical parts (e.g., safety-related control logic).
Forward and backward traceability should be addressed.
The semiformal methods chosen should be appropriate to the application and typically include logic/function block diagrams, cause and effect charts, sequence diagrams, state transition diagrams, time Petri nets, truth tables, and data flow diagrams.
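As a purely illustrative sketch (the causes, effects, and logic below are hypothetical and not taken from the Standard), a cause and effect chart is essentially a matrix mapping input causes to output effects, and it can be implemented almost directly as a lookup table:

```c
/* Illustrative only: a cause and effect chart expressed as a fixed matrix.
 * The causes, effects, and logic are hypothetical examples. */
#include <stdbool.h>
#include <stdio.h>

enum { CAUSE_HIGH_PRESSURE, CAUSE_HIGH_TEMP, CAUSE_LOW_LEVEL, NUM_CAUSES };
enum { EFFECT_CLOSE_INLET, EFFECT_TRIP_PUMP, NUM_EFFECTS };

/* chart[c][e] is true if cause c demands effect e */
static const bool chart[NUM_CAUSES][NUM_EFFECTS] = {
    /*                  close inlet  trip pump */
    /* high pressure */ { true,      true  },
    /* high temp     */ { false,     true  },
    /* low level     */ { true,      false },
};

/* An effect is demanded if any one of its causes is active. */
static void evaluate(const bool causes[NUM_CAUSES], bool effects[NUM_EFFECTS])
{
    for (int e = 0; e < NUM_EFFECTS; e++) {
        effects[e] = false;
        for (int c = 0; c < NUM_CAUSES; c++) {
            if (causes[c] && chart[c][e]) {
                effects[e] = true;
            }
        }
    }
}

int main(void)
{
    bool causes[NUM_CAUSES] = { false, true, false };   /* high temperature only */
    bool effects[NUM_EFFECTS];
    evaluate(causes, effects);
    printf("close inlet: %d, trip pump: %d\n",
           effects[EFFECT_CLOSE_INLET], effects[EFFECT_TRIP_PUMP]);
    return 0;
}
```

The value of such a representation is that each cell of the matrix can be reviewed against the specification, which is what makes the method semiformal rather than free text.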

4.3. Requirements for Design and Development

4.3.1. Features of the Design and Architecture

Section 7.4.3 of the Standard: Table A2

(a) The design methods should aid modularity and embrace features which reduce complexity and provide clear expression of functionality, information flow, data structures, sequencing, timing-related constraints/information, and design assumptions.
(b) The system software (i.e., non-application software) should include software for diagnosing faults in the system hardware, error detection for communication links, and online testing of standard application software modules.
In the event of detecting an error or fault the system should, if appropriate, be allowed to continue but with the faulty redundant element or complete part of the system isolated.
For SIL 1 and SIL 2 systems there should be basic hardware fault checks (i.e., watchdog and serial communication error detection).
For SIL 3 and SIL 4, there needs to be some hardware fault detection on all parts of the system (i.e., sensors, input/output circuits, logic resolver, and output elements), and both the communications and the memory should have error detection (a simple sketch of such checks is given after this list).
(c) Where non-interference has to be demonstrated (i.e., where a system hosts both non-safety-related and safety-related functions), Annex F provides a list of considerations such as:
• shared use of RAM, peripherals, and processor time
• communications between elements
• failures in an element causing consequent failure in another.
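A minimal sketch of the kind of serial communication error detection referred to in item (b) is shown below. The frame layout and the additive checksum are illustrative only; a real design would typically use a CRC and would also refresh a hardware watchdog each scan so that a stalled program forces the safe state:

```c
/* Illustrative sketch of communication error detection; the frame layout
 * and checksum scheme are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simple additive checksum used to detect corruption on a serial link. */
static uint8_t checksum(const uint8_t *data, int len)
{
    uint8_t sum = 0;
    for (int i = 0; i < len; i++) sum += data[i];
    return sum;
}

/* Accept a frame only if its trailing checksum byte matches. */
static bool frame_ok(const uint8_t *frame, int len)
{
    return len > 1 && checksum(frame, len - 1) == frame[len - 1];
}

int main(void)
{
    uint8_t good[] = { 0x01, 0x10, 0x22, 0x00 };  /* last byte is the checksum */
    good[3] = checksum(good, 3);
    uint8_t bad[]  = { 0x01, 0x10, 0x22, 0x00 };  /* corrupted checksum byte   */

    printf("good frame accepted: %d\n", frame_ok(good, 4));  /* prints 1 */
    printf("bad frame accepted:  %d\n", frame_ok(bad, 4));   /* prints 0 */
    return 0;
}
```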

4.3.2. Detailed Design and Coding

Paragraphs 7.4.5, 7.4.6, Tables A4, B1, B5, B7, B9

(a) The detailed design of the software modules and coding implementation should result in small manageable software modules. Semiformal methods should be applied, together with design and coding standards including structured programming, suitable for the application. This applies to all SILs.
(b) The system should, as far as possible, use trusted and verified software modules, which have been used in similar applications. This is called for from SIL 2 upward.
(c) The software should not use dynamic objects (whose behavior depends on the state of the system at the moment of allocation) where their use does not allow for checking by offline tools. This applies to all SILs.
(d) For SIL 3 and SIL 4 systems, the software should include additional defensive programming (e.g., variables should be in both range and, where possible, plausibility checked). There should also be limited use of interrupts, pointers, and recursion.
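A minimal sketch of the defensive programming called for in item (d) is given below, assuming a hypothetical temperature input with illustrative limits; a failed check would typically cause the channel to be flagged as faulty and the safe state to be driven:

```c
/* Illustrative sketch of defensive programming: the sensor limits,
 * rate-of-change bound, and values are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

#define TEMP_MIN_C      (-40.0)
#define TEMP_MAX_C      (150.0)
#define TEMP_MAX_STEP_C (5.0)      /* maximum credible change per scan */

static double last_temp_c = 25.0;

/* Returns true and updates *out only if the reading is in range and
 * plausible against the previous scan; otherwise the caller should
 * treat the channel as failed and drive the safe state. */
static bool read_temperature_checked(double raw_c, double *out)
{
    if (raw_c < TEMP_MIN_C || raw_c > TEMP_MAX_C)
        return false;                              /* range check failed */
    if (raw_c - last_temp_c > TEMP_MAX_STEP_C ||
        last_temp_c - raw_c > TEMP_MAX_STEP_C)
        return false;                              /* plausibility check failed */
    last_temp_c = raw_c;
    *out = raw_c;
    return true;
}

int main(void)
{
    double t;
    printf("26.0 accepted: %d\n", read_temperature_checked(26.0, &t));   /* 1 */
    printf("200.0 accepted: %d\n", read_temperature_checked(200.0, &t)); /* 0: out of range */
    printf("60.0 accepted: %d\n", read_temperature_checked(60.0, &t));   /* 0: implausible jump */
    return 0;
}
```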

4.3.3. Programming Language and Support Tools

Paragraph 7.4.4, Table A3

(a) The programming language should be capable of being fully and unambiguously defined. The language should be used with a specific coding standard and a restricted subset, to minimize unsafe/unstructured use of the language. This applies to all SILs.
At SIL 2 and above, dynamic objects and unconditional branches should be forbidden. At SIL 3 and SIL 4 more rigorous rules should be considered such as the limiting of interrupts and pointers, and the use of diverse functions to protect against errors which might arise from tools.
(b) The support tools need to be either well proven in use (and errors resolved) and/or certified as suitable for safety system application. The above applies to all SILs, with certified tools more strongly recommended for SIL 3 and SIL 4.
(c) The requirements for support tools should also apply to offline software packages that are used in association with any design activity during the safety life cycle. An example of this would be a software package that is used to perform the safety loop PFD or failure rate calculation. These tools need to have been assessed to confirm both completeness and accuracy and there should be a clear instruction manual.
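As an illustration of the kind of calculation such an offline package performs, the sketch below applies the simplified 1oo1 low-demand formula PFDavg = λDU × TI / 2, which ignores repair time and common cause; the figures are illustrative only:

```c
/* Minimal sketch of a PFDavg calculation for a single (1oo1) low-demand
 * channel using the simplified formula lambda_DU * TI / 2.
 * Repair time and common cause are ignored; values are illustrative. */
#include <stdio.h>

int main(void)
{
    double lambda_du = 2.0e-6;        /* dangerous undetected failure rate, per hour */
    double proof_interval = 8760.0;   /* proof test interval, hours (one year)       */

    double pfd_avg = lambda_du * proof_interval / 2.0;
    printf("PFDavg (1oo1) = %.2e\n", pfd_avg);   /* ~8.8e-3, within the SIL 2 band (10^-3 to 10^-2) */
    return 0;
}
```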

4.4. Integration and Test (Referred to as Verification)

4.4.1. Software Module Testing and Integration

Paragraphs 7.4.7, 7.4.8, Tables A5, B2, B3, B6, B8

(a) The individual software modules should be code reviewed and tested to ensure that they perform the intended function and, by a selection of limited test data, to confirm that the system does not perform unintended functions.
(b) As the module testing is completed, module integration testing should be performed with predefined test cases and test data. This testing should include functional, “black box,” and performance testing.
(c) The results of the testing should be documented in a chronological log and any necessary corrective action specified. Version numbers of modules and of test instructions should be clearly indicated. Discrepancies from the anticipated results should be clearly visible. Any modifications or changes to the software which are implemented after any phase of the testing should be analyzed to determine the full extent of re-test that is required.
(d) The above needs to be carried out for all SILs; however, the extent of the testing for unexpected and fault conditions needs to be increased for the higher SILs. As an example, for SIL 1 and SIL 2 systems the testing should include boundary value testing and equivalence partitioning testing; in addition, for SIL 3 and SIL 4, tests generated from cause consequence analysis of certain critical events should be included.
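A minimal sketch of boundary value testing, assuming a hypothetical range check with limits of 0 and 100, is shown below; each limit is exercised on, just inside, and just outside the boundary:

```c
/* Illustrative boundary value tests for a hypothetical range check
 * with limits 0..100. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool in_range(int value) { return value >= 0 && value <= 100; }

int main(void)
{
    /* lower boundary */
    assert(in_range(0));      /* on the limit   */
    assert(in_range(1));      /* just inside    */
    assert(!in_range(-1));    /* just outside   */
    /* upper boundary */
    assert(in_range(100));
    assert(in_range(99));
    assert(!in_range(101));
    printf("all boundary value tests passed\n");
    return 0;
}
```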

4.4.2. Overall Integration Testing

Paragraph 7.5, Table A6

These recommendations are for testing the integrated system, which includes both hardware and software and, although this requirement is repeated in Part 3, the same requirements have already been dealt with in Part 2.
This phase continues through to Factory Acceptance Test. Test harnesses are part of the test equipment and require adequate design documentation and proving. Test records are vital as they are the only visibility to the results.

4.5. Validation (Meaning Overall Acceptance Test and Close Out of Actions)

Paragraphs 7.3, 7.7, 7.9, Table A7

(a) Whereas verification implies confirming, for each stage of the design, that all the requirements have been met prior to the start of testing of the next stage (shown in Figures 4.2 and 4.3), validation is the final confirmation that the total system meets all the required objectives and that all the design procedures have been followed. The Functional Safety Management requirements (Chapter 2) should cover the requirements for both validation and verification.
(b) The Validation plan should show how all the safety requirements have been fully addressed. It should cover the entire life-cycle activities and will show audit points. It should address specific pass/fail criteria, a positive choice of validation methods and a clear handling of nonconformances.
(c) At SIL 2 and above some test coverage metric should be visible. At SIL 3 and SIL 4 a more rigorous coverage of accuracy, consistency, conformance with standards (e.g. coding rules) is needed.

4.6. Safety Manuals

(Annex D)

For specific software elements which are reused a safety manual is called for. Its contents shall include:
• A description of the element and its attributes
• Its configuration and all assumptions
• The minimum degree of knowledge expected of the integrator
• Degree of reliance placed on the element
• Installation instructions
• The reason for release of the element
• Details of whether the preexisting element has been subject to release to clear outstanding anomalies, or inclusion of additional functionality
• Outstanding anomalies
• Backward compatibility
• Compatibility with other systems
• A preexisting element may be dependent upon a specially developed operating system
• The build standard should also be specified incorporating compiler identification and version, tools
• Details of the preexisting element name(s) and description(s) should be given, including the version/issue/modification state
• Change control
• The mechanism by which the integrator can initiate a change request
• Interface constraints
• Details of any specific constraints, in particular, user interface requirements shall be identified
• A justification of the element safety manual claims.

4.7. Modifications

Paragraphs 7.6, 7.8, Tables A8 and B9

(a) The following are required:
• A modification log
• Revision control
• Record of the reason for design change
• Impact analysis
• Re-testing as in (b) below.
The methods and procedures should be at least equal to those applied at the original design phase. This paragraph applies for all SIL levels.
The modification records should make it clear which documents have been changed and the nature of the change.
(b) For SIL 1, changed modules are re-verified, for SIL 2 all affected modules are re-verified, and for SIL 3 and above the whole system needs to be re-validated. This is not trivial and may add considerably to the cost for a SIL 3 system involving software.

4.8. Alternative Techniques and Procedures

Annex C of the 2010 version provides guidance on justifying the properties that alternative software techniques should achieve. The properties to be examined, in respect of a proposed alternative technique, are:
• Completeness with respect to the safety needs
• Correctness with respect to the safety needs
• Freedom from specification faults or ambiguity
• Ease by which the safety requirements can be understood
• Freedom from adverse interference from nonsafety software
• Capability of providing a basis for verification and validation.
The methods of assessment (listed in Annex C) are labeled R1, R2, R3 and “-”.
• For SIL1/2: R1—limited objective acceptance criteria (e.g., black box test, field trial)
• For SIL3: R2—objective acceptance criteria with good confidence (e.g., tests with coverage metrics)
• For SIL4: R3—objective systematic reasoning (e.g., formal proof)
• “-”: not relevant.

4.9. Data-Driven Systems

This is where the applications part of the software is written in the form of data which serves to configure the system requirements/functions. Annex G covers this as follows.

4.9.1. Limited Variability Configuration, Limited Application Configurability

The configuration language does not allow the programmer to alter the function of the system but is limited to adjustment of data parameters (e.g., SMART sensors and actuators). The justification of the tailoring of the safety life cycle should include the following:
(a) specification of the input parameters;
(b) verification that the parameters have been correctly implemented;
(c) validation of all combinations of input parameters;
(d) consideration of special and specific modes of operation during configuration;
(e) human factors/ergonomics;
(f) interlocks (e.g., ensuring that operational interlocks are not invalidated during configuration);
(g) inadvertent reconfiguration, e.g., key switch access, protection devices.

4.9.2. Limited Variability Configuration, Full Application Configurability

As above but can create extensive static data parameters (e.g., an air traffic control system). In addition to the above the justifications shall include:
(a) automation tools for creation of data;
(b) consistency checking, e.g., the data is self compatible;
(c) rules checking, e.g., to ensure the generation of data meets the constraints;
(d) validity of interfaces with the data preparation systems.

4.9.3. Limited Variability Programming, Limited Application Configurability

These languages allow the user limited flexibility to customize the functions of the system to their own specific requirements, based on a range of hardware and software elements (e.g., functional block programming, ladder logic, spreadsheet-based systems).
In addition to the above two paragraphs the following should be included:
(a) the specification of the application requirements;
(b) the permitted language subsets for this application;
(c) the design methods for combining the language subsets;
(d) the coverage criteria for verification, addressing the combinations of potential system states.

4.9.4. Limited Variability Programming, Full Application Configurability

The essential difference from limited variability programming, limited application configurability is complexity (e.g., graphical systems and SCADA-based batch control systems). In addition to the above paragraphs, the following should be included:
(a) the architectural design of the application;
(b) the provision of templates;
(c) the verification of the individual templates;
(d) the verification and validation of the application.

4.10. Some Technical Comments

4.10.1. Static Analysis

Static analysis is a technique (usually automated) which does not involve execution of code but consists of algebraic examination of source code. It involves a succession of “procedures” whereby the paths through the code, the use of variables, and the algebraic functions of the algorithms are analyzed. There are packages available which carry out the procedures and, indeed, modern compilers frequently carry out some of the static analysis procedures such as data flow analysis.
Table B8 of Part 3 lists Data flow and Control flow as HR (highly recommended) for SIL 3 and SIL 4. It should be remembered, however, that static analysis packages are only available for procedural high-level languages and require a translator which is language specific. Thus, static analysis generally cannot be applied to PLC code other than by means of a manual code walkthrough, which loses the advantage of the 100% algebraic coverage of an automated package.
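The fragment below (a hypothetical function, written deliberately to contain faults) illustrates the kind of anomalies that data flow and control flow analysis report:

```c
/* Illustrative fragment containing deliberate anomalies of the kind
 * reported by data flow and control flow analysis. */
int scale_reading(int raw)
{
    int scaled;
    int offset;                 /* data flow: 'offset' is read before it is ever written */

    if (raw > 1000) {
        return -1;
        scaled = 0;             /* control flow: unreachable statement after 'return' */
    }
    scaled = raw * 2 + offset;  /* uses the uninitialised 'offset' */
    return scaled;
}
```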
Semantic analysis, whereby functional relationships between inputs and outputs are described for each path, is the most powerful of the static analysis procedures. It is, however, not trivial and might well involve several man-days of analysis effort for a 500-line segment of code. It is not referred to in the Standard.
Static analysis, although powerful, is not a panacea for code quality. It only reflects the functionality in order for the analyst to review the code against the specification. Furthermore it is concerned only with logic and cannot address timing features.
It is worth noting that, in Table B8, design review is treated as an element of static analysis. It is, in fact, a design review tool.
If it is intended to use static analysis then some thought must be given as to the language used for the design, because static analysis tools are language specific.

4.10.2. Use of “Formal” Methods

Table B5 of Part 3 refers to formal methods and Table A9 to formal proof. In both cases it is HR (highly recommended) for SIL 4 and merely R (recommended) for SIL 2 and SIL 3.
The term Formal Methods is much used and much abused. In software engineering it covers a number of methodologies and techniques for specifying and designing systems, both nonprogrammable and programmable. These can be applied throughout the life cycle including the specification stage and the software coding itself.
The term is often used to describe a range of mathematical notations and techniques applied to the rigorous definition of system requirements which can then be propagated into the subsequent design stages. The strength of formal methods is that they address the requirements at the beginning of the design cycle. One of the main benefits of this is that formalism applied at this early stage may lead to the prevention, or at least early detection, of incipient errors. The cost of errors revealed at this stage is dramatically less than if they are allowed to persist until commissioning or even field use. This is because the longer they remain undetected the potentially more serious and far-reaching are the changes required to correct them.
The potential benefits may be considerable but they cannot be realized without properly trained people and appropriate tools. Formal methods are not easy to use. As with all languages, it is easier to read a piece of specification than it is to write it. A further complication is the choice of method for a particular application. Unfortunately, there is not a universally suitable method for all situations.
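As a purely illustrative example (the requirement and the symbols are hypothetical), a trip requirement such as “if the pressure exceeds Pmax the output must be tripped within T milliseconds” might be captured formally as an invariant over the system behavior:

\[
\forall t \;.\; p(t) > P_{\max} \;\Rightarrow\; \exists\, t' \in [t,\, t + T] \;.\; \mathit{out}(t') = \mathrm{tripped}
\]

Written in this form the requirement is unambiguous about timing and can, in principle, be carried forward into arguments or proofs about the subsequent design.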

4.10.3. PLCs (Programmable Logic Controllers) and their Languages

In the past, PLC programming languages were limited to simple code (e.g., ladder logic) which is a limited variability language usually having no branching statements. These earlier languages are suitable for use at all SILs with only minor restrictions on the instruction set.
Currently PLCs have wider instruction sets, involving branching instructions etc., and restrictions in the use of the language set are needed at the higher SILs.
With the advent of IEC 61131–3 there is a range of limited variability programming languages and the choice will be governed partly by the application. Again restricted subsets may be needed for safety-related applications. Some application-specific languages are now available, for example, the facility to program plant shutdown systems directly by means of Cause and Effect Diagrams. Inherently, this is a restricted subset created for safety-related applications.

4.10.4. Software Reuse

Parts 2 and 3 of the Standard refer to “trusted/verified,” “proven in use,” and “field experience” in various tables and in parts of the text. They are used in slightly different contexts but basically refer to the same concept of empirical evidence from use. However, “trusted/verified” also refers to previously designed and tested software without regard for its previous application and use.
Table A4 of Part 3 lists the reuse of “trusted/verified” software modules as “highly recommended” for SIL 2 and above.
It is frequently assumed that the reuse of software, including specifications, algorithms, and code, will, since the item is proven, lead to fewer failures than if the software were developed anew. There are reasons for and against this assumption.
Reasonable expectations of reliability, from reuse, are suggested because:
• The reused code or specification is proven
• The item has been subject to more than average test
• The time saving can be used for more development or test
• The item has been tested in real applications environments
• If the item has been designed for reuse it will be more likely to have stand-alone features such as less coupling.
On the other hand:
• If the reused item is being used in a different environment undiscovered faults may be revealed
• If the item has been designed for reuse it may contain facilities not required for a particular application, therefore the item may not be ideal for the application and it may have to be modified
• Problems may arise from the internal operation of the item not being fully understood.
In Part 3, Paragraph 7.4.7.2 (Note 3) allows for statistical demonstration that a SIL has been met in use for a module of software. In Part 7 Annex D, there are a number of pieces of statistical theory which purport to be appropriate to the confidence in software. However, the same statistical theory applies as with hardware failure data (Section 3.10).
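A minimal sketch of one commonly used result from that theory is given below: with zero failures observed over T operating hours, the single-sided upper bound on the failure rate at confidence C is -ln(1 - C)/T. The figures are illustrative only:

```c
/* Minimal sketch: zero-failure single-sided upper bound on failure rate,
 * lambda_upper = -ln(1 - C) / T.  Hours and confidence are illustrative. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double hours = 1.0e7;        /* accumulated operating hours */
    double confidence = 0.70;    /* single-sided confidence level */

    double lambda_upper = -log(1.0 - confidence) / hours;
    printf("upper bound failure rate = %.2e per hour\n", lambda_upper);  /* ~1.2e-7 */
    return 0;
}
```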
In conclusion, provided that there is adequate control involving procedures to minimize the effects of the above then significant advantages can be gained by the reuse of software at all SILs.

4.10.5. Software Metrics

The term metrics, in this context, refers to measures of size, complexity, and structure of code. An obvious example would be the number of branching statements (in other words a measure of complexity), which might be assumed to relate to error rate. There has been interest in this activity for many years but there are conflicting opinions as to its value.
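As a simple worked example (the function is hypothetical), McCabe's cyclomatic complexity can be estimated as the number of decision points plus one:

```c
/* Worked example: cyclomatic complexity estimated as decisions + 1. */
#include <stdio.h>

/* Two decisions ('if' and 'while'), so cyclomatic complexity = 2 + 1 = 3. */
static int clamp_accumulate(int x, int limit)
{
    int total = 0;
    if (x > limit) {          /* decision 1 */
        x = limit;
    }
    while (x > 0) {           /* decision 2 */
        total += x--;
    }
    return total;
}

int main(void)
{
    printf("clamp_accumulate(5, 3) = %d\n", clamp_accumulate(5, 3));  /* 3+2+1 = 6 */
    return 0;
}
```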
The pre-2010 Standard mentions software metrics but merely lists them as “recommended” at all SILs. The long-term metrics, if collected extensively within a specific industry group or product application, might permit some correlation with field failure performance and safety integrity. It is felt, however, that it is still “early days” in this respect.
The term metrics is also used to refer to statistics about test coverage, as called for in earlier paragraphs.

4.11. Conformance Demonstration Template

In order to justify that the requirements have been satisfied, it is necessary to provide a documented demonstration.
The following Conformance Demonstration Template is suggested as a possible format, addressing up to SIL 3 applications. The authors (as do many guidance documents) counsel against SIL 4 targets. In the event of such a case more rigorous detail from the Standard would need to be addressed.

IEC 61508 Part 3

For embedded software designs, with new hardware design, the demonstration might involve a reprint of all the tables from the Standard. The evidence for each item would then be entered in the right-hand column as in the simple tables below.
However, the following tables might be considered adequate for relatively straightforward designs.
Under “Evidence” enter a reference to the project document (e.g., spec, test report, review, calculation) which satisfies that requirement. Under “Feature” take the text in conjunction with the fuller text in this chapter and/or the text in the IEC 61508 Standard. Note that a “Not applicable” entry is acceptable if it can be justified.

General (Paragraphs 7.1, 7.3) (Table “1”)

Feature (all SILs) | Evidence
Existence of S/W development plan including:
Procurement, development, integration, verification, validation, and modification activities.
Rev number, config management, config items, deliverables, and responsible persons.
Evidence of review
Description of overall novelty, complexity, SILs, rigor needed, etc.
Clear documentation hierarchy (Quality and Safety Plan, Functional Spec, Design docs, Review strategy, Integration and test plans, etc.)
Adequate configuration management as per company's FSM procedure
Feature (SIL 3) | Evidence
Enhanced rigor of project management and appropriate independence

FSM, functional safety management.

Life cycle (Paragraphs 7.1, 7.3) (Table “1”)

Feature (all SILs) | Evidence
A Functional Safety audit has given a reasonable indication that the life-cycle activities required by the company's FSM procedure have been implemented.
The project plan should include adequate plans to validate the overall requirements and state tools and techniques.
Adequate software life-cycle model as per this chapter including the document hierarchy
Configuration management (all documents and media) specifying baselines, minimum configuration stage, traceability, release, etc.
Feature (SIL 2 and above) | Evidence
Alternative life-cycle models to be justified
Configuration control to level of smallest compiled unit
Feature (SIL 3) | Evidence
Alternative life-cycle models to be justified and at least as rigorous
Sample review of configuration status

FSM, functional safety management.

Specification (Paragraph 7.2) (Table A1) (Table B7 amplifies semiformal methods)

Feature (all SILs) | Evidence
There is a software safety requirements specification including:
Revision number, config control, author(s) as specified in the Quality and Safety plan.
Reviewed, approved, derived from Functional Spec.
All modes of operation considered, support for FS and non-FS functions clear.
External interfaces specified.
Baselines and change requests.
Clear text and some graphics, use of checklist or structured method, complete, precise, unambiguous, and traceable.
Describes SR functions and their separation, performance requirements, well-defined interfaces, all modes of operation.
Requirements uniquely identified and traceable.
Capacities and response times declared.
Adequate self-monitoring and self-test features addressed to achieve the SFF required.
A review of the feasibility of requirements by the software developer.
Feature (SIL 2 and above) | Evidence
Inspection of the specification (traceability to interface specs).
Either computer-aided spec tool or semiformal method.
Feature (SIL 3) | Evidence
Use of a semiformal method or tool and appropriately used (i.e. systematic representation of the logic throughout the spec).
Traceability between system safety requirements, software safety requirements, and the perceived safety needs.

SFF, safe failure fraction; SR, safety related; FSM, functional safety management; FS, functional safety.

Architecture and fault tolerance (Paragraph 7.4.3) (Table A2)

Feature (all SILs) | Evidence
Major elements of the software and their interconnection (based on partitioning) well defined
Modular approach and clear partitioning into functions
Use of structured methods in describing the architecture
Address graceful degradation (i.e., resilience to faults)
Program sequence monitoring (i.e., a watchdog function)
Feature (SIL 2 and above) | Evidence
Clear visibility of logic (i.e., the algorithms)
Determining the software cycle behavior and timing
Feature (SIL 3) | Evidence
Fault detection and diagnosis
Program sequence monitoring (i.e., counters and memory checks)
Use of a semiformal method
Static resource allocation and synchronization with shared resource

Design and development (Paragraphs 7.4.5, 7.4.6) (Tables A2, A4, B1, B9)

Feature (all SILs) | Evidence
Structured S/W design, recognized methods, under config management
Use of standards and guidelines
Visible and adequate design documentation
Modular design with minimum complexity whose decomposition supports testing
Readable, testable code (each module reviewed)
Small manageable modules (and modules conform to the coding standards)
Diagnostic software (e.g., watchdog and communication checks)
Isolate and continue on detection of fault
Structured methods

Feature (SIL 2 and above) | Evidence
Trusted and verified modules
No dynamic objects, limited interrupts, pointers, and recursion
No unconditional jumps
Feature (SIL 3) | Evidence
Computer-aided spec tool
Semiformal method
Graceful degradation
Defensive programming (e.g., range checks)
No (or online check) dynamic variables
Limited pointers, interrupts, recursion


Language and support tools (Paragraph 7.4.4) (Table A3)

Feature (all SILs) | Evidence
Suitable strongly typed language
Language fully defined, seen to be error free, unambiguous features, facilitates detection of programming errors, describes unsafe programming features
Coding standard/manual (fit for purpose and reviewed)
Confidence in tools
Feature (SIL 2 and above) | Evidence
Certified tools or proven in use to be error free
Trusted module library
No dynamic objects
Feature (SIL 3) | Evidence
Language subset (e.g., limited interrupts and pointers)

Integration and test (Paragraphs 7.4.7, 7.4.8, 7.5) (Tables A5, A6, B2, B3)

Feature (all SILs) | Evidence
Overall test strategy in Quality and Safety Plan showing steps to integration and including test environment, tools, and provision for remedial action
Test specs, reports/results and discrepancy records, and remedial action evidence
Test logs in chronological order with version referencing
Module code review and test (documented)
Integration tests with specified test cases, data, and pass/fail criteria
Predefined test cases with boundary values
Response times and memory constraints
Functional and black box testing
Feature (SIL 2 and above) | Evidence
Dynamic testing
Unintended functions tested on critical paths and formal structured test management
Feature (SIL 3) | Evidence
Performance and interface testing
Avalanche/stress tests

Operations and maintenance (Paragraph 7.6) (Table B4)

Feature (all SILs) | Evidence
Safety Manual in place—if applicable
Proof tests specified
Procedures validated by Ops and Mtce staff
Commissioning successful
Failures (and Actual Demands) reporting procedures in place
Start-up, shutdown, and fault scenarios covered
User-friendly interfaces
Lockable switch or password access
Operator inputs to be acknowledged
Basic training specified
Feature (SIL 2 and above) | Evidence
Protect against operator errors OR specify operator skill
Feature (SIL 3) | Evidence
Protect against operator errors AND specify operator skill
At least annual training

Validation (Paragraphs 7.3, 7.7, 7.9) (Tables A7, A9, B5, B8)

Feature (all SILs) | Evidence
Validation plan explaining technical and procedural steps including: Rev number, config management, when and who responsible, pass/fail, test environment, techniques (e.g., manual, auto, static, dynamic, statistical, computational)
Plan reviewed
Tests have chronological record
Records and close out report
Calibration of equipment
Suitable and justified choice of methods and models
Feature (SIL 2 and above) | Evidence
Static analysis
Test case metrics
Feature (SIL 3) | Evidence
Simulation or modeling
Further reviews (e.g. dead code, test coverage adequacy, behavior of algorithms) and traceability to the software design requirements

Modifications (Paragraph 7.8) (Table A8)

Feature (all SILs) | Evidence
Modification log
Change control with adequate competence
Software configuration management
Impact analysis documented
Re-verify changed modules
Feature (SIL 2 and above) | Evidence
Re-verify affected modules
Feature (SIL 3) | Evidence
Control of software complexity
Revalidate whole system

Acquired subsystems

Feature (at the appropriate SIL) | Evidence
SIL requirements reflected onto suppliers

Proven in use (Paragraphs 7.4.2, 7.4.7)

Feature (at the appropriate SIL) | Evidence
Application appropriate
Statistical data available
Failure data validated

Functional safety assessment (Paragraph 8) (Tables A10, B4)

Feature (all SILs) | Evidence
Either checklists, truth tables, or block diagrams
Feature (SIL 2 and above) | Evidence
As SIL 1
Feature (SIL 3 and above) | Evidence
FMEA/Fault tree approach
Common cause analysis of diverse software

FMEA, failure mode effect analysis.
