Chapter 17

Requisites for Successful Incident Reporting in Resilient Organisations

Alberto Pasquini, Simone Pozzi, Luca Save and Mark-Alexander Sujan

This contribution offers a critical reflection on standard reactive incident reporting systems and provides an outlook towards proactive methods for monitoring risk. Incident reporting systems are often regarded as a prerequisite for effective Resilience Engineering, but sometimes they fail to achieve most of the expected benefits. There is now a growing body of research that criticises incident reporting on the basis of its inability to provide an accurate representation of harm compared to other methods, as well as the fact that there is still widespread under-reporting of incidents. In this chapter we take a different angle by arguing that the problems encountered with incident reporting are, at least to some extent, to be found in the structural characteristics of the respective domains rather than within either the principle of incident reporting as such or its implementation. We identify a number of such structural characteristics that are necessary for successful incident reporting through reflection on the success and (partial) failure of two major incident reporting systems from aviation and healthcare. Where those structural characteristics are not present, incident reporting systems are bound to encounter difficulties. In such environments, a complementary proactive risk monitoring approach may be required to maximise learning from operator and front-line feedback.

Introduction

A systematic approach to safety management has greatly improved the safety performance of many safety-critical systems, to the point that very few serious accidents happen in domains like railways, aviation or nuclear processing. However, such systems face a contradiction inherent in their excellent safety performance: how can we continue to learn from accidents if we succeed in preventing most of them? The paradox is that a zero-accident system loses a valuable information source by improving its safety performance, and it needs to replace it with some alternative source. One well recognised solution to this contradiction is the establishment of an incident reporting system. Incident reporting systems have been devised to ensure that continuous learning is in place by relying on operators’ feedback (Johnson, 2003; Reason, 1997; Van der Schaaf et al., 1991). Operators are in the best position to closely monitor system performance and to detect any deviation from normal operating conditions. They can be asked to report all near-misses, that is, all those cases in which an accident could have occurred but was avoided by operators’ intervention, or even by fortuitous circumstances. This is especially true with respect to system evolution: operators not only recognise existing unknown hazards, but can also closely monitor how the system changes under external or internal forces.

The most prominent experience in incident reporting is the Aviation Safety Reporting System (ASRS), established in 1975 by the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA). It is often cited as best practice in incident reporting, due to its longevity and to the fact that it is almost unanimously regarded as a useful system. Unfortunately, even though many attempts have been made in various domains to establish similar reporting systems, very few can claim a similar success. Different success and failure factors have been discussed in the literature, spanning all levels of analysis and explanation: usability of reporting forms, organisational structure, national legislation, operators’ lack of involvement, etc. (Johnson, 2002). Many efforts aimed at improving the performance of incident reporting systems are therefore directed at the way incident reporting is implemented (user-friendly forms, user involvement during design, etc.) and at the cultural environment within which it is implemented (an open, fair and just culture, feedback to reporters, etc.).

This chapter discusses the introduction of reporting systems as a socio-technical issue, that is, it considers reporting systems as embedded in safety-critical systems and domains whose characteristics affect the efficacy of the reporting system. Depending on the nature of the system or domain, a different approach to eliciting and using operator feedback may be necessary in order to maximise an organisation’s capability to learn from experience. One such approach – risk monitoring – proactively elicits feedback from operators about the dynamics of variation and risk present in the system.

A Success and a Failure Story: Reporting Systems in Aviation and Healthcare

This section analyses two reporting systems in two different domains. We first review what is currently regarded as best practice in incident reporting (i.e., the ASRS) to reflect on why this system is capable of collecting good quality data and of transforming them into actionable recommendations. We then compare this case with the UK National Reporting and Learning System (NRLS), introduced in healthcare to improve patient safety learning.

The Aviation Safety Reporting System

The Aviation Safety Reporting System (ASRS) is an independent system, run completely outside the FAA. Its main objectives are to:

•  discover patterns of frequent problems;

•  improve communication on major issues;

•  support policy making with empirical data.

Pilots, air traffic controllers, flight attendants, mechanics, ground personnel and others involved in aviation operations submit reports to the ASRS when they are involved in or observe an incident or situation in which aviation safety was compromised. All submissions are voluntary. Reports sent to the ASRS are strictly confidential.

The core part of an ASRS report is a narrative of the event, provided in a free-text format. Other fields capture more standardised information, such as the airspace type in which the event occurred, the phase of flight, date, time, geographical location, etc. Each report thus contains factual information about the event – where factual does not mean ‘objective information’, but rather descriptive information about the event with no further elaboration, for example, no causal factor analysis. The free-text format means that no strict instruction is given on what should be included: the reporter is expected to write everything they think is appropriate, including all the required details. These narratives provide an exceptionally rich source of information for policy development and human factors research.

The process of analysing the reports can be divided into two steps. First, all reports received by ASRS are reviewed by two analysts. Analysts are experienced pilots or air traffic controllers. Each report is screened against established criteria to determine if it warrants full analysis and if it should be entered into the database. Currently, 25–30 per cent of reports pass this screening and are inserted into the database. Reports that undergo full processing fall into four categories: (i) aviation hazards that require immediate alert messaging; (ii) priority safety concerns that have been targeted for data collection; (iii) random sample to ensure database representativeness; and (iv) reports that, based on the discretion of the expert analyst, represent a new or unique learning opportunity.
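As a purely illustrative aside, the screening step can be read as a triage routine. The sketch below (in Python) mirrors the four categories listed above; the predicate functions and the sampling rate are our own hypothetical stand-ins for the judgement of the two expert analysts, not documented ASRS procedure.

import random
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional

class Category(Enum):
    """The four classes of reports that receive full processing (see text)."""
    IMMEDIATE_ALERT = auto()   # (i) hazard requiring an immediate alerting message
    PRIORITY_CONCERN = auto()  # (ii) priority concern targeted for data collection
    RANDOM_SAMPLE = auto()     # (iii) sampled to keep the database representative
    UNIQUE_LEARNING = auto()   # (iv) new or unique learning opportunity

@dataclass
class Report:
    narrative: str        # free-text account: the core of an ASRS report
    airspace_type: str    # standardised fields
    flight_phase: str

def screen(report: Report,
           is_hazard: Callable[[Report], bool],
           is_priority: Callable[[Report], bool],
           is_unique: Callable[[Report], bool],
           sample_rate: float = 0.05) -> Optional[Category]:
    """Return a processing category, or None if the report is filed
    without full analysis (historically 70-75 per cent of submissions)."""
    if is_hazard(report):
        return Category.IMMEDIATE_ALERT
    if is_priority(report):
        return Category.PRIORITY_CONCERN
    if is_unique(report):
        return Category.UNIQUE_LEARNING
    if random.random() < sample_rate:
        return Category.RANDOM_SAMPLE
    return None

In the real system the three predicates stand for expert judgement exercised by experienced pilots and controllers and cannot be reduced to automatable rules; the sketch only makes the structure of the triage explicit.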

After the initial screening, the report is analysed further. The first aim of the analysis is to identify any aviation hazards and flag that information for immediate action. When such hazards are identified, an alerting message is issued to the appropriate FAA office or aviation authority. The analysts’ second mission is to index reports and diagnose the causes underlying each reported event. An important point here is that the people involved in the event may be contacted to gather further details or clarify key points, as one of the goals of the analyst is to ensure that the narrative is descriptive, complete and precise. The system is thus confidential, but not anonymous, and the reporter’s identity is discarded only after the analysis phase has been closed.

In the above description of the ASRS, we have hinted at structural domain characteristics that are of paramount importance for its success. In other words, behind the successful achievement of the main ASRS objective (gathering operators’ points of view to discover unknown system weaknesses) we should not downplay the role of particular domain characteristics. Three of these appear most relevant.

First, in the aviation domain there is a clear-cut distinction between an incident and an accident, and between incidents and non-relevant events. Only incidents should be reported to the ASRS, while accidents are investigated by the legally entitled authorities. In a similar way, operators know how to distinguish mundane disturbances from real system weaknesses. This is indicated by the fact that even though up to 75 per cent of reports do not warrant full processing, the system is not flooded with irrelevant reports. To oversimplify the point for clarity’s sake, in the aviation community there is a shared agreement on what constitutes a safety-relevant fact and on the criteria to assess its severity (this characteristic will later be referred to as the ‘pass criterion’).

Second, well-defined roles and professional communities are present in the aviation world, meaning that the ASRS can put together a complete team of experts to represent all the different points of view. The ASRS is considered both an independent external organisation and one possessing the relevant expertise to conduct the analysis. ASRS analyses are seen as trustworthy and competent by the aviation community, which implies that the community is to a certain degree open to an ‘external’ judgement as long as it comes from a recognised expertise. This trade-off between being independent and external while still preserving the required expertise is often encountered in safety-critical domains, for instance in investigations, in safety-relevant data gathering, in regulatory bodies, etc. The balance is often very hard to achieve: a body that is too independent can fail to be recognised as competent by highly specialised professional communities, while an institution that is too internal tends to reason too similarly to the community it should oversee (this characteristic will later be elaborated under the heading ‘Understand the Characteristics of your Community’).

Third, given the high degree of standardisation of aviation operations, textual narratives are considered a good means to describe the event and to conduct the analysis. The high level of standardisation ensures that contextual factors can be omitted from the description, as the analysts will be able to fill in this background information for themselves without the reporter explicitly describing it. This also implies that the community knows to a reasonable extent what counts as ‘normal operating conditions’ and what should be regarded as a non-standard event deserving full description (cf. the discussion of ‘Degree of Standardisation’ later).

Incident Reporting in Healthcare

In this section we focus in more detail on reporting systems in healthcare, with particular attention to the UK National Reporting and Learning System (NRLS), the only national system currently in existence (comparable, though not truly national, systems include the Veterans Affairs system in the US and the Australian Incident Monitoring System). In healthcare there is a large variety of reporting systems belonging to different agencies and institutions. Vincent (2006: 58) provides a list of examples of the different agencies, including the General Medical Council, Coroner, Health and Safety Executive, NHS Litigation Authority, Police, Nursing and Midwifery Council and so on. These all serve different purposes, for example, litigation and criminal investigation. Some of these systems (e.g., claims and litigation data) provide information for enhancing patient safety (e.g., the widely cited Harvard Medical Practice Study reviewed closed claims data; Leape et al., 1991), but there is frequent duplication of function and confusion of purpose.

The Department of Health report An Organisation with a Memory (Department of Health, 2000) pointed out several shortcomings of reporting in the National Health Service (NHS). Subsequently, the National Patient Safety Agency (NPSA) was set up with a mission to implement the National Reporting and Learning System (NRLS), in order to bring about better coordination of information about patient safety issues and wider dissemination of lessons from serious incidents. The primary aim of the system is described as being to ‘provide an independent system to record adverse events and near misses so that the NHS could minimise such incidents’ (Carruthers and Philip, 2006: 12). Key objectives of the NRLS are, therefore, to provide an overview of the extent and nature of harm within the NHS and to develop solutions on a national scale.

As opposed to the ASRS (a confidential system), the NRLS was set up as an anonymous system, to encourage reporting and to provide a more representative picture of the extent of harm across the NHS. In order to assess the nature of harm, the NRLS requires information about the factors contributing to incidents. Since an anonymous system does not allow the analyst to follow up incident reports, the NRLS includes a set of questions about contributory factors to be filled in directly by the reporter (see Table 17.1). The reporting process may include up to six different steps, and some of the details which the reporter should fill in relate to the where, what and how, with a level of complexity that well reflects the healthcare domain (departments and specialities, phase of care, roles involved, etc.).

At the end of 2006, the Department of Health issued a report called Safety First to reflect on the experience to date (Carruthers and Philip, 2006). According to this report, the NRLS cannot be considered a success story: ‘Despite the high volume of incident reports collected by the NPSA to date, there is little evidence that these have resulted in actionable learning for local NHS organisations. The NRLS is not yet delivering high-quality, routinely available information on patterns, trends and underlying causes of harm to patients’ (p. 25). Such a negative verdict was passed on an approach that is in many respects admirable, one that expanded on the ASRS experience to include state-of-the-art theories of organisational safety, such as those of James Reason (Reason, 1990, 1997).

Table 17.1  Table from the NRLS, with a list of contributing factors

ID06  What were the apparent contributing factors? (Tick any that apply)

•  Communication factors (includes verbal, written and non-verbal communication between individuals, teams and/or organisations)

•  Education and training factors (e.g., availability of training)

•  Equipment and resources factors (e.g., clear machine displays, poor working order, size, placement, ease of use)

•  Medication factors (where one or more drugs directly contributed to the incident)

•  Organisation and strategic factors (e.g., organisational structure, contractor/agency use, culture)

•  Patient factors (e.g., clinical conditions, social/physical/psychological factors, relationships)

•  Task factors (includes work guidelines/procedures/policies, availability of decision-making aids)

•  Team and social factors (includes role definitions, leadership, support and cultural factors)

•  Work and environmental factors (e.g., poor/excess administration, physical environment, workload and hours of work, time pressures)
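To make the structure of the form concrete, the checklist in Table 17.1 can be thought of as a set of flags attached to an otherwise free-text report. The Python sketch below uses our own paraphrased field names, not the actual NRLS schema.

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Set

class ContributingFactor(Enum):
    """Paraphrased from Table 17.1 (question ID06); not the official NRLS coding."""
    COMMUNICATION = auto()
    EDUCATION_AND_TRAINING = auto()
    EQUIPMENT_AND_RESOURCES = auto()
    MEDICATION = auto()
    ORGANISATION_AND_STRATEGY = auto()
    PATIENT = auto()
    TASK = auto()
    TEAM_AND_SOCIAL = auto()
    WORK_AND_ENVIRONMENT = auto()

@dataclass
class NRLSReport:
    """Anonymous report: contributory factors are ticked by the reporter,
    because anonymity rules out any follow-up by an analyst."""
    description: str    # what happened
    location: str       # where (department, speciality)
    care_phase: str     # phase of care
    factors: Set[ContributingFactor] = field(default_factory=set)

# Example: a ward nurse reports an adverse drug reaction.
report = NRLSReport(
    description="Adverse drug reaction after administration of penicillin",
    location="General medicine ward",
    care_phase="Treatment",
    factors={ContributingFactor.MEDICATION, ContributingFactor.COMMUNICATION},
)

The crucial design point is visible in the data structure itself: the causal analysis is encoded at submission time by the reporter, which is precisely what the critique below takes issue with.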

Why is the NRLS experiencing such problems despite the efforts that went into its design and implementation? A brief comparison with the structural characteristics within which the successful ASRS operates provides some insight into the problems that arise from the design decisions taken for the NRLS.

First, the distinction between adverse events, near-misses and events of lesser significance is more difficult to draw than in aviation. The definition of adverse event usually adopted (harm incurred by a patient stemming from the process of care rather than from the illness itself) implies a full understanding of the clinical situation of a patient (see the section ‘The Pass Criterion’).

As a result, it is not surprising that the main categories of incidents identified through incident reporting concern adverse events such as patient falls and adverse drug events. A patient fall is clearly identifiable, and the reporter is often in a good position to provide an account of the factors that played a role. During the one-year period April 2006–March 2007, a total of 727,236 incidents were reported to the NRLS. Of these, 265,343 belonged to the category of patient accidents (patient falls, etc.). The next largest categories were treatment/procedure (64,227) and medication (62,660). The category of clinical assessment – a major activity within healthcare – contains only 35,316 reports. This suggests that the categories that do get reported are those that are observable and identifiable, but these provide only very selective insights into the dynamics behind adverse events in healthcare: a large part of the situations that pose risk are not reported because they cannot be identified as reportable incidents by the workers.
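The skew towards observable events is easier to see as proportions; the following lines simply recompute the figures just quoted as shares of the annual total.

total = 727_236  # incidents reported to the NRLS, April 2006 - March 2007
categories = {
    "patient accidents (falls, etc.)": 265_343,
    "treatment/procedure": 64_227,
    "medication": 62_660,
    "clinical assessment": 35_316,
}
for name, count in categories.items():
    print(f"{name}: {count / total:.1%}")
# patient accidents (falls, etc.): 36.5%
# treatment/procedure: 8.8%
# medication: 8.6%
# clinical assessment: 4.9%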

Second, the NRLS adopted anonymous reporting to encourage a higher number of reports, and moved the identification of contributory factors into a taxonomy within the reporting system, to be filled in by the reporter rather than by an analyst. This was done in order to meet the dual aim of assessing the extent and the nature of harm within the NHS. For many events in healthcare, however, the relevant patient journey may span several shifts or even days and weeks, and frequently the reporter is in no position to describe the contributory factors adequately without a thorough investigation, which is clearly an inappropriate task for the reporter. For example, an adverse drug reaction may be detected by a nurse or a doctor on a ward, but some of the main contributory factors may be distant in time and space, such as a possible failure to record a drug allergy on the part of the patient’s GP (family doctor). Such a constellation, where many different actors are involved and the relevant activities unfold over a prolonged period of time and are distributed in space, poses almost insurmountable problems to any attempt to generate, with a reasonable amount of effort, meaningful learning from incident reports about the dynamics behind adverse events (this characteristic will later be referred to as ‘Visibility’).

Third, the specialisation of the healthcare domain makes it difficult to maintain a body of investigators to analyse all of the reports centrally. One of the lessons learned in the NPSA review of the NRLS was that more clinical and front-line expertise was needed to ensure quick and accurate screening of, and action upon, the reports received (see the section ‘Understand the Characteristics of your Community’).

Fourth, compared to the aviation world, the healthcare world is far less uniform as well as less standardised. Even if we consider, for the sake of simplicity, only the world of secondary care, where most of the efforts in patient safety and incident reporting have been concentrated (although most patient contact actually occurs within primary care), major differences within this domain quickly become evident. Secondary care presents an extraordinary range of diverse activities: the mostly routine, but sometimes highly unpredictable and hazardous, activities within surgery; the inherently unpredictable and constantly changing world of emergency care; or hospital medicine, where diseases may be masked and difficult to diagnose, and the treatment risky and complicated by multiple co-morbidities (Vincent, 2006).

Handle with Care: All Reporting Systems are Different

We have seen through the aviation and healthcare examples how incident reporting systems are better understood as deeply intertwined with their respective domains, and we have analysed some structural domain characteristics that can contribute to their success or failure (i.e., whether incidents are observable and identifiable, and whether the reporter or the analyst is in a position to identify and characterise the contributory factors). Can we identify more general properties that are present also in other domains of application?

In the following, we move from the two properties described in the previous section to offer a more elaborate reflection on five key dimensions that should be analysed when implementing a reporting system. We also describe how these dimensions should inform the decision on which type of reporting system to adopt. No clear-cut answer can be given as to the best option. Domain structural characteristics are a key dimension to analyse, but the final choice will always depend on what the system is going to be used for. Reporting systems serve many different purposes, and no single incident reporting system addresses all of them by itself. A careful selection has to be made.

The Pass Criterion

The first domain characteristic to be analysed relates to how easily the events to be reported can be told apart from negligible ones on the basis of factual information, that is, with no (or minimal) subjective judgement. In some domains it is possible to provide front-line people with clear guidance on what should be reported, while in others front-line people must exercise their best judgement to assess whether an event deserves reporting or has no significance. For instance, if we expect accidents and incidents to be reported, then we need to analyse whether these events can easily be distinguished from one another, and from other categories of events.

Where a clear-cut distinction cannot be drawn, other concepts may be used. For instance, front-line operators may be asked to report risks (i.e., hazardous situations), which seems to be a viable solution especially in those cases where outcomes are hard to observe and assess, or where doubts exist that operators would report events of any consequence. In any case, the pass criterion remains valid for risk reporting systems too: guidance should be given on what to report, and operators should undergo specific training to correctly identify the events to be reported.

The key decision that should be informed by this domain characteristic concerns the risk–accident continuum, that is, whether an organisation should implement an incident reporting system or a risk reporting one. The first option is viable only in those domains where accidents and incidents can be distinguished from other events on the basis of factual information, that is, where the outcome of an event can be factually appreciated with no need for subjective judgement. Risk reporting systems should instead be preferred in all those cases where events cannot be distinguished on the basis of their outcome, making it hard for front-line people to tell which type of event they have just witnessed. In these cases, front-line staff can more appropriately be asked for their (subjective) perception of risk.
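The decision rule can be caricatured in a few lines; outcome_factually_observable stands in for the whole structural analysis described above and is, of course, a hypothetical simplification.

def choose_reporting_system(outcome_factually_observable: bool) -> str:
    """Pick a point on the risk-accident continuum (simplified sketch)."""
    if outcome_factually_observable:
        # A factual pass criterion exists: events can be told apart
        # without subjective judgement, so incidents can be reported.
        return "incident reporting"
    # Otherwise ask front-line staff for their (subjective) perception of risk.
    return "risk reporting"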

Degree of Standardisation

The degree of standardisation typical of a domain affects the type of information needed to reconstruct an event. In a standardised world like aviation, the context can often be assumed to remain stable and can be left implicit, taken for granted. For a faithful reconstruction, only the main events and actions are needed, while the scene on which the events unfolded can be assumed to be standard. For instance, in the ASRS, operators report mainly on the event itself and can omit most of the contextual information. Other domains, on the contrary, may need a major emphasis on the context, as contextual features may be highly relevant to how the event actually unfolded. This decision should therefore be made considering the degree of standardisation of the reference domain.

To elaborate on this concept, we may borrow some notions from literary criticism, more precisely from the work of Burke (1969). Burke defines an event by five elements: what was done (the act), who did it (the agent), when or where it was done (the scene), how it was done (the means and tools) and why (the purpose). According to Burke, these five elements are required to describe an event. They can be used to differentiate the information typically requested of reporters in an incident reporting system and in a risk reporting one. Incident reporting may be said to focus on the description of the agent and the act (what happened is the most important piece of information), while risk reporting may demand a focus on the scene or on the means (the context is more important than the specific outcome).

The key decision that follows from the analysis of the degree of standardisation concerns what information is required to reconstruct an event. In a structured domain, scene, tools and purpose can be considered stable, and thus taken for granted and left implicit in an event description. This is not the case in less structured domains. There, the description of the actor and of their actions should be complemented with information on why those actions were performed, with which tools and in which context; information on actor and actions alone is not enough to reconstruct the event satisfactorily.
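A minimal sketch of this contrast, using Burke’s five elements as fields: in a standardised domain the scene, means and purpose can default to ‘standard’, while a less structured domain must collect them explicitly. The field names and defaults are our own illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EventDescription:
    """Burke's (1969) five elements of an event."""
    act: str                        # what was done
    agent: str                      # who did it
    scene: Optional[str] = None     # when/where it was done
    means: Optional[str] = None     # how it was done (tools)
    purpose: Optional[str] = None   # why

def complete(event: EventDescription, standardised: bool) -> EventDescription:
    """In a standardised domain the missing context is filled in with a
    'standard' default; in a less structured one it must come from the reporter."""
    if standardised:
        event.scene = event.scene or "standard operating conditions"
        event.means = event.means or "standard tools and procedures"
        event.purpose = event.purpose or "routine task"
    elif None in (event.scene, event.means, event.purpose):
        raise ValueError("non-standardised domain: scene, means and purpose are required")
    return event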

The degree of standardisation is often linked with the pass criterion, in the sense that standardised domains often warrant a clear definition of what a significant event is, while non-standardised domains treat every event as a separate case. This correlation between the two characteristics is often present, but it does not follow by any necessity: we may theoretically conceive of a non-structured domain with a clear pass criterion, or vice versa. However, standardisation often affects not only the way operations are conducted, but also the expected outcomes, so that it frequently goes hand in hand with the pass criterion.

Visibility

Once all of the above points have been scrutinised, we need to analyse the operators’ perspective in a realistic manner, in order to understand what operators are willing and able to report. Whatever decisions have been made on the other dimensions, the final questions to address are: are the operators in a good position to observe the events we would like to collect? Are they willing to report them?

A reporting system starts from the assumption that operators are an essential source of information. At this stage, we need to challenge this assumption and delve further into it, to understand in which respects and to what extent it is true. A similar recommendation comes from Eurocontrol, which advises complementing incident reporting with routine safety surveys to collect operators’ feedback on their daily risk perception (Eurocontrol, 2000). Not all aspects of a system have the same degree of visibility: some may be easily perceived by front-line staff, while others may be hard to appreciate from their perspective. In other words, organisational processes may be shaped by visible factors as well as by non-visible ones.

Visibility is affected by several dimensions, including how organisational processes are designed, the position and role of front-line staff, the duration and spatial span of processes, etc. It is also affected by the nature of the actual content of the work. For instance, in the healthcare domain the content of work is the care of humans, and the human body has its own dynamics, often not visible unless dedicated diagnostic activities are undertaken. As a result, not all the actions performed on patients have an immediate, easy-to-appreciate effect, and some of the outcomes may actually be shaped by factors that are hard to single out. Moreover, different actors will most likely perceive different aspects of the system, and will also possess different terminology and analytical skills with which to draft a report.

Considerations of visibility should inform the decision on what front-line people are asked to report. For each event to be reported, we should analyse which aspects of the event are visible and which are not: the degree of visibility of the outcome, causal factors and contributing factors, for instance, may be very different. This consideration should be complemented by an analysis of what front-line people are willing to report, which often depends on their safety culture.

Understand the Characteristics of your Community

The reference domain should also be analysed in terms of communities of practice and professional communities (Lave and Wenger, 1991). Micro-communities are likely to have (to varying extents) different understandings of the same situations, to appreciate different aspects, to use different tools and to pursue different objectives (sometimes converging with those of other micro-communities, sometimes even contradicting them). The more varied a domain is in terms of communities, the harder it is to establish a domain-wide (or nation-wide) reporting system. In other words, the immediate consequence of the heterogeneity of communities in a domain concerns the scale of the reporting system. If the community is homogeneous, it will be easier to establish a large programme covering the whole domain or the whole country. Conversely, very diverse communities may suggest the establishment of more local systems.

Two other key decisions can be informed by the analysis of community characteristics. First, the analysts of incident reports should cover the whole spectrum of expertise (as in the ASRS case) in order to provide meaningful results and to analyse competently the information contained in the reports. Returning to the discussion of the importance of context developed above, analysts need to possess the background knowledge required to ‘fill in the gaps’ in the reports, to understand what is implicit and what was taken for granted by reporters. Diverse communities demand a body of expert analysts as varied as they are, which may be hard to put together and maintain. Second, the analysis of the communities in a domain can also inform how to provide feedback to reporters. For homogeneous communities a non-targeted message may suffice, which would not be the case for more varied communities. In both cases, the feedback loop should remain as close as possible to operators, and report back to them as quickly as possible, to obtain effective results. But while a homogeneous community may allow some slack (the community has its own means of circulating feedback, and once the feedback is out it will spread quickly to everyone), diversity in the micro-cultures requires the loop to be as quick and as targeted as possible.

Assess Safety Culture

The safety culture of different domains (and organisations) can be classified on a scale from pathological to generative (Westrum, 1993). Each level denotes a different way in which domain members approach safety issues, from cultures that see no value in safety-related activities to cultures that see safety as an integral part of everything that is done. As far as reporting systems are concerned, safety culture affects many dimensions. To mention just the main ones: the protection offered to those who report, the amount of education about safety (awareness of safety issues) and the ability to perceive the causes of incidents (Dekker, 2007; Reason, 1997).

The key decisions to be made after assessing the domain’s safety culture are those already listed for the other structural characteristics: the accident–risk continuum, which information front-line people should report, which information they are able and willing to report, the type of feedback that can be offered and the type of analysis to be carried out on the reports. In addition, the safety culture level is a primary input in deciding whether the reporting system should be anonymous and what degree of confidentiality it should offer. Higher safety culture levels may warrant ‘open systems’, with disclosure of names, while lower safety culture levels need to offer confidentiality, or even anonymity, as a condition to encourage reporting.

What Happens When Key Structural Properties are Missing?

We have discussed how the success of incident reporting can depend on some key structural properties of the target domain. The idea underpinning incident reporting systems is that accidents and incidents often have similar precursors. A lot of learning can be generated by focusing on events that could potentially cause harm (i.e., incidents), rather than exclusively on actually harmful events. In this way, more data points are available from which more robust learning about the dynamics behind adverse events can be extracted. This is illustrated in Figure 17.1, where safety is represented as a control problem, i.e., accidents happen in the area where variation is out of control. The incident boundary represents the area where incidents are identified and reported. Each event provides a window through which the driving forces of harmful events and the corresponding contributory factors (represented as arrows) can be identified, understood and subsequently generalised across the range of incidents.


Figure 17.1  Safety represented as a control problem

For domains and organisations where the above structural characteristics for successful incident reporting systems do not hold, a promising approach may be to focus directly on the driving forces behind adverse events.

Proactive Risk Monitoring

This approach is inspired in part by Reason’s Tripod methodology (Reason, 1997), developed for the oil and gas industry. Tripod suggests monitoring basic risk factors at regular intervals, either through audits or through feedback from staff. In this way, a risk profile can be built up over time and the basic risk factors most in need of attention can be focused on (in Figure 17.2 the upward arrows represent forces that drive variation, the downward arrows forces that bring variation under control). We thus rely neither on incidents as triggers (which may not be easily observable) nor on the ability of a single reporter to provide a full account of complex system dynamics.

A main difference from incident reporting is that risks (or rather, the factors contributing to risk) are themselves monitored. Reason identifies as basic risk factors the organisational processes giving rise to latent conditions, such as the procurement of equipment, maintenance management, the definition of communication interfaces, etc. This is in line with Reason’s model of organisational accidents, which suggests that accidents are the result of multiple active and latent failures, where only the latter are sufficiently predictable and controllable. This approach to risk monitoring focuses in particular on the forces that drive variation out of control (upward arrows in Figure 17.2). Alternative models, such as the Functional Resonance Analysis Method (FRAM) (Hollnagel, 2004), may give rise to a different emphasis. The concept of resilience as ‘the capabilities on all levels of a system to respond to regular and irregular threats in a robust yet flexible manner, and to anticipate the consequences of disruptions’ (Hollnagel et al., 2006) highlights how a reporting system may be better aimed at detecting near-resonance situations, that is, those situations where the system faces ‘disruptions and variations that fall outside of the base mechanisms/model for being adaptive as defined in that system’ (Woods, 2006a: 21).


Figure 17.2  Risk monitoring elicits regular feedback from staff about a number of key contributory processes and factors

In either of these approaches, there is a shift from the concept of incident reporting to the identification, reporting and analysis of situations that fall outside the ‘design envelope’, in order to better understand how ‘a system is competent at designed-for-uncertainties’ (Woods, 2006a). Instead of focusing only on organisational processes that give rise to latent conditions, the aim of a risk monitoring system becomes the monitoring of variations within activities and processes or, more generally, of the extent to which an organisation and its staff are capable of anticipating, recognising and adapting to variations and disturbances. This approach also emphasises the positive side of performance by taking into account the forces that enhance control of variation (downward arrows in Figure 17.2). Both the establishment of Tripod-like risk monitoring and the development of meaningful markers of resilience within healthcare are currently the subject of ongoing research.
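As an illustration of what Tripod-style monitoring could look like in data terms, the sketch below accumulates periodic staff ratings of a fixed list of basic risk factors and ranks the factors most in need of attention. The factor names echo Reason’s examples in the text; the 1–5 rating scale and the survey mechanics are our own assumptions.

from collections import defaultdict
from statistics import mean
from typing import Dict, List, Tuple

class RiskProfile:
    """Builds a risk profile over time from periodic front-line surveys.
    Ratings: 1 = well controlled ... 5 = out of control (assumed scale)."""

    def __init__(self) -> None:
        self.ratings: Dict[str, List[int]] = defaultdict(list)

    def add_survey(self, responses: Dict[str, int]) -> None:
        """Record one round of staff feedback on the basic risk factors."""
        for factor, score in responses.items():
            self.ratings[factor].append(score)

    def ranked(self) -> List[Tuple[float, str]]:
        """Factors sorted by mean rating, worst first."""
        return sorted(((mean(v), f) for f, v in self.ratings.items()), reverse=True)

profile = RiskProfile()
profile.add_survey({"equipment procurement": 2,
                    "maintenance management": 4,
                    "communication interfaces": 3})
profile.add_survey({"equipment procurement": 1,
                    "maintenance management": 5,
                    "communication interfaces": 3})
print(profile.ranked()[0])  # (4.5, 'maintenance management') -> top priority

Unlike an incident report, no triggering event is needed: the profile is refreshed at each survey round, so slowly degrading factors become visible before they contribute to harm.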

Conclusion

The aim of this chapter has been to show how reporting systems should be considered as tools embedded in socio-technical systems. A reporting scheme has no single intrinsic objective; it may serve different purposes. Our aim has been to provide an initial answer to the following research questions: what are the structural domain and organisational properties we need to look for when trying to implement reactive and proactive risk monitoring systems? Which objectives can these systems address? By analysing the use of incident reporting systems in civil aviation and in healthcare, we have reflected on their role within the wider safety management system. Not all organisations are equal, so each may require its own reporting system. To implement a reporting system, it is thus necessary to clearly target some objectives and to design the corresponding process of data collection, analysis, feedback and action.

This chapter started from a literature review to highlight some structural characteristics of a domain that should be considered when designing an incident reporting system. If we were to summarise the five structural characteristics discussed above and find a common explanation for them, the best way would probably be to reason in terms of domain culture. From the above discussion we see that the aviation culture presents a good degree of homogeneity, which ensures stable definitions of operations, of anomalies and of expertise. Even if micro-cultures are present (e.g., the pilot community, air traffic controllers, cabin crew, etc.), these are well recognised and their voices are represented in the ASRS panel of experts, so that the community can speak with one ‘non-controversial’ voice. Billings (a founding father of the ASRS) clearly states that one key requirement for a successful incident reporting system is ‘a demonstrated, tangible, widely agreed upon need for more and better information’ (Cook et al., 1998: 52, emphasis added). Billings also states that consensus is not enough and that understanding of what the ASRS is doing is necessary among all the stakeholders (p. 55). Both consensus and understanding can be considered indicators of a shared culture in the aviation community. The healthcare domain does not exhibit a comparable degree of sharing and definitely presents a more varied array of professional communities.

In the last section, we linked the five structural characteristics to key decisions to be made when establishing a reporting scheme. We also described how reporting systems can take different forms once a good awareness of their purposes has been achieved. This variability has been placed on a three-dimensional continuum:

•  from risks to accidents

•  from the scene to the event

•  from small scale to nation-wide scale.

Being linked with structural characteristics, these dimensions are to some extent domain independent and can be used to compare systems across various domains.
