Chapter 2
Operational Risk, Operational Safety, and Economics

2.1 Defining the Concept of Operational Risk

A “risk” is defined by ISO 31000:2009 as “the effect of uncertainties on (achieving) objectives” [1]. Our world cannot be perfectly predicted, and life and businesses are always exposed to uncertainties that influence whether objectives will be reached. Risks are double-sided: we call them negative risks if the outcome is negative, and positive risks if the outcome is positive. It is straightforward that organizations should manage risks in a way that minimizes the negative outcomes and maximizes the positive outcomes. Such management is called risk management (RM) and contains, among other things, a process of risk identification, analysis, evaluation, prioritization, handling, and monitoring (see, e.g., Meyer and Reniers [2]), aimed at controlling all existing risks, whether known or not, and whether they are positive or negative. In this book, to make it workable, “operational risks” are assumed to arise from involuntary undesirable events within an organizational context. The rest of the book will thus be concerned with taking decisions regarding the management of these undesirable events, thereby considering economics-related issues.

The adoption of consistent risk management processes within a comprehensive framework can help to ensure that all types and amounts of risk are managed effectively, efficiently, and coherently across an organization. As mentioned, the economics of operational risks are focused upon. Operational risks imply unwanted events with possible negative consequences resulting from industrial operations. The economics implies approaches (concepts, models, theories, etc.) linked to financial considerations, whatever they are and in whatever form they occur. Evidently, economic considerations are very important while dealing with operational risk. Managing risks always demands making choices and allocating available budgets in the best possible way. This is not an easy task; on the contrary, it can be extremely difficult.

These days, companies and their safety managers are usually overwhelmed with tasks concerning the operational safety policy of a company. The number of tasks is huge, as are the responsibilities accompanying the decisions and choices that have to be made. Economic considerations are only one part of the larger domain of risk management. Other elements that form part of risk management, and which are, in a way, also related to economic considerations, include safety training and education, on-the-job training, management by walking around, emergency response, business continuity planning, risk communication, risk perception, psycho-social aspects of risk, emergency planning, and risk governance. Meyer and Reniers [2] define operational risk management as “the systematic application of management policies, procedures, and practices to the tasks of identifying, analyzing, evaluating, treating, and monitoring risks.” Figure 2.1 illustrates the operational risk management set.


Figure 2.1 The operational risk management set.

(Source: Meyer and Reniers [2]. Reproduced with permission from De Gruyter.)

Although economic issues of risk may only be one part of the risk management set, as can be seen in Figure 2.1, it is a very important part, being interconnected with all other parts of the risk management set, affecting the effectiveness of a company's safety policy as a whole, and, by extension, of a company's profitability in the long term. Therefore, this domain deserves to be well elaborated, both in theory and in practice. This book provides practitioners as well as the academic community new insights into this very interesting and challenging research domain, and offers practitioners concrete approaches and models to improve their risk management practice from an economic perspective.

2.2 Dealing with Operational Risks

As defined by the Center for Chemical Process Safety [3], operational risk can be seen as an index of potential economic loss, human injury, or environmental damage, which is measured in terms of both the incident probability and the magnitude of the loss, injury, or damage. The operational risk associated with a specific unwanted event can thus be expressed as the product of two factors: the likelihood that the event will occur (P) and its consequences (C). Therefore, such an operational risk index, as calculated according to Eq. (2.1), represents the “expected consequence” of the undesired event (see also Chapters 4 and 7):

R = P × C    (2.1)

However, the risk estimation always refers to specific scenarios, in which the perception of, and the attitude toward, the consequences may differ from decision-maker to decision-maker in an important way. For example, most people judge a high-impact, low-probability (HILP) event as more undesirable than a low-impact, high-probability (LIHP) event, even if the expected consequence of the two events is exactly the same (e.g., a fatality). By introducing a risk preference parameter, the previously formulated risk index, taking into account decision-makers' preferences, can be re-formulated into:

R = P × C^α    (2.2)

where the parameter α represents the attitude of the decision-maker toward the consequences. If a decision-maker is consequence-averse (also called “risk-averse”), α > 1; if risk-neutral, α = 1; and if risk-seeking, α < 1. It is obvious from this that the way a risk is calculated, which depends on the preferences of people, has an influence on the resulting index outcomes. The risk index, as calculated according to Eq. (2.2), should thus be seen as the “calculated perception of risk reality” by a person or a group of persons using a certain calculation method that they agreed upon. In any case, the index outcomes allow us to distinguish between different types of risks.

Hence, in general, a “risk” calculation includes four terms: likelihood, consequences, risk aversion, and what can go wrong, in terms of “the event” (sometimes also called “the scenario”). To have an idea of the accumulated risk in an organization, the risks of different events (or scenarios) thus need to be summed.
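The two indices and the summation over scenarios can be sketched in a few lines of code. The probabilities, consequence values, and the aversion parameter below are illustrative assumptions, not figures from the text.

```python
# Sketch of Eqs. (2.1) and (2.2): R = P * C and R = P * C**alpha.
# All probabilities, consequences, and alpha values are assumed examples.

def risk_index(p, c, alpha=1.0):
    """Risk index of one event; alpha > 1 models a consequence-averse decision-maker."""
    return p * c ** alpha

# A LIHP and a HILP event with the same expected consequence (Eq. 2.1):
lihp = risk_index(0.1, 10)        # frequent, small consequence
hilp = risk_index(0.001, 1000)    # rare, large consequence

# Under consequence aversion (alpha = 1.2, Eq. 2.2) the HILP event scores
# higher, matching the judgment described in the text:
lihp_averse = risk_index(0.1, 10, alpha=1.2)
hilp_averse = risk_index(0.001, 1000, alpha=1.2)

# The accumulated risk of an organization is the sum over its event scenarios:
scenarios = [(0.1, 10), (0.001, 1000), (0.02, 50)]
total = sum(risk_index(p, c) for p, c in scenarios)
```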

Furthermore, as companies face many risks, especially when operating in high-risk environments, but also in “low-risk” environments, operational risks are usually classified into the following three categories:

  • very small risks where no further investments in risk reduction are necessary;
  • very large risks with an outcome so unacceptable that these risks need to be reduced immediately;
  • risks that fall between the previous two risk categories.

For each of these categories of risks, discussion is possible; for example, what does the company consider to be a “very small risk” or what is considered a “very large risk”? The definitions of these categories are usually decided upon and discussed within the organization and they can differ from company to company. The risks, using their likelihood and consequences, are usually displayed on a so-called (likelihood, consequences) risk assessment decision matrix (often shortened to risk matrix), and the need for further reduction of risk (or not) is usually determined by the position of the risk within the risk matrix. The reader wishing more information on the concept of a risk matrix and on how to use it adequately is referred to Meyer and Reniers [2]. Depending on their position in the matrix, the risks should be simply monitored (negligible risk region), reduced immediately (unacceptable risk region), or reduced to the lowest level practicable [tolerable risk region or as low as reasonably practicable (ALARP) region – see also Chapter 4], bearing in mind the benefits of further risk reduction and taking into account the costs of that risk reduction. A company usually has two choices when the risk is located in the ALARP region: either take further risk reduction measures or show that additional risk reduction is not reasonably practicable. “Not reasonably practicable” usually means that the risk reduction costs are disproportionally higher than the accompanying benefits [4].
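The region logic just described can be sketched as a small lookup. The 3 × 3 scoring scheme and the cut-off values are illustrative assumptions, since, as noted, each organization defines its own matrix.

```python
# Minimal (likelihood, consequences) risk matrix sketch. Scores run from
# 1 (low) to 3 (high); the region cut-offs below are assumed example values.

ACTIONS = {
    "negligible": "monitor the risk",
    "ALARP": "reduce to the lowest level practicable, weighing costs and benefits",
    "unacceptable": "reduce the risk immediately",
}

def matrix_region(likelihood, consequence):
    score = likelihood * consequence
    if score <= 2:
        return "negligible"
    if score >= 6:
        return "unacceptable"
    return "ALARP"

for cell in [(1, 1), (2, 2), (3, 3)]:
    region = matrix_region(*cell)
    print(cell, region, "->", ACTIONS[region])
```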

However, such an approach as described above implies that the company has sufficient knowledge of the risk to assign it to a certain risk matrix cell. This is the first problem: companies often do not possess adequate information on all risks to be able to assign them unambiguously to one risk matrix cell. Unknown risks will never occur in risk matrices, for example. The more information there is about a risk, the easier it is to assign it to one risk matrix cell. Another disadvantage is that the risk matrix does not distinguish between different types of risks and thus that all risks are treated in the same way. This is very dangerous, and it has led to “blindness for disaster” in the past (cf. the BP Texas City disaster in 2005 [5]). Operational risks of different types should thus not be mixed when dealing with them and when making safety investment decisions for them. But how can the different types of operational risk be distinguished?

2.3 Types of Operational Risk

“Risk” means different things to different people at different times. However, as already mentioned, one element characterizing risk is the notion of uncertainty. Unexpected things happen and cause unexpected events. The level of uncertainty can, however, be very different from event to event.

In this book there is only a focus on operational risks, composed of the elements belonging to the negative risk triangle, i.e., “hazards – exposure to hazards – losses.” If one of these elements is removed from this triangle, there is no operational risk. Hence, the economic aspects of operational risk management, discussed in this book, are all about what decisions to take, considering economic features, to diminish, decrease, or soften in an optimal way one of the three elements of the risk triangle (hazards, exposure, or losses), or a combination thereof. In any case, every risk triangle element is accompanied by uncertainty: indeed, not all hazards are known, not everything is known about the recognized hazards, not all information is available about possible exposures, certain losses are simply not known or considered, and there is also substantial uncertainty about all potential losses being taken into account.

Roughly, three types of uncertainty can be distinguished: uncertainties where a lot of historical data are available (type I), uncertainties where little or extremely few historical data are available (type II), and uncertainties where no historical data are available (type III).
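This three-way split can be caricatured as a classifier keyed on the amount of historical data available; the numeric threshold below is an illustrative assumption, as the text gives no cut-offs.

```python
# Crude classifier for the three uncertainty types. The threshold of 10
# records separating "little" from "a lot" of data is an assumed example.

def uncertainty_type(n_historical_records):
    if n_historical_records == 0:
        return "type III"   # no historical data at all
    if n_historical_records < 10:
        return "type II"    # little or extremely few historical data
    return "type I"         # a lot of historical data

print(uncertainty_type(0), uncertainty_type(3), uncertainty_type(500))
```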

Whereas type I negative risks usually lead to LIHP events (e.g., most work-related accidents, such as falling, small fires, slipping), type II negative risks can result in catastrophes with major consequences and often multiple fatalities, so-called HILP events. Type II accidents do occur on a (semi-)regular basis from a worldwide perspective, and large fires, large releases, explosions, toxic clouds, and so on belong to this class of accidents. Type III negative risks may turn into “true disasters” in terms of the loss of life and/or economic devastation. These accidents often become part of the collective memory of humankind. Examples include disasters such as Seveso (Italy, 1976), Bhopal (India, 1984), Chernobyl (USSR, 1986), Piper Alpha (North Sea, 1988), the 9/11 terrorist attacks (USA, 2001), and more recently Deepwater Horizon (Gulf of Mexico, 2010) and Fukushima (Japan, 2011). It should be noted that once type III risks have turned from the theoretical phase into reality, they become type II risks.

To prevent type I risks from turning into accidents, risk management techniques and practices are widely available. Statistical and mathematical models based on past accidents can be used to predict possible future type I accidents, indicating the prevention measures that need to be taken to prevent such accidents. Type II uncertainties and related risks and accidents are much more difficult to predict. They are extremely difficult to forecast via commonly used mathematical models, as the frequency with which these events happen is very low within one organization, and the available information is therefore insufficient for investigation using, for example, regular statistics. The errors of probability estimates are very large, and one should thus be extremely careful when using such probabilities. Hence, managing such risks is based on the scarce data that are available within the organization and, more generally, on a global scale, and on extrapolations, assumptions, and expert opinions. Such risks are also investigated via available risk management techniques and practices, but these techniques should be used with much more caution, as the uncertainties are much higher for these types of risks than for type I risks. Many risks (and latent causes) are present that never turn into large-scale accidents thanks to adequate risk management, and very few of the risks that are present turn into accidents with huge consequences. The third type of uncertainty is extremely high, and the related accidents are simply impossible to predict. No information is available on them and they happen only extremely rarely. They cannot be predicted from past events in any way; they can only be foreseen or conceived by imagination. Such accidents are also called “black swan accidents” [6].
Such events can truly only be described as “the unthinkable” – which does not mean that they cannot be thought of, but merely that people are not always capable of appreciating (or mentally ready to appreciate) that such an event may really take place.

Figure 2.2 illustrates in a qualitative way the three uncertainty types of events as a function of their frequency.


Figure 2.2 Number of events as a function of the events' frequencies (qualitative figure).

(Source: Meyer and Reniers [2]. Reproduced with permission from De Gruyter.)

As mentioned before, type I unwanted events from Figure 2.2 can be regarded as “occupational accidents” (e.g., accidents resulting in an inability to work for several days, accidents requiring first aid). Type II and III accidents can both be categorized as “major accidents” (e.g., multiple fatality accidents, accidents with huge economic losses). Type III events are not considered further in this book, due to the fact that it is simply impossible to carry out economic analyses for such events. They are so rare that precaution and application of the high reliability organization (HRO – see later) principles are the only ways to rationally deal with them. In fact, they can also be considered as an extremum of type II events.

For each of these types of event, different economic considerations should be made and different kinds of economic analysis carried out, as will be explained and elaborated upon later on in this book. It is thus obvious that a different kind of matrix should be used before an adequate economic analysis can be carried out. An organization should therefore be able to distinguish between the different types of risk in the most objective way possible. A matrix that can be used to this end is the (information, variability) risk type matrix (see Figure 2.3).


Figure 2.3 Risk type matrix based on variability and information availability.

The matrix in Figure 2.3 can be set up and employed by a company to determine the risk type, and thus the approach to be followed to tackle the risk from an economic viewpoint. Area A in the center of the matrix, as displayed in Figure 2.3, will be the most difficult to deal with, and thus to take economic considerations into account for, while area D will be the easiest for decision-making. The importance of the factor “variability” in cost-benefit decisions can be illustrated using the following reasoning. Assume that a decision about prevention investment has to be made for two type II event scenarios. A cost-benefit ratio of the required prevention investments can then be calculated in both cases. Assume that one event (E1) is characterized by a high level of variability, and the other (E2) by a low level of variability. If there were a cost-benefit ratio limit set by the company, it would be possible that, by looking only at the average ratio position, a different decision would be made than if one were to also consider the variability. Figure 2.4 illustrates this reasoning.


Figure 2.4 Uncertainties and variability in economic decision-making on risks.

Figure 2.4 illustrates that, without the variability, Average 1 of E1 would be the best choice as it is situated further away from the company limit. Average 2 of E2 is closer to the limit, and thus the average cost-benefit ratio is higher. Because the company would prefer a cost-benefit ratio that is as low as possible (defining the cost-benefit ratio as the costs divided by the benefits), prevention investments would be chosen for E1. However, since E1 is characterized by a higher level of variability, the tail of the distribution of E1 exceeds the company's cost-benefit limit, and thus a small possibility still exists that the ratio for the prevention investment related to E1 is worse than the absolute boundary set by the company. Conversely, the ratio of prevention investments with respect to E2 never exceeds the company limit, thanks to the low level of variability of E2. Hence, although on average the cost-benefit ratio is worse for E2 than for E1, investing in prevention for E2 is preferred in this case due to the variability differences between the two events.
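This reasoning can be made concrete by modeling the two cost-benefit ratios as normal distributions; the means, standard deviations, and company limit below are illustrative assumptions.

```python
# Sketch of the variability argument: E1 has the better (lower) average
# cost-benefit ratio but high variability; E2 is worse on average but
# almost never exceeds the company limit. All parameters are assumed.
from statistics import NormalDist

LIMIT = 1.0  # company cost-benefit ratio limit (lower ratios are better)

e1 = NormalDist(mu=0.6, sigma=0.25)  # E1: better mean, high variability
e2 = NormalDist(mu=0.8, sigma=0.05)  # E2: worse mean, low variability

p_exceed_e1 = 1 - e1.cdf(LIMIT)  # non-negligible tail beyond the limit
p_exceed_e2 = 1 - e2.cdf(LIMIT)  # practically zero

print(f"P(E1 ratio > limit) = {p_exceed_e1:.3f}")
print(f"P(E2 ratio > limit) = {p_exceed_e2:.6f}")
```

Despite E1's better average, its distribution tail crosses the limit with non-negligible probability, which is exactly why the low-variability option E2 is preferred in the text's example.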

In reality, any company can choose its own means of developing such an (information, variability) matrix. For example, a concrete risk type matrix for an organization may look like the one in Figure 2.5.


Figure 2.5 Illustrative example of matrix for determining the operational risk type and the area.

The company can then further elaborate and define the qualitative parameters of the matrix from Figure 2.5 (“very low,” “low,” “very limited,” “limited,” “adequate,” etc.) and use the matrix to distinguish between the different areas A–D, to determine which economic analysis technique(s) should be employed to make objective safety investment decisions with respect to the risks present within the company. In Chapter 8, indications and suggestions are given to use a certain decision-making technique for the domains A–D.
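One possible encoding of such a matrix is sketched below; the qualitative scales and the mapping to areas A–D are illustrative assumptions, since, as noted, each company defines its own version.

```python
# Hypothetical (information, variability) risk type matrix lookup.
# Scale labels and area assignments are assumed examples, not the book's.

INFO_LEVELS = ["very limited", "limited", "adequate"]
VARIABILITY_LEVELS = ["very low", "low", "high"]

def risk_area(information, variability):
    """Return the decision area (A-D) for the given qualitative levels."""
    i = INFO_LEVELS.index(information)         # more information -> higher i
    v = VARIABILITY_LEVELS.index(variability)  # more variability -> higher v
    if i == 2 and v == 0:
        return "D"  # much information, little variability: easiest to decide on
    if i == 0 and v == 2:
        return "A"  # little information, high variability: hardest to decide on
    return "B" if i >= v else "C"  # intermediate cases

print(risk_area("adequate", "very low"))   # easiest case
print(risk_area("very limited", "high"))   # hardest case
```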

2.4 The Importance of Operational Safety Economics for a Company

As already mentioned, operational safety economics is extremely important for the profitability of a company in the long term. Figure 2.6 provides an overview of the economic effects and advantages resulting from safety investments.


Figure 2.6 Economic consequences of health and safety investments.

(Source: Fernández-Muñiz et al. [7]. Reproduced with permission from Elsevier.)

The financial consequences of accidents should not be underestimated, and avoiding accidents leads to a double positive effect. On the one hand, as a result of accidents, real financial as well as opportunity costs emerge (see also Chapter 5); hence, by avoiding such accident costs via adequate operational safety, health and safety performance is enhanced and operational negative uncertainties (see Figure 1.1) decrease (see also Figure 2.6). On the other hand, productivity decreases when accidents happen, both quantitatively and qualitatively (due to the influence of factors such as reputation and the “happiness” of working in the company). Therefore, by avoiding accidents, operational positive uncertainties (see Figure 1.1) increase (see Figure 2.6).

It should be clear that occupational accidents (and not only major accidents) can be a type of negative and harmful publicity for a company, possibly leading to repercussions such as excellent employees leaving the company (and having to be replaced), and small or large customers not becoming (or remaining) interested in the company. These are all possible societal consequences of accidents, in a worst-case scenario leading to bankruptcy, companies losing shareholder value on a massive scale, or companies losing a competitive market position. Chapters 5–8 elaborate further on all the direct and indirect costs of accidents for organizations, and how they contribute to and can be used in the decision-making process regarding safety investments.

It should be obvious by now why companies invest in safety. Safety investments not only lead to lower accident costs, but also to increased company performance and competitiveness. But why is it then sometimes so difficult to convince managers to further increase safety budgets/investments, or to make the necessary safety investments for HILP risks (in other words, type II risks)? There are many answers to this question, among them psychological, emotional, and economic ones. At first sight, it seems evident that risk management and safety management are essential in any manager's decisions. However, Perrow [8] indicates that there are indeed reasons why managers and decision-makers would not put safety first. One very important reason for focusing on production over safety (and not on top of safety) is that the harm and the consequences are not evenly distributed: the latency period may be longer than any decision-maker's career. Few managers are punished for not putting safety first, even after an accident, but they will be punished quickly for not putting profits, market share, or prestige first. In the long term, however, this approach is obviously not the best management solution for any organization.

The economic issues of risk, playing a crucial role in the decision-making process with respect to operational safety management, as well as safety budgets and budget constraints, are explained in this book. One question increasingly being asked by a lot of corporate senior executives concerns the risk–opportunity trade-offs of investing in operational safety. Accidents and illness at work are matters of health and operational safety, but they are also indirectly matters of company profitability.

Operational safety is not without cost. Providing it absorbs scarce resources that could have alternative uses; these resources constitute the visible cost of safety. The question can always be posed of whether it would be worth investing the money intended for operational safety in, for example, stock options (so-called “opportunity costs”; see also Chapter 5). At the same time, one should realize that the fact that safety (or prevention) has a cost does not mean that it does not have a benefit – on the contrary. However, the benefit is much harder for managers to acknowledge, as it has a hypothetical and uncertain nature. This is due to the very nature of safety. One of the definitions of safety is that it is a “dynamic non-event” [9], consistent with the understanding of safety as “the freedom from unacceptable risk.” If safety is regarded as a dynamic non-event, the question arises of how to count or detect the non-events in time, as this is actually what safety represents or can be regarded as a proxy for. A non-event is, by definition, something that has not happened or will not happen, and is therefore quite difficult to measure. At the end of every working day, employees of a company may come home safely and may ask themselves, “How many times was I not injured at work today?” or “How many accidents did I not encounter today?” or “How many cyclists or cars did I not hit when I drove to work this morning or when I drove home from work this evening?” These are all legitimate questions, but they are very hard to answer. Nonetheless, this is what companies pay for: for dynamic non-events, or in other words, for events not to happen. However, statistics of non-events within companies do not exist. There are no statistical data and no information on non-events. Therefore, it is obviously very difficult to prove how costly, or perhaps how cost-efficient, safety really is.
It is only possible to use non-safety information, such as accidents, incidents, and the like, to verify the cost and benefit of non-safety. One of the problems with this is that non-safety information can only be collected easily for type I events, and not – or it is much more difficult to do so – for type II events, as it is evidently not possible to have information based on a number of disasters that happened within the company. A disaster usually only strikes once. Hence, as can be seen from the earlier discussion, the economics of operational safety is not an easy subject.

More about cost, benefit, uncertainty, and other economic topics around safety will be explained in this book in the chapters to come. It is easy to understand that a minimum safety level is needed (it is even required by legislation) within a company. Without the minimum operational safety level, there would be unacceptable losses to the company, but also to victims and to society as a whole. But there also exists a maximum safety level, which can be seen as some kind of economic constraint. This is much harder to explain, as it depends on the type of risk and it varies from organization to organization. The problem lies at the very essence of risk: risk is uncertain and dynamic. Even the perception of what is an acceptable risk varies over time within society. Thirty years ago it was common practice not to wear a safety belt in western Europe, for example, while nowadays this is unacceptable to western European societies. The acceptability of risk (“How safe is safe enough?”) is thus a very difficult question and one that will be discussed in Chapter 4.

2.5 Balancing between Productivity and Safety

To provide a tentative answer to the question of the acceptability of risk, consider the classic view that productivity is the enemy of safety. This, however, is a false premise and a meaningless discussion. The comparison can be made with the philosophical discussion of the chicken and the egg: “Which came first?” This is a silly question; they are both equally important, and one simply cannot exist without the other. If productivity were the nitrogen of air, then safety would be the oxygen: together, they make it possible for life as we know it to exist. Analogously, the combination of productivity and safety, and the balance between them, makes it possible for an organization to exist and to be profitable in the long run.

With the knowledge that both productivity and safety are very important for the profitability of an organization, there is indeed an “optimal situation” which can be represented by an equilibrium state. On the one hand, there are “safety means for the zero-accident (ideal) situation,” and on the other, there are “safety means for the as is situation.” Both of these means should be aligned as much as possible, and the difference between both should be well considered to achieve an equilibrium situation (e.g., HRO safety; see later in this section, and also the next section).

The safety means in both cases are composed of all known safety features, e.g., to a greater or lesser extent, safety management system, business continuity plans, available technology, reliability and maintenance expertise, training, competences, staffing levels, compliance, and so on. This concept of an equilibrium situation can be represented as in Figure 2.7.


Figure 2.7 “Absolute safety” versus “AS IS safety” and the safety equilibrium situation.

A distinction can thus be made between “absolute safety” and “as is safety,” and every company can be situated somewhere on the continuum between both extreme situations. The absolute safety situation comprises all safety measures and actions that should be present in the organization to achieve the mythical zero-accident situation. Hence, it can be seen as a theoretical situation that can never be reached, unless there are infinite safety resources. The as is safety situation represents all safety measures and actions as they exist at present in the organization. This situation is therefore the safety result based on current practice.

If there were a disproportionate focus on safety over production in an organization, this would lead to the “absolute safety situation” and to an economically suboptimal situation. Conversely, if there were a disproportionate focus on production over safety, the consequence would be a hazardous situation within the organization. Therefore, it is important that operational safety economics, and all costs and benefits of safety and non-safety, are well elaborated and well managed in any organization.

One of the premises of good safety economics is to make a distinction between the different existing types of accidents. For the different risk types, a conceptual figure (see Figure 2.8) can be drawn displaying the different levels of safety and the fluctuation of the real safety level, trying to achieve an optimum situation for the company. Regretfully, in many companies the actual fluctuating safety level curve is not situated around the equilibrium situation, but rather below it.


Figure 2.8 Company fluctuating safety level (to be drawn for type I and type II risks separately).

The reader may have noticed that the equilibrium situation is also referred to as “HRO safety.” But what is HRO safety? The principles that apply in HRO safety are mainly aimed at type II risks, but type I risks also benefit just as much from the practices and the mindset of such environments – hence the suggestion to consider it as the safety equilibrium situation. HRO safety is explained more in detail in the next section.

2.6 The Safety Equilibrium Situation or “HRO Safety”

Organizations capable of gaining and sustaining high reliability levels are called “high reliability organizations” (HROs). Despite the fact that HROs operate hazardous activities within a high-risk environment, they succeed in achieving excellent health and safety figures. Hence, they identify and correct risks very efficiently and effectively. A typical characteristic of HROs is collective mindfulness. Hopkins [10] also indicates that HROs organize themselves in such a way that they are better able to notice the unexpected in the making and halt its development. Hence, collective mindfulness in HROs implies a certain approach in the way they organize themselves. Five key principles are used by HROs to achieve such a mindful and reliable organization, as discussed in the following (see also Weick and Sutcliffe [11]).

The first three principles mainly relate to anticipation, or the ability of organizations to cope with unexpected events. Anticipation concerns disruptions, simplifications, and execution, and requires means of detecting small clues and indications with the potential to result in large, disruptive events. Of course, such organizations should also be able to decrease, diminish, or stop the consequences of (a chain of) unwanted events. Anticipation implies the ability to imagine new, uncontrollable situations, which are based on small differences with well-known and controllable situations. HROs take this into account through principles 1–3. Whereas the first three principles relate to proaction, the fourth and fifth principles focus on reaction. It is evident that if unexpected events happen despite all the precautions taken, the consequences of these events need to be mitigated. HROs take this into account via principles 4 and 5.

2.6.1 HRO Principle 1: Targeted at Disturbances

This principle points out that HROs are actively and in a proactive manner looking for failures, disturbances, deviations, inconsistencies, and the like, because they realize that these phenomena can escalate into larger problems and system failures. They achieve this goal by urging all employees to report (without a blame culture) mistakes, errors, failures, near-misses, and so on. HROs are also very much aware that a long period of time without any incidents or accidents may lead to complacency among the employees of an organization, and may thus further lead to less risk awareness and less collective mindfulness, eventually leading to accidents. Hence, HROs rigorously see to it that such complacency is avoided at all times.

2.6.2 HRO Principle 2: Reluctant for Simplification

When people – or organizations – receive information or data, there is a natural tendency to simplify or reduce it. Parts of the information considered unimportant or irrelevant are – almost automatically – omitted. Evidently, information that may be perceived as irrelevant might in fact be very relevant for avoiding incidents or accidents, especially those of type II. HROs will therefore question the knowledge they possess, from different perspectives and at all times. This way, the organizations try to discover “blind spots” or phenomena that are hard to perceive. To this end, extra personnel (as a form of human redundancy) can, for example, be used to gather information.

2.6.3 HRO Principle 3: Sensitive toward Implementation

High reliability organizations strive for continuous attention toward real-time information. All employees (from frontline workers to top management) should be very well informed about all organizational processes, and not only about the process or task for which they are responsible. They should also be informed about the way that organizational processes can fail and how to control or repair such failures. To this end, an organizational culture of trust among all employees is an absolute must. A working environment in which employees are afraid to provide certain information (e.g., to report incidents) will result in an organization that is information-poor, and one in which efficient working is impossible. A so-called “engineering culture,” in which quantitative data/information are much more appreciated than qualitative knowledge/information, should also be avoided. HROs do not distinguish between qualitative and quantitative information.

High reliability organizations are also sensitive toward routines and routine-wise handling. Routines can be dangerous if they lead to absent-mindedness and distraction. By instituting job rotation and/or task rotation in an intelligent way, HROs try to prevent such routine-wise handling.

Furthermore, HROs view near-misses and incidents as opportunities to learn. The failures that go hand in hand with the near-misses always reveal potential (or otherwise hidden) hazards, and hence such failures serve as an opportunity to avoid future similarly caused incidents.

2.6.4 HRO Principle 4: Devoted to Resiliency

High reliability organizations define resiliency as the capacity of a system to retain its function and structure, regardless of internal and external changes. The system's flexibility allows it to keep on functioning, even when certain system parts no longer function as required. An approach to ensure this is for employees to organize themselves into ad hoc networks when unexpected events happen. Ad hoc networks can be regarded as temporary informal networks capable of supplying the required expertise to solve the problems. When the problems have disappeared or are solved, the network ceases to exist.

2.6.5 HRO Principle 5: Respectful for Expertise

Most organizations are characterized by a hierarchical structure with a hierarchical power structure, at least to some degree. This is also the case for HROs. However, in HROs, the power structure is no longer valid in unexpected situations in which certain expertise is required. The decision process and the power are transferred from those highest up in the hierarchy (in normal situations) to those with the most expertise regarding certain topics (in exceptional situations).

But how to achieve “HRO safety” and the correct equilibrium between productivity and safety? The first concept to consider, in this respect, is an adequate organizational safety culture. This can be reached by using performance management science in combination with The Egg Aggregated Model (TEAM) for safety culture. The second concept that should be taken into account is that of “safety futures.” The TEAM model and safety futures are explained in the following sections.

2.7 The Egg Aggregated Model (TEAM) of Safety Culture

A great deal of research has been carried out on the subject of safety culture, from a variety of scientific disciplines, e.g., engineering, sociology, psychology, and safety science. For a long time there was no integrated and holistic overview of what constitutes a safety culture, and a vivid debate among scientists on this topic can be observed. However, Vierendeels et al. [12] recently developed a unifying model of safety culture, taking all aspects of safety science within any organization into consideration and explaining their position toward one another. Figure 2.9 illustrates TEAM of safety culture.


Figure 2.9 The Egg Aggregated Model of safety culture (Vierendeels et al. [12]).

A safety culture is like a relationship: it needs constant attention and constant labor to ensure its success. An (internal or external) audit only provides an idea of the safety climate at a certain point in time and thus does not give a true indication of the safety culture or of the “safety DNA” of an organization. To obtain a more accurate picture, several research methods should be combined and safety performance management should be established within the organization to make sure that there is continuous improvement over time.

The different research methods that need to be used are: document and quantitative analyses for assessing the “observable factors” domain of the TEAM model in the company; surveys and questionnaires for assessing the “perceptual factors” domain of TEAM (or the so-called “safety climate”); and in-depth interviewing with individuals and with groups of individuals to assess the “personal psychological factors” domain of the TEAM model within the company. Hence, both quantitative and qualitative research techniques should be used to obtain a good idea of an organization's safety culture.

Furthermore, performance management science should be employed to make sure that the company's safety culture is constantly monitored and improved where needed. To this end, there should be clear and unambiguous performance indicators linked to objectives to be able to evaluate the company safety culture parts.

Indicators should be “SMART,” which is an acronym for:

  • Specific and clearly defined;
  • Measurable so that one can check on a regular basis how the indicator is performing;
  • Achievable so that each indicator provides a target that is challenging but not so extreme that it is no longer motivational (the indicator needs to have sufficient support);
  • Relevant to the organization and what it is aiming to achieve;
  • Time-bound in terms of (realistic) deadlines or timing regarding when each indicator will be achieved.

Objectives can be formulated in different ways: as an absolute number (target numbers), as a percentage (e.g., decrease of x%, satisfy x% of criteria, satisfy x% of a checklist), or as a relative position to a benchmark (e.g., higher than the national mean, lower than the mean of the industrial sector, lower than one's own performances of the past x years).
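The three formulations above can be sketched as simple checks against an indicator value. This is a minimal illustration, not part of the book's method; all function names and figures are hypothetical.

```python
# Sketch: evaluating a safety indicator against the three objective styles
# described above. All names and figures are hypothetical illustrations.

def meets_absolute(value, target):
    """Absolute objective: e.g., at most `target` recordable incidents."""
    return value <= target

def meets_reduction(value, previous, pct):
    """Percentage objective: e.g., a decrease of `pct`% versus last year."""
    return value <= previous * (1 - pct / 100)

def meets_benchmark(value, benchmark):
    """Relative objective: e.g., below the sector mean."""
    return value < benchmark

incidents_this_year = 18
print(meets_absolute(incidents_this_year, target=20))             # absolute target met?
print(meets_reduction(incidents_this_year, previous=24, pct=20))  # >= 20% decrease?
print(meets_benchmark(incidents_this_year, benchmark=21.5))       # below sector mean?
```

Note that the same indicator value can satisfy one formulation and fail another, which is why the objective's formulation should be fixed before the indicator is measured.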

Moreover, there are different types and levels of indicators. First, indicators should be identified for the two types of risk (type I and type II). Second, different decision levels require different indicators: management, process, and result indicators. Management indicators establish whether the conditions are present for achieving certain predefined goals; they answer the question, “With what means can the goal be reached?” Process indicators provide an idea of whether a predefined goal is achievable and whether the (different stages of the) efforts undertaken to achieve this goal are executed. They provide information on the working processes within the organization and answer the question, “How can the goal be reached?” Such indicators are very important, as they allow one to gain a systems view of the operational safety of the organization. Next to management and process indicators, result indicators give an indication of what has been achieved, and whether a predefined goal has been reached or not; they answer the question, “What goals have been reached?” Another distinction is the position of the indicator: measuring “before” the result (proactive or leading indicator) or “after” the result (reactive or lagging indicator). Management and process indicators are usually leading, whereas result indicators are lagging. It should be obvious that some theoretical indicators will be extremely hard to realize in practice, and others will be difficult to think of, or will simply not exist.

Linking the different parts of the TEAM safety culture model with the different possible types of indicator, practitioners should focus on elaborating process indicators and objectives for the different state variables of the model (i.e., the Venn diagrams and their intersections), whereas mainly management and result indicators and objectives need to be worked out for the gray rectangles (i.e., the aggregated results of the state variables). It should be stressed that performance management science is not an easy task and, depending on the organization, a trial-and-error approach will most probably have to be employed to eventually achieve a good performance management system and policy to monitor and continuously improve the company's safety culture.

2.8 Safety Futures

A “safety future” could be seen as an agreement between various parties to have achieved a specified level of safety at some agreed point in time in the future. This could, for example, involve senior management and the safety managers working in an organization. Preventive investments will be required to achieve this goal. This example also immediately further demonstrates the link between safety and economic issues. All companies deal in “safety futures,” even if they don't usually look at it this way explicitly.

To do this properly, one should realize that both type I and type II risks deserve attention, and that each gives rise to its own type of safety future; the two should not be confused. Furthermore, the credo that “you cannot put a price on safety” is wrong. A price can be put on safety, but in many cases that estimated price should be much higher than is currently assumed by many practitioners. Safety, or the avoidance of accidents, should simply be seen as part of the business of making a profit or benefit, as also explained in Chapter 1. In the case of operational safety, the profit is hypothetical, because the accidents being postulated have not actually happened; nevertheless, these profits/benefits can be calculated, and the sums at stake are many times higher than is generally believed. The costs and hypothetical benefits of accidents and safety are elaborated in depth in Chapter 5 on costs and benefits. In Section 8.12 the hypothetical benefit, in the form of the “maximum investment willingness,” is derived by way of Bayesian decision theory.

Much criticism is also directed toward any method of calculating the cost of safety in economic terms, because so many assumptions and choices have to be made in order to arrive at a result. The result is therefore surrounded by a great deal of uncertainty. But the fact that the calculations are imperfect and the assumptions many is no reason simply to ignore the issue. Rather, the calculations need to be made more accurate, and a tool with the required degree of reliability and validity should be developed. The next sections discuss the controversy surrounding economic analyses and the requirements for adequate economic assessment analyses in greater depth.

2.9 The Controversy of Economic Analyses

The usefulness of economic analyses in operational safety has been questioned in many ways and by many people. Although economic analyses can support normative risk control decisions, they should not be used to determine the efficiency and effectiveness of prevention measures. They cannot prove that one prevention measure is intrinsically better than another. Economic analyses as regards operational safety should provide appropriate information on economic and financial aspects of safety to decision-makers, and this information should be easy to understand and interpret.

Unfortunately, economic analyses require debatable information, e.g., the price of a fatality, the price of a lost finger, the question of who pays which costs, the question of who receives which benefits, and other potentially controversial information. Using such data (or not), choices have to be made with respect to safety and prevention measures, constrained by the available safety budget of an organization. A well-known example of a possibly difficult decision, as already mentioned, is having to choose between investing in safety measures for HILP (type II) risks or for LIHP (type I) risks. How to deal with such questions is, among other things, explained in this book. Based on rigorous economic analyses, the risk reduction measures offering the best value for money should obviously be sought.

As mentioned earlier, many critiques have been formulated regarding the concept of economic approaches for safety decisions. Economic assessments can only be based on the “best estimates” available. Such estimates are obtained by using models, data, and information accompanied by many uncertainties and assumptions. Hence, the accuracy of economic analyses is often limited, and thus decision-makers should be careful when using the results.

One should also be careful that an economic approach employed to back up safety decisions is not misused, and does not merely serve to give an organization an aura of being scientific about the prevention measures taken. The complexity of an economic analysis often means that non-experts have difficulty understanding the premises and assumptions made. Economic analyses can indeed be misused by those who desire to do so, as there is plenty of room for the assumptions and methods to be adjusted to arrive at certain recommendations. This is possible for every type of risk. Furthermore, economic approaches and processes allow organizations to hide behind “rationality” and “objectivity” if the recommendations following an economic assessment are followed without thinking them through. In principle, economic assessments are not carried out merely to measure the financial aspects of safety investments. The focus of economics-based recommendations should be on selecting the optimal safety investments: the aim is to improve the operational safety of an organization in the best possible way, while taking financial aspects into consideration. Carrying out an economic assessment with respect to safety is not about the figures as such, although managers are sometimes blinded by the figures obtained. The figures are always relative and should always be checked for their meaningfulness, and they should be explained and interpreted.

The economic assessments can be based on selective information, sometimes arbitrary assumptions, and small or large uncertainties. Nevertheless, the assessments may lead to recommendations providing input for decision-makers to select certain safety investments and to prefer one option over others. Hence, the objectivity of such assessments can, in certain cases, be contested and one should be aware of simplistic and unrealistic claims and/or recommendations. The disguised subjectivity of economic analyses is thus potentially dangerous and open to abuse, if it is not recognized.

However, there is no alternative to rigorous economic assessments, unless one is being naïve about the financial aspects of operational safety in an organization. Such naivety can lead to imbalance in two directions: either one has an unclear view of the possible losses due to lack of safety, and safety investments are inadequate (which leads to an undesired and dangerous situation), or one invests much more in safety than would be recommended from a rational, economic point of view (which leads to an economically suboptimal situation).

To support and continuously improve decision-making about prevention and safety measures, economic assessments need to be made. The right way forward is therefore not to reject the economic approach in safety decision-making, but to improve the methods, data, information, concepts, and their use.

2.10 Scientific Requirements for Adequate Economic Assessment Techniques

Aven and Heide [13] indicate that a scientific method, such as, for example, an economic assessment, should be characterized by the following requirements:

  1. The scientific work shall comply with all rules, assumptions, limitations, or constraints introduced; the basis for all choices and judgments shall be made clear; and the principles, methods, and models shall be subjected to order and system, to ensure that critique can be raised and that the work is comprehensible.
  2. The analysis is relevant and useful – it contributes to a development within the disciplines it concerns, and it is useful with a view to solving the “problem(s)” it concerns or with a view to further development in order to solve the “problem(s)” it concerns.
  3. The analysis and the results are reliable and valid.

As Aven [14] mentions, the first two requirements are based on standard requirements for scientific work. Economic assessments provide decision support by systematizing the financial aspects of safety-related choices. As there is no general consensus about the how, when, and why of performing an economic analysis in relation to operational safety, many possible principles and methods are available and can be employed for a variety of purposes. The third requirement is therefore important to ensure that the results of an economic assessment (and the recommendations based on them) are reliable and valid. Reliability expresses the consistency of the “measuring instrument” (analysts, experts, methods, procedures), while validity is determined by the analysis's success at “measuring” what it set out to “measure.” The following definitions are proposed by Aven [14]: reliability is the extent to which the analysis yields the same results when repeated, and validity is the degree to which the analysis describes the specific concepts that one is attempting to describe. Aven and Heide [13] formulated more specific criteria for both concepts (in relation to risk analysis).

In the case of reliability, the criteria, applied to economic assessments, are as follows: the degree to which the economic analysis methods produce the same results at reruns of these methods; the degree to which the economic analysis produces identical results when conducted by different analysis teams, but using the same methods and data; and the degree to which the economic analysis produces identical results when conducted by different analysis teams with the same analysis scope and objectives, but with no restrictions on methods and data.

The validity criteria of an economic assessment are as follows: the degree to which the economic/financial numbers produced are accurate compared with the underlying true number; the degree to which the assigned probabilities adequately describe the assessor's uncertainties of the unknown quantities considered; the degree to which the epistemic uncertainty assessments are complete; and the degree to which the economic analysis addresses the right quantities.

Some recommendations can thus be made for decision-makers who decide to use an economic analysis to help with safety investment decisions. An economic analysis is as accurate as its input information, and it is often easier to obtain data on costs than on potential (hypothetical) benefits. Indirect and invisible financial information can play an important role in the (lack of) accuracy of an economic assessment. People carrying out the economic analysis should be objective and open-minded, such that their perception regarding safety and risks within the organization becomes as close to reality as feasible. After all, it should be kept in mind that while an economic analysis creates an image of precision, (mostly) it is not precise.

2.11 Four Categories of Data

Based upon the characteristics of the numbers, data can be classified into four categories [15]: ratio, interval, ordinal, and categorical. As certain economic models and mathematical approaches can only be used with certain kinds of data, it is important for decision-makers to understand this classification. Ratio data, for example, can be used with all statistical approaches, while categorical data can only be used with statistical tools designed specifically for such data. Hence, it is essential to identify the format of the data before deciding on the economic approach to be employed, as this format determines which approaches the data can be used with.

Ratio data are continuous data, and theirs is the only data scale on which it is possible to compare the magnitudes of values, because the scale has a true zero point. This means that if, for example, one safety investment leads to the avoidance of four accidents of a certain type, and another investment avoids eight accidents of the same type, it is correct to say that the second investment leads to twice as many avoided accidents as the first.

Interval data are a form of continuous data, but less strict than ratio data: differences between values are meaningful, but ratios of values are not. An interval scale is divided into equal measurement units and should be used as such. For example, the difference between 10 and 20 units of the scale is the same as that between 20 and 30 units. However, it is not accurate to say that 20 units of the scale is twice as much as 10 units. Utility values, for instance, could be designed and determined in a way that corresponds to interval data.

Ordinal data are rank-order data. The term “ordinal” implies that the data are ordered in some way. For example, rankings from “very low” to “very high,” from worst to best, from “strongly disagree” to “strongly agree,” or “bad, medium, good” belong to the ordinal data category. It is important to realize that it is not possible to compare the magnitudes of values if ordinal data are used. For example, if the scale “strongly disagree,” “disagree,” “undecided,” “agree,” “strongly agree” is assigned the values 1–5, and decision-maker A disagrees with an item (hence value 2 in his perception), while decision-maker B agrees with the same item (hence value 4 in his perception), this does not indicate at all that decision-maker B agrees twice as strongly as decision-maker A. The only conclusion that can be drawn from such ordinal information is that decision-maker B agrees more strongly with the item than decision-maker A.

Categorical data are sometimes also called “discrete data” and represent categories. The values assigned to categorical data serve only to differentiate between memberships of groups. Examples of categorical data are the types of risk (type I or type II), the departments of an organization, the categories male–female, and so on. Magnitude does not exist between category values. For instance, it would be absurd to say that a category numbered “1” is half as large as a category numbered “2.”
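The differences between the four scales can be made concrete in a few lines of code. This is a minimal sketch with hypothetical figures; it simply replays the examples given above for each scale.

```python
# Sketch: why the measurement scale limits which arithmetic is meaningful.
# All figures are hypothetical illustrations of the four data categories.

# Ratio data: a true zero exists, so ratios between values are meaningful.
avoided_a, avoided_b = 4, 8  # accidents avoided by two safety investments
print(avoided_b / avoided_a)  # 2.0 -- "twice as many avoided accidents" is valid

# Interval data: equal differences are meaningful, ratios are not.
u1, u2, u3 = 10, 20, 30  # utility values on an interval scale
print((u2 - u1) == (u3 - u2))  # True -- equal steps on the scale
# u2 / u1 equals 2, but "twice the utility" is NOT a valid conclusion.

# Ordinal data: only the order is meaningful.
likert = {"strongly disagree": 1, "disagree": 2, "undecided": 3,
          "agree": 4, "strongly agree": 5}
a, b = likert["disagree"], likert["agree"]  # decision-makers A and B
print(b > a)  # True -- B agrees more strongly than A, nothing more
# b / a equals 2, but "agrees twice as strongly" is NOT a valid conclusion.

# Categorical data: labels only; no order, no magnitude.
risk_type = {"type I": 1, "type II": 2}  # codes merely distinguish groups
print(risk_type["type I"] != risk_type["type II"])  # True -- that is all
```

The same numeric value can thus license very different conclusions depending on the scale it lives on, which is exactly why the data format must be established before an economic approach is chosen.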

2.12 Improving Decision-making Processes for Investing in Safety

In general, the literature indicates that company management often has difficulties with the decision-making process for operational safety investments. An important reason for this observation is that managers within organizations have a general lack of knowledge concerning the costs of accidents. Because of this lack of understanding, most of the costs related to an accident are believed to be insured and thus are believed not to play an essential part in the financial situation of the company. In addition, costs are assumed to be limited to the direct accident costs, although indirect accident costs also need to be included. Therefore company managers often believe that there is no valid reason to spend significant capital and time on the complex decision-making process of investing in safety [16].

Another reason why companies tend not to be able to see the importance of a transparent and extensive decision-making process of operational safety investments relates to the measurement difficulties of costs and benefits of prevention. An accurate calculation of many of the required data in the economic analysis is a complex and highly time-consuming process, being costly in itself.

Furthermore, the common assumption of many managers (certainly in the past, and sometimes still in the present) is that accident costs are inevitable and thus represent sunk costs. Such reasoning is of course very wrong indeed, and leads to companies performing badly or even going bankrupt. Moreover, managers may consider investments in safety and accident prevention merely as marketing or reputation expenses, meant to enhance the company's, or their own, image. In that case there will be neither time nor money for an extensive decision-making process for investments in prevention measures. In reality, taking all the benefits of accident prevention and mitigation into account when deciding on prevention investments removes a blind spot in prevention investment decision-making, an area that is very important for the profitability of any organization, and lowers the company's negative operational uncertainties (using the terminology of Figure 1.1). Chapters 5 and 6 look in more depth at cost-benefit and cost-effectiveness analyses.

There is also a psychological bias, known as the “loss aversion” principle, that prevents many managers from making the correct safety investment decisions. Due to loss aversion [17], the fact that people hate to lose, safety investments to manage and control all types of accidents, and especially precautionary investments to deal with highly unlikely events, are far from self-evident. Top managers, risk managers, and the like, being human beings like all other people, may also let their decision judgment be influenced by this psychological principle.

To have a clear understanding of loss aversion, consider the following example. Suppose you are offered two options: (i) you receive €5000 from me (with certainty); or (ii) we toss a coin, and you receive €10 000 from me if it is heads, otherwise (if it is tails) you receive nothing. What will you choose? Although the expected outcome is identical in both cases, the vast majority of people will choose option (i). They go for the certainty, and prefer to take €5000 for sure rather than gamble and receive nothing if the coin turns up tails.

Let's now consider two different options: (iii) you have to pay me €5000 (with certainty); or (iv) we toss a coin, and you have to pay me €10 000 if it turns up heads, otherwise (in case of tails) you pay me nothing. Which option would you prefer this time? Notice that the expected outcome is still identical. This time, the vast majority of people would prefer option (iv). Hence, they go for the gamble, and risk paying €10 000 with some uncertainty (there is a 50% probability that they will not have to pay anything) instead of paying €5000 for certain.

From this example, it is clear that people hate to lose and love certain gains. People are more inclined to take risks to avoid certain losses than they are to take risks to gain uncertain gains.
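The claim that the expected outcomes are identical can be verified with a few lines of arithmetic. This is a minimal sketch of the four options above; the `expected_value` helper is an illustrative name, not part of the source.

```python
# Sketch: the four options above have identical expected monetary outcomes,
# which is what makes the typical choices (i) and (iv) a framing effect
# rather than a difference in expected value.

def expected_value(outcomes):
    """Expected value of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

option_i   = [(1.0,  5000)]              # certain gain of EUR 5000
option_ii  = [(0.5, 10000), (0.5, 0)]    # coin toss on a EUR 10 000 gain
option_iii = [(1.0, -5000)]              # certain loss of EUR 5000
option_iv  = [(0.5, -10000), (0.5, 0)]   # coin toss on a EUR 10 000 loss

print(expected_value(option_i), expected_value(option_ii))    # 5000.0 5000.0
print(expected_value(option_iii), expected_value(option_iv))  # -5000.0 -5000.0
```

Since the expected values within each pair are equal, the observed preference reversal (certainty for gains, gambling for losses) cannot be explained by expected monetary outcome alone, which is the essence of loss aversion.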

Translating this psychological principle into safety terminology, it is clear that company management would be more inclined to invest in production (“certain gains”) than to invest in prevention (“uncertain gains”). Also, management is more inclined to risk highly improbable accidents (“uncertain losses”) than to make large investments (“certain losses”) to prevent such accidents.

Therefore, management should be well aware of this basic human psychological principle, and when making prevention investment decisions, managers should take this into account in their decision. The fact that human beings are prejudiced and that some predetermined preferences are present in the human mind, should thus be consciously considered in the decision-making process of risk managers.

2.13 Conclusions

Operational safety and accident risk are not adequately incorporated into the economic planning and decision processes in organizations. The business incentives for investing in operational safety are unclear. There is a need to demonstrate that safety measures have an essential value in an economic sense. A valuable question is, “To what extent would businesses refrain from investing in higher operational safety if such value cannot be demonstrated?” An over-investment in safety measures is very likely if, for instance, access to an insurance market is ignored, while an under-investment in safety measures is very likely if insurance is purchased without paying attention to the fact that the probability and consequences of accidents can be reduced by safety measures.

Abrahamsen and Asche [18] stated that the final decision regarding how much of a company's resources should be spent on operational safety measures and insurance may be very different depending on what kinds of risks are considered. It makes a difference, for instance, whether the risks are of a voluntary nature or whether they are involuntary and imposed by others. Clearly, there is more reason for society to enforce standards in the latter case. However, the decision criterion itself is independent of the kind of risk: an expected utility maximization should combine insurance, investments in safety measures, and acceptance of the direct costs of an accident such that the marginal utilities of the different actions are the same (see also Chapters 3, 4, and 7).

The fact that decision-makers have an in-built psychological preference to avoid losses should be consciously considered by these decision-makers when making precaution investment decisions.

Now that the importance of economics and economic considerations with respect to industrial safety has been explained, more basic information is needed about economic principles and how they can be applied to operational safety. The next chapter deals with the foundations of economics, and applies them to the field of operational safety.

References

  1. [1] ISO NEN-ISO 31000:2009 (2009). Risk Management – Principles and Guidelines. NEN, Delft.
  2. [2] Meyer, T., Reniers, G. (2013). Engineering Risk Management. De Gruyter, Berlin.
  3. [3] Center for Chemical Process Safety (2008). Guidelines for Chemical Transportation Safety, Security and Management. American Institute of Chemical Engineers, Hoboken, NJ.
  4. [4] Rushton, A. (2006). CBA, ALARP and Industrial Safety in the United Kingdom.
  5. [5] Hopkins, A. (2010). Failure to Learn. The BP Texas City Refinery Disaster. CCH Australia Limited, Sydney.
  6. [6] Taleb, N.N. (2007). The Black Swan. The Impact of the Highly Improbable. Random House, New York.
  7. [7] Fernández-Muñiz, B., Manuel Montes-Peón, J., Vázquez-Ordás, C.J. (2009). Relation between occupational safety management and firm performance. Safety Science, 47, 980–991.
  8. [8] Perrow, Ch. (2006). The limits of safety: the enhancement of a theory of accidents. In: Key Readings in Risk Management. Systems and Structure for Prevention and Recovery (eds Smith, D. & Elliott, D.). Routledge, Abingdon.
  9. [9] Hollnagel, E. (2014). Safety-I and Safety-II. The Past and Future of Safety Management. Ashgate, Burlington, VT.
  10. [10] Hopkins, A. (2005). Safety, Culture and Risk. The Organizational Causes of Disasters. CCH Australia Limited, Sydney.
  11. [11] Weick, K.E., Sutcliffe, K.M. (2007). Managing the Unexpected. Resilient Performance in An Age of Uncertainty, 2nd edn. Jossey-Bass, San Francisco, CA.
  12. [12] Vierendeels, G., Reniers, G.L.L., Van Nunen, K., Ponnet, K. (2016) An integrative conceptual framework for safety culture: The Egg Aggregated Model (TEAM) of safety culture, forthcoming.
  13. [13] Aven, T., Heide, B. (2009). Reliability and validity of risk analysis. Reliability Engineering & System Safety, 94, 1862–1868.
  14. [14] Aven, T. (2011). Quantitative Risk Assessment. The Scientific Platform. Cambridge University Press, Cambridge.
  15. [15] Janicak, C.A. (2010). Safety Metrics. Tools and Techniques for Measuring Safety Performance, 2nd edn. The Scarecrow Press Inc., Lanham, MD.
  16. [16] Gavious, A., Mizrahi, S., Shani, Y., Minchuk, Y. (2009). The cost of industrial accidents for the organization: developing methods and tools for evaluation and cost-benefit analysis of investment in safety. Journal of Loss Prevention in the Process Industries, 22(4), 434–438.
  17. [17] Tversky, A., Kahneman, D. (2004). Loss aversion in riskless choice: a reference-dependent model. In: Preference, Belief, and Similarity: Selected Writings (ed. Shafir, E.). MIT Press, Cambridge, MA.
  18. [18] Abrahamsen, E.B., Asche, F. (2010). The insurance market's influence on investments in safety measures. Safety Science, 48, 1279–1285.