Chapter 4

Complexity, Emergence, Resilience …

Jean Pariès

The future is inevitable, but it may not happen

Jorge Luis Borges

Introduction

If safety is simply taken as the ability to operate without killing (too many) people, organizational resilience does not necessarily mean safety. If resilience is taken as the intrinsic capacity of an organization to recover a stable state (the initial one or a new one) that allows it to continue operations after a major mishap, or in the presence of continuous stress, then the ability of an organization to ensure its own survival and continued operations against adverse circumstances may well imply being momentarily unsafe for its members or other stakeholders. A good example is a country at war, defending itself against military aggression: resilience may imply accepting the loss of some ‘lives’ among its people. However, from a different perspective, one widely developed in this book, safety can also be seen as a form of resilience, i.e., as the result of the robustness of all the processes that keep the system safe against all kinds of stressors, pathogens, threats and the like.

A further strong contention of this book is that organizational resilience is an emergent property of complex systems. The goal of this short chapter is to elaborate on the notion of emergence. It will not discuss the notion of resilience itself, which is extensively addressed elsewhere throughout this book. The focus will rather be on the relationship between complexity and emergence. The questions that will be addressed include: what does it mean that resilience (or any other property) is an emergent property, what is the relationship between emergence and complexity, and is a more complex system necessarily less resilient, as Perrow (1984) suggested? The goal is to provide the reader with a framework and some insight (and, hopefully, some open questions) about these transverse notions, in order to facilitate the comprehension of the discussions in the other chapters about resilience and the challenge of engineering it into organizations.

Emergence and Systems

In the Middle Ages, in the alchemists’ laboratories, persistent researchers sought the elixir of long life – the infallible remedy for all diseases – and the philosophers’ stone, able to trigger the transmutation of common metals into gold (Figure 4.1). While they did not really succeed, they discovered a lot of less ambitious properties of matter and, above all, they realised that mixtures could have properties that their components did not. In other words, they discovered the power of interaction. And progressively, alchemists became chemists.

Since then, science has been reductionistic. Scientists have been decomposing phenomena, systems and matter into interacting parts, explaining properties at one level through laws describing the interaction of component properties at a lower level of organization. And because more and more components (e.g., atoms) are shared by all phenomena as we go deeper into the decomposition, science has been able to reduce complexity: a multitude of phenomena are explained by a few laws and by the particular parameters of the case. Conversely, as in chess, where a small number of rules can generate a huge number of board configurations, a few laws and parameters generate the diversity of properties and phenomena. One could say that properties emerge from the interaction of lower-level components. As Holland (1998) puts it, ‘emergence in rule-governed systems comes close to being the obverse of reduction’.

So there is a strong relationship between emergence and complexity, as well as between complexity and explanation. ‘Emergence’ is what happens when we try to understand the properties of a system that exceeds the level of size and complexity our intellect can grasp at once, so that we decompose the system into interacting component parts. But an obvious question immediately arises: can we ‘explain’ all the properties of the world (physical, biological, psychological, social, etc.) through such a reduction process? Clearly, the least one can say is that the reductionistic strategy is not equally efficient for all aspects of the world.


Figure 4.1: The alchemists’ laboratory

If I try to explain the weight of my alarm clock from the weight of its constituents, it works pretty well. When I try to explain the capability of my alarm clock to wake me up at a specific time, it is clearly no longer a matter of summing up individual components’ properties: ‘the whole is greater than the sum of its parts’. But I can still figure out and describe how the components’ interactions complement each other to wake me up.

Now, if one tried to explain my waking up from the properties of the atoms composing my body, the challenge might well be simply insuperable. Only a living being can wake up. No atom of my body is living, yet I am living (Morowitz, 2002). Life, but also societies, economies, ecosystems, organizations and consciousness, have properties that cannot be deduced from their component agents’ properties, and even have a rather high degree of autonomy from their parts. Millions of cells in my body die and are replaced every day, but I am still myself.

So why is it that we cannot derive these properties from component properties? It may be that we could in principle – that every aspect, property and state of the world is entirely understandable in the language of chemistry – notwithstanding our computing limitations, which would make any mathematical and computational approach overwhelmingly long and complex. The famous French mathematician and astronomer Pierre-Simon Laplace nicely captured this vision through his ‘demon’ metaphor:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future, just like the past, would be present before its eyes. (Laplace, 1814)

The contention here is that the states of a deterministic macro system, such as a collection of particles, are completely fixed once the laws (e.g., Newtonian mechanics) and the initial/boundary conditions are specified at the microscopic level, whether or not we poor humans can actually predict these states through computation. This is one possible form of relationship between micro and macro phenomena, in which the causal dynamics at one level are entirely determined by the causal dynamics at lower levels of organization.

The notion of emergence is then simply a broader form of relationship between micro and macro levels, in which properties at a higher level are both dependent on, and autonomous from, underlying processes at lower levels. Bedau (1997, 2002) has distinguished three categories of emergence.

•  Nominal emergence: when macro-level properties, while meaningless at the micro level, can be derived from assembling micro-level properties. Innumerable illustrations of nominal emergence range from physics (e.g., from atomic structure to the physical properties of a metal) to all designed systems (e.g., see the alarm clock example above);

•  Weak emergence: when an ‘in principle’ microscopic account of macroscopic behaviour is still possible, but the detailed and comprehensive behaviour cannot be predicted without performing a one-to-one simulation, because there is no condensed explanation1 of the system’s causal dynamics. Illustrations of weak emergence are provided by insect swarms, neural networks, cellular automata, and the like (a minimal cellular automaton sketch follows this list);

•  Strong emergence: when macro-level properties cannot be explained, still less predicted, by any micro-level causality, even in principle. Strong emergence would defeat even Laplace’s unlimited computing demon. The existence of strong emergence is a contentious issue, as it is inconsistent with the common scientific dogma of upward causation, and introduces the presumption that holistic, downward causation constrains the underlying laws of physics, imposing organizing principles on components. For a very exciting contribution to this discussion, for cosmology as well as for biology, see Davies (2004). He makes the point that a system above a specific level of complexity (whose order of magnitude can be computed) cannot be entirely controlled by upward causation, because of the existence of fundamental upper bounds on information content and information processing rate.
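The difference between nominal and weak emergence can be made concrete in a few lines of code. The sketch below is a toy illustration of ours (not part of Bedau’s account), implementing Wolfram’s elementary cellular automaton known as Rule 110: the micro-level rule is trivial and fully known, yet the macro-level patterns it generates are so irregular that, in general, the only way to know the configuration at step t is to actually run all t steps – exactly the ‘one-to-one simulation’ that characterises weak emergence (see also footnote 1). The grid width and step count are arbitrary.

WIDTH, STEPS = 64, 32

# Rule 110: the new state of a cell depends only on its left neighbour,
# itself, and its right neighbour.
RULE = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Apply Rule 110 to every cell (periodic boundary conditions)."""
    n = len(cells)
    return [RULE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single 'on' cell as the initial condition
for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)

Eight micro-rules generate, step after step, macro-level structures that nothing in the rule table ‘contains’; there is no condensed formula that shortcuts the simulation.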

From Emergence to Resilience

The above discussion about different types of emergence may provide some insight into the notion of resilience. If, as discussed elsewhere in this book, organizational resilience is an emergent property of complex organizations, one could build on the above-mentioned taxonomy of emergence and define nominal, weak and strong organizational resilience. Among other benefits, doing so could give a broader perspective on the respective contributions of the ‘sharp end’ and the ‘blunt end’ in the aetiology of accidents. One question is whether the focus on ‘organizational psychopathology’ has been overplayed during the last few years’ effort to go beyond the front-line operators’ error perspective in the understanding of accidents, and whether ‘we should redress some of our efforts back to the human at the sharp end’ (Shorrock et al., 2005). In his book Managing the Risks of Organizational Accidents, Reason (1997) warned that ‘the pendulum may have swung too far in our present attempts to track down possible errors and accident contributions that are widely separated in both time and place from the events themselves’. Differentiating the construction of organizational resilience according to the nature of its underlying emergence process may be a way to reconcile the sharp end and blunt end perspectives into a better integrated vision.

So what would ‘nominal’, ‘weak’ and ‘strong’ organizational resilience stand for? Inspired by the discussion in the previous section, one could suggest the following definitions.

•  Nominally Emergent Resilience (NER) would refer to resilience features of an organization resulting from a ‘computable’ combination of individual agent properties, which may well themselves be complex emergent properties, such as consciousness, risk awareness, etc. NER would include all that contributes to the reliability, and also to the resilience, of local couplings between individual agents and their environment. It would cover individual human features (e.g., individual cognition, error management, surprise management, stress management), as well as organizational measures that act on these features (e.g., the design of error-tolerant environments). It would also include the interactions between individual agents that can be rationally perceived as facilitating collective resilience at different scales, from small groups to large communities: role definition, shared procedures, communication, leadership, delegation, cross-monitoring, and the like. Finally, it would include most aspects of a Safety Management System.

•  Weakly Emergent Resilience (WER) would refer to the resilience features of an organization resulting from a combination of individual agent properties such that, although in principle ‘computable’, the detailed and comprehensive resilient macroscopic behaviour cannot be predicted without performing a one-to-one simulation. This will happen with complex self-regulated systems in which feedback and feedforward loops interact to govern the behaviour of the system (Leveson, 2004). It will also typically happen for some, although not all, large-scale (large population) systems. Indeed, such systems can remain ‘simple’ if mutual relationships between basic components tend to cancel out individual deviations: individual variety is then submerged by statistical mean values, so that microscopic complexity disappears at macroscopic scales (e.g., Boltzmann’s statistical interpretation of entropy in physics). Large-scale systems may also become ‘complex’ when individual variety at microscopic levels is combined and amplified so that it creates new properties at the macroscopic level. Simplified illustrations of these phenomena are provided by insect colonies (‘smart swarms’; see the box below, and Bonabeau et al., 1999, 2000), neural networks, collective robotics intelligence (Kube & Bonabeau, 2000), and the like. A key mechanism here is the indirect coupling between component agents through the modifications that their individual behaviour introduces into the shared environment.

Individual ants run random trips, and put down a chemical trace (pheromone) on their way (Figure 4.2). The pheromone works as probabilistic guidance for the ants: they tend to follow it, but often lose the track. When an ant comes across a prey, it will automatically bite it and follow its own trace back to the nest. As it still puts down pheromone, it will also reinforce the chemical track, increasing the chances that another ant follows the track to the prey. From a small set of very simple individual behavioural rules (‘follow pheromone’, ‘bite prey’), a self-organizing, auto-catalytic process is thus triggered, amplifying guidance to the prey, while the individual ‘errors’ of ants losing the track allow for the discovery of new prey. A memory of positive past experience is literally written into the environment, leading to an efficient and adaptive collective strategy to collect food. A ‘collective teleonomy’ seems to emerge, while no direct communication has taken place and no representation whatsoever of the task is accessible to individuals (after Bonabeau et al., 1999, 2000).
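The auto-catalytic mechanism described in the box lends itself to simulation. The sketch below is a deliberately minimal model of ours (inspired by Deneubourg-style ‘double bridge’ experiments; all parameter values are purely illustrative): simulated ants repeatedly choose between a short and a long path to a prey according to pheromone concentration alone, reinforce the path they used, and the pheromone slowly evaporates.

import random

pheromone = {'short': 1.0, 'long': 1.0}  # initial trail strengths
LENGTH = {'short': 1.0, 'long': 2.0}     # relative path lengths
EVAPORATION = 0.01                       # the residual disorder
TRIPS = 1000

for _ in range(TRIPS):
    # Probabilistic guidance: an ant tends to follow the stronger trail.
    total = pheromone['short'] + pheromone['long']
    path = 'short' if random.random() < pheromone['short'] / total else 'long'
    # A returning ant reinforces its own trail; a shorter path gets
    # more reinforcement per trip, which is what tips the balance.
    pheromone[path] += 1.0 / LENGTH[path]
    # Evaporation erases old traces and keeps exploration possible.
    for p in pheromone:
        pheromone[p] *= 1.0 - EVAPORATION

print(pheromone)  # the short-path trail ends up dominating

No ant computes path lengths; the collective preference for the shorter path emerges from coupling through the shared environment. Lowering EVAPORATION towards zero makes the colony increasingly path-dependent, freezing in early random fluctuations – a small-scale version of the ‘too much order’ failure mode discussed below.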

Ants have virtually no brains (on the order of a few hundred thousand neurons, versus about one hundred billion for humans). One could therefore dryly reject the legitimacy of any comparison, even metaphoric, between insect swarms and human societies. Nevertheless, humans do exhibit collective behaviour based on the interaction of rather simple individual rules. Group or crowd behaviour during aircraft, ship, tunnel or building evacuation, or simply daily pedestrian flows or automobile traffic jams, can be and have been modelled using software simulators based on the interaction of low-level individual rules (e.g., Galea, 2001). Similar, but much more complex, interactions are behind the notion of distributed cognition (Hutchins, 1996). And from a more philosophical point of view, the anthropologist René Girard (1961) has been able to build a fascinating, although controversial, holistic theory of human behaviour (including myths, religions, power, laws, war, and so on) based on the mere idea that humans tend to imitate each other.


Figure 4.2: Ants and smart cooperation

From a resilience perspective, this category of complex systems has interesting properties. First, their operational efficiency is genuinely emergent: no ‘process representation’, no understanding of the collective goal, no grasp of the conditions for collective efficiency is needed at the individual agent level. The interaction of agent components produces an aggregate entity that is more flexible and adaptive than its component agents. Second, they develop and stabilise ‘on the edge of chaos’: they create order (invariants, rules, regularities, structures) against chaos, but they need residual disorder to survive. If no ant ever got lost while following the pheromone track to the prey, no new prey would be discovered soon enough. Too much order leads to crisis (e.g., epileptic seizures in the brain are due to a sudden synchronization of neuronal electrical activity). This leads to a vision of accidents as a resonance phenomenon (Hollnagel, 2004). So these systems/processes are not optimised for a given environment: they do not do the best possible job, but they do enough of the job. They are ‘sufficient’, to use René Amalberti’s words elsewhere in this book (Chapter 16). They keep sub-optimal features, trading efficiency for the ability to cope with variation in their environment. Due to their auto-catalytic mechanisms, they have non-linear behaviour and go through cycles of stability and instability (like the stock market), but they are extremely stable (resilient) against aggressions, up to a certain threshold at which they collapse. More precisely, their ‘internal variables have been tuned to favour small losses in common events, at the expense of large losses when subject to rare or unexpected perturbations, even if the perturbations are infinitesimal’ (Carlson & Doyle, 2002).

One of the variables that can be tuned is the topology of the system, for example the topology of its communication structures. Barabási & Bonabeau (2003) have shown that scale-free communication networks are more resistant to random node destruction than random networks, while they are more sensitive to targeted (malicious) attacks.

Scale-free (or scale-invariant) networks have a modular structure, rather than an evenly distributed one: some of their nodes are highly connected, while others are poorly connected. Airline ‘hub’ route networks and the World Wide Web (Figure 4.3) are examples of such networks.

Recent work on network robustness has shown that scale-free architectures are highly resistant to accidental (random) failures, but very vulnerable to deliberate attacks. Indeed, the highly connected nodes provide many alternative paths between any two nodes in the network, but a deliberate attack against these nodes is obviously far more disruptive.
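This result is easy to reproduce in simulation. The sketch below is our own illustration of the Barabási & Bonabeau finding (using the Python networkx library; the network size and removal fraction are arbitrary choices). It compares how the largest connected component of a scale-free network and of a size-matched random network shrinks under random node failures versus a targeted attack on the most connected nodes.

import random
import networkx as nx

def largest_component(G):
    """Size of the largest connected component, a crude measure of how
    much of the network still 'works'."""
    return max(len(c) for c in nx.connected_components(G))

def attack(G, fraction, targeted):
    """Remove a fraction of nodes, either the most connected ones
    (targeted) or uniformly at random, and return the surviving
    largest component."""
    G = G.copy()
    if targeted:
        victims = [n for n, _ in sorted(G.degree, key=lambda kv: -kv[1])]
    else:
        victims = random.sample(list(G.nodes), G.number_of_nodes())
    for n in victims[:int(fraction * G.number_of_nodes())]:
        G.remove_node(n)
    return largest_component(G)

N, FRACTION = 2000, 0.05
scale_free = nx.barabasi_albert_graph(N, 2)  # hub-dominated topology
random_net = nx.gnm_random_graph(N, scale_free.number_of_edges())

for name, G in (('scale-free', scale_free), ('random', random_net)):
    print(name,
          '| random failures:', attack(G, FRACTION, targeted=False),
          '| targeted attack:', attack(G, FRACTION, targeted=True))

Typically, the scale-free network withstands the random failures almost unscathed but loses much more of its largest component under the targeted attack, while the random network degrades roughly comparably in both cases.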

Finally, Strongly Emergent (organizational) Resilience (SER) would refer to the (hypothetical) resilience features of an organization that cannot be explained by any combination of individual agent properties, even in principle. Although there is no evidence that such properties exist, recent developments in complexity science suggest that they are possible, provided a sufficient level of complexity is reached, which is undoubtedly the case for living beings (Davies, 2004). A candidate for this status may be the notion of culture, although it is debatable whether culture is a binding behavioural frame above individual behaviour, or merely a crystallised representation of actual behaviour. Another good candidate would certainly be all the recursive self-reference phenomena, such as consciousness. Human social behaviour is increasingly self-represented through the development of mass media, hence partially recursive: anticipations modify what is anticipated (e.g., polls before a vote). This leads to circular causality and paradoxes (Dupuy & Teubner, 1990), and opens the door to some form of downward causation or, more accurately, to a kind of retroactive causation: a representation of the future can change the present. The certainty that a risk has been totally eradicated will most probably trigger behaviours that reintroduce the threat.


Figure 4.3: Scale-free networks

A key notion for understanding the stabilization of such circular causal dynamics is that of an attractor: a fixed point in the space of possible states, for example a state of public opinion that remains stable when opinion is informed of its current state. A particularly interesting kind of attractor is known as the ‘self-fulfilling prophecy’, i.e., a representation of the future that triggers, in the present, actions and reactions which make that future happen. From a risk management perspective, the key question is how to keep the concern for risk alive when things look safe. What we need to do is to introduce ‘heuristics of fear’, in order to ‘build a vision of the future such that it triggers in the present time a behaviour preventing that vision from becoming real’ (Dupuy, 2002). In other words, a safety manager’s job is to handle irony: the core of a good safety culture is a self-defeating prophecy, and a whistle-blower’s ultimate achievement is to be wrong.
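The attractor dynamics of such prophecies can be caricatured in a few lines. In the toy model below (our own construction; the response function and its coefficients are purely illustrative), a published risk forecast changes behaviour – perceived danger induces caution, perceived safety induces complacency – and the forecast is then updated to the risk level it has itself produced. Iterating the loop converges to the fixed point, i.e., the attractor of this circular causality.

def realized_risk(forecast):
    """Assumed behavioural response: a high published risk induces caution,
    hence a low realized risk, and vice versa (illustrative numbers)."""
    return 0.8 * (1.0 - forecast) + 0.1

risk = 0.9  # an initially alarmist forecast
for _ in range(30):
    risk = realized_risk(risk)  # the forecast becomes what it produced
print(round(risk, 3))  # damped oscillation towards the attractor, r = 0.5

Each single pass through the loop is self-defeating (an alarmist forecast produces a reassuring outcome, and conversely); the attractor is the point at which the prophecy exactly reproduces itself.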

Conclusion

We have seen that a hierarchy of complexity can be described, each level corresponding to a specific notion of emergence. Human societies and organizations exhibit an overwhelming degree of complexity, and obviously cover the whole hierarchy. As an emergent property of these complex systems, resilience should also be considered through a similar taxonomy. Hence we have outlined a differentiation between nominal, weak and strong organizational resilience, and discussed what these notions would cover. Standing back a little, it seems that most of the methodological efforts devoted to improving the robustness of organizations against safety threats, particularly within the industrial domain, have focused on nominal resilience. Compared to the actual complexity of real-world dynamics, current approaches to safety management systems look terribly static and linear. It may be time for safety management thinkers and practitioners to look beyond the Heinrich (1931) domino model, and seek inspiration from complexity science and systems theory. This is, among other things, what this book is modestly but bravely trying to initiate.

1  The complexity of an object can be expressed by its Kolmogorov measure, which is the length of the shortest algorithm capable of generating the object. In the case of weak emergence, the system’s dynamics are algorithmically incompressible in the Kolmogorov sense: the fastest simulator of the system is the system itself.
