16
Why Current Tools Are Inadequate

Why do we feel powerless in the face of the grand challenges currently affecting our planet and the sustainable environmental development of humanity? This chapter addresses the question within the framework of complex systems.

16.1. On the shortcomings of current tools: risk and probability

In what follows, we restrict ourselves to tools dedicated to describing a phenomenon: quantitative and qualitative statistics, multivariate characterization, modeling and dynamic simulation of events, etc.

Probability theory provides a rigorous mathematical framework for the notions of risk and eventuality. In finance, as in industry, it is customary to base statistical studies on the normal distribution: we describe a set of possible states of the world and assign each state a weight giving the probability that this state will occur. In practice, however, the assumption that conventional laws (Gauss, Poisson, exponential, etc.) apply is increasingly disputed. Several phenomena intervene: information is generally asymmetric, and the frequency of major events such as natural disasters or earthquakes does not correspond to what the tails of these distributions predict. For example (a small numerical sketch follows the list below):

  • – according to Philippe Henrotte [HEN 01, HEN 08], unlike games of chance, economics is a complex game in which players’ expectations influence the odds of the next roll of the die;
  • – according to Pierre Massotte [MAS 06], the distribution of major failures in computers does not follow normal statistical laws, and Weibull laws with appropriate parameters must be used. Similarly, in the context of 6-sigma, these laws are practically useless and Lévy laws have been used;
  • – according to Benoît Mandelbrot [MAN 97], different approaches and tools have been proposed to describe and analyze stock market phenomena, from Lévy laws and L-stable distributions to fractional Brownian motion and multifractal trading time.
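
To make the issue of tail frequencies concrete, here is a minimal numerical sketch (our own illustration, not taken from the works cited above); the stability index α = 1.7 and the 5-unit threshold are arbitrary choices. It compares the probability of a deviation beyond five standard units under a Gaussian law and under a heavy-tailed, symmetric Lévy-stable law:

```python
# Minimal sketch (illustration only): probability of a large deviation under a
# Gaussian law versus a heavy-tailed Lévy-stable law. The parameters
# (alpha = 1.7, unit scale, threshold of 5) are arbitrary choices.
from scipy.stats import norm, levy_stable

threshold = 5.0

p_gauss = norm.sf(threshold)                    # P(X > 5) for a standard normal
p_stable = levy_stable.sf(threshold, 1.7, 0.0)  # alpha = 1.7, beta = 0 (symmetric)

print(f"Gaussian tail      P(X > 5): {p_gauss:.2e}")   # ~3e-07
print(f"Stable (alpha=1.7) P(X > 5): {p_stable:.2e}")  # several orders of magnitude larger
```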

16.2. A thematic illustration

We have seen previously that the appearance of failures in large complex electronic assemblies does not follow normal laws. We have also seen that this phenomenon appears in microsystems. In the last example given, concerning disruptions, crashes or financial hype, the frequency and amplitude of the phenomena call for unconventional descriptive models; there we were dealing with mesosystems and macrosystems. But here again, and to our knowledge, only B. Mandelbrot’s approaches make it possible to establish links between the micro- and macrolevels.

We now step outside the professional framework of finance and industry for a moment, because we want to show, in a broader setting, that conventional statistical laws, in particular the Gaussian (normal) law, are no better suited to representing physical phenomena. Indeed, in Nature, examples abound showing that the frequency and amplitude of random phenomena, disasters and risks are higher than these laws predict (it is for this reason that we had to use Lévy laws). This is the case for the occurrence of physical disasters such as avalanches or earthquakes.

The following two representative examples show this in the universe of the infinitely large: astrophysics. Two situations are described: collisions between galaxies and stellar collisions.

16.3. What regularities?

Industrial, economic or social phenomena are time-dependent. A set of successive states forms a time series of events; this is called the trajectory of a stochastic process. In games of chance, for example, rolling a die repeatedly creates a sequence of numbers between 1 and 6. Time is of little importance here because the throws are independent: past history tells us nothing about future draws.

In reality, in industry, human beings intervene through all their choices. The approach is complex because everyone tries to predict the future by questioning the past: we analyze past variations to determine trends, or look for causes in order to take them into account in future decisions. Many specialists have therefore developed statistical analysis models with memory effects, particularly in the context of defining stock price trends.

A logical argument underpins some stock market transactions: if there were regularity in stock prices and it were possible, for example, to predict a rise in the price of a stock, people with this information would naturally buy the stock today and sell it tomorrow to pocket the gain. In doing so, they would push up today’s price through their purchases and pull down tomorrow’s price through their sales; the supposed regularity would therefore self-destruct. On the other hand, if the law of large numbers helps (that is, if human behaviors are similar), it will compensate for these irregularities and give rise to a certain order.

The first scientist to study stock price variations with an innovative approach was Benoît Mandelbrot [MAN 01]. He was able to show that the temporal evolution of stock market prices is multifractal. This is indeed a regularity, but of a new type, difficult to exploit given the very nature of fractal properties. Our purpose, finally, is to highlight structural properties whose scope is deeper than the descriptive properties found in statistics. This is a major step forward, showing that prices lie somewhere between order and disorder.
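
Mandelbrot’s full multifractal formalism is beyond the scope of this chapter, but a crude way to look for this kind of scaling regularity is to estimate a single (monofractal) scaling exponent, the Hurst exponent, from the way the dispersion of price increments grows with the time lag. The sketch below is our own simplified illustration on a simulated random walk, not Mandelbrot’s method:

```python
# Crude scaling check (illustration, not Mandelbrot's multifractal analysis):
# estimate a Hurst exponent H from std(increment at lag tau) ~ tau**H.
# For an ordinary random walk, H should come out close to 0.5.
import numpy as np

rng = np.random.default_rng(1)
log_price = np.cumsum(rng.normal(size=10_000))   # toy log-price trajectory

lags = np.arange(2, 100)
dispersion = [np.std(log_price[lag:] - log_price[:-lag]) for lag in lags]

# Slope of the log-log regression estimates H.
H, _ = np.polyfit(np.log(lags), np.log(dispersion), 1)
print(f"Estimated Hurst exponent: {H:.2f}")   # close to 0.5 here
```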

16.4. Characteristics of rational expectations in economics

Unlike simple games of chance, economics is a complex game in which, as we have just said, players’ expectations influence the probabilities of the next roll of the die. Economic agents change their behavior according to how they view the future and, in turn, their combined actions create the economic phenomena of tomorrow. A rational expectations equilibrium describes a probability distribution that takes this logical loop into account. We therefore create rules for ourselves, but still using conventional statistical approaches.

This theory has had some success in the past, but the major disruptions observed have never been reliably anticipated. Traditional statistical models therefore offer satisfactory tools in stable periods (permanent or stationary regimes) but are ineffective in the event of a crisis. With current approaches, disasters cannot be predicted; we therefore navigate by sight, and this can lead to disastrous monetary policies.

A first correction therefore consists of using Weibull distributions, whose tails can be heavier than those of the normal and other conventional laws. A second is to look for models based on other mathematical theories, such as those defined by Mandelbrot, which better integrate the notion of apparent discontinuities and can therefore lead to more reliable results, especially in crisis situations and for unpredictable, non-stationary phenomena.
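
As a purely illustrative sketch of this first correction (our own example, with arbitrary parameters), one can compare the right tail of a Weibull law with shape parameter below 1 to that of a normal law matched on the same mean and standard deviation:

```python
# Illustration: a Weibull law with shape < 1 has a much heavier right tail
# than a normal law with the same mean and standard deviation.
# The shape, scale and threshold values are arbitrary choices for the example.
from scipy.stats import weibull_min, norm

w = weibull_min(0.7, scale=1.0)            # shape k = 0.7 < 1: heavy(ish) tail
n = norm(loc=w.mean(), scale=w.std())      # normal law matched on mean and std

x = 8.0                                    # a level far out in the right tail
print(f"P(X > {x}) under Weibull(k=0.7): {w.sf(x):.2e}")
print(f"P(X > {x}) under matched normal: {n.sf(x):.2e}")   # much smaller
```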

The other approaches used in finance are alternatives that shift the problem from market risk to counterparty risk or minimal risk management. The problem is thus displaced: action is taken not to counteract an adverse event, but to absorb the event and mitigate its effect; nothing is changed about the risks themselves. We can cite:

  • – value at risk, or VaR. VaR measures the loss threshold that is exceeded only with a given probability, for example 1%; in other words, the minimum amount that an institution stands to lose in the worst 1% of cases (a small numerical sketch is given after this list). This also allows countries to be classified according to their reliability/political stability and to determine the potential loss they may cause;
  • – portfolio diversification, which is also a financial risk management tool. This is a robust and simple method, available to all investors, for reducing the effects of disruptions: the fluctuations of the individual assets are diluted or offset in a well-distributed portfolio;
  • – derivative products. These may offer an insurance contract covering the price of one or more assets: in exchange for a premium, a financial institution undertakes to insure its client in the event of a loss, such as a fall in a security or a portfolio.
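
As announced in the first item of the list, here is a minimal sketch of a historical VaR computation; the simulated return series, the portfolio value and the 1% threshold are all hypothetical stand-ins used only for illustration:

```python
# Minimal sketch of historical Value at Risk (VaR): the loss level exceeded on
# only 1% of days in the observed history. All inputs here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.standard_t(df=3, size=2500) * 0.01   # heavy-tailed toy returns

portfolio_value = 1_000_000    # hypothetical portfolio value
alpha = 0.01                   # 1% probability threshold

var_return = np.quantile(daily_returns, alpha)   # worst 1% quantile of returns
var_amount = -var_return * portfolio_value       # expressed as a positive loss

print(f"1-day historical VaR at 1%: {var_amount:,.0f}")
```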

16.5. Risk characteristics in industry

Similarly, in industry, the frequency of occurrence of rare phenomena is always a major issue. To illustrate this point and show the importance of new analytical approaches with appropriate statistical or mathematical tools, we refer to a real case study on the manufacture of large computer systems, carried out several years ago in an IBM factory. We consider a production system for the assembly and testing of highly customized and sophisticated products.

The process is under control; it is of the 6-sigma type, and the anomalies observed are infrequent. The computer has about 80 K components; it must be able to operate continuously, at the customer’s premises, for a fixed lifetime. These anomalies, discovered during very stringent tests, are very diversified and often non-critical. The SPQL (Shipped Product Quality Level) is therefore very close to 1.

In terms of test results, we obtain a series of real values giving the number of defects per machine, as detected at the end of the line, machine by machine. This series is: 1, 0, 0, 0, 1, 0, 7, etc. The average number of observed defects is about 0.7, but if we analyze the empirical distribution of the values and compare it to a binomial distribution, we may be surprised: conventional statistical curves, from the Gauss curve to the (three-parameter) Weibull distribution or the hyperbola, are not usable. A different approach to quality and performance analysis is therefore needed. For this purpose, we have prepared Table 16.1, based on the series of figures described above.

Table 16.1. Anomaly indicators

Size N | STD (σ) | Skew | σ_s | Ratio s (Skew/σ_s) | Kurtosis | σ_k | Ratio k (Kurtosis/σ_k) | Q. factor | Cp | Cpk
20 | 1.97 | 2.23 | 0.51 | 4.36 | 4.29 | 0.99 | 4.33 | 0.79 | 1 | 1
40 | 1.94 | 2.14 | 0.37 | 5.74 | 3.59 | 0.73 | 4.9 | 0.79 | 1.01 | 1.01
60 | 1.84 | 1.97 | 0.3 | 6.38 | 2.94 | 0.60 | 4.84 | 0.79 | 1.07 | 1.07
80 | 1.74 | 1.86 | 0.26 | 6.94 | 2.72 | 0.53 | 5.12 | 0.79 | 1.13 | 1.13
100 | 2.5 | 4.99 | 0.24 | 20.71 | 33.65 | 0.47 | 70 | 0.8 | 0.78 | 0.78
120 | 1.68 | 1.82 | 0.24 | 7.56 | 2.61 | 0.47 | 5.46 | 0.8 | 1.17 | 1.17
140 | 2.05 | 2.59 | 0.24 | 10.76 | 7.09 | 0.47 | 14.83 | 0.79 | 0.95 | 0.95
Size N in the first column indicates the number of computers included in the study. We take series of varying lengths, knowing that each computer uses the same family of technologies but that the configuration, like the personalization, often differs: this is “mass customized” production. From Table 16.1, we can make the following observations (a small computational sketch of these indicators is given after the list):

  1) the standard deviation of the population does not converge as N increases. Consequently, the hypergeometric distribution does not apply (the stationarity assumption probably does not hold). For a “normal” distribution, the standard deviation should be a decreasing function of N [LEV 80], since increasing N gives us more complete information;
  2) the skew (a measure of asymmetry) is positive: the deformation is located to the right of the mean, and its value increases globally with N;
  3) the kurtosis (which measures the concentration of the values relative to a normal distribution) takes high values and indicates the presence of abnormal values in the history. More generally, when the ratio is greater than 3, the data are not Gaussian: “outliers”, or non-standard individuals, are present (or several types of distribution coexist);
  4) the Q factor is representative of a process whose specification limits are centered but exceed the 3–6 sigma values, which is good… But the Cp index of process capability shows that the number of defects is higher than expected, even with a relatively well-centered production (Cpk compared to Cp).
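
As announced above, the following sketch recomputes the kind of indicators reported in Table 16.1 on a short, hypothetical defect series (the full IBM series is not reproduced in the text); the specification limits used for Cp and Cpk are likewise assumed values, since the original limits are not given:

```python
# Sketch of the Table 16.1 indicators on a hypothetical defect series.
# The series and the specification limits (LSL, USL) are assumptions made
# only for the illustration; the book's exact conventions may differ.
import numpy as np
from scipy.stats import skew, kurtosis

defects = np.array([1, 0, 0, 0, 1, 0, 7, 0, 2, 0, 1, 0, 0, 3, 0, 1, 0, 0, 2, 0])
N = defects.size

std = defects.std(ddof=1)
sk = skew(defects)
ku = kurtosis(defects)                 # excess kurtosis (0 for a Gaussian)

sigma_s = np.sqrt(6.0 / N)             # approximate standard error of the skew
sigma_k = np.sqrt(24.0 / N)            # approximate standard error of the kurtosis

LSL, USL = 0.0, 6.0                    # hypothetical specification limits
mean = defects.mean()
Cp = (USL - LSL) / (6.0 * std)
Cpk = min(USL - mean, mean - LSL) / (3.0 * std)

print(f"N={N}  std={std:.2f}  skew={sk:.2f} (ratio {sk / sigma_s:.2f})")
print(f"kurtosis={ku:.2f} (ratio {ku / sigma_k:.2f})  Cp={Cp:.2f}  Cpk={Cpk:.2f}")
```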

In this example, we cannot show with a probability close to 1 that this is a non-Gaussian or chaotic distribution. Nevertheless, we come close to the observations made by Mandelbrot on series of stock prices: conventional statistical curves predict far too low failure densities and are not representative of reality. Exceptional cases, such as breakdowns or disasters with high failure/defect rates, are more frequent than statistics predict.

We do know, however, that stable distributions such as Paul Lévy’s apply [WAL 91]. The stable distribution is a generalization of the Gaussian distribution, and its density is characterized by S(α, β, c, μ), where α is the exponent, β represents the asymmetry, c is a scale factor and μ is a position parameter. Without calculating these parameters here, α = 2 corresponds to a Gaussian distribution. In our case, the coefficient is α < 2, which means that there are large variations in the distribution tails and that conventional statistical analysis methods do not apply.
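
To see what α < 2 means in practice, one can generate samples from a stable law with a known α and recover the tail exponent with a simple Hill estimator. This is our own illustration; α = 1.6, the sample size and the number of upper order statistics are arbitrary choices, and the estimate is only indicative since it is sensitive to the choice of k:

```python
# Illustration: rough recovery of the tail exponent alpha of a Lévy-stable
# sample with the Hill estimator. alpha = 1.6, the sample size and k are
# arbitrary; Hill estimates for stable laws are known to be biased for
# moderate k, so the result is only indicative.
import numpy as np
from scipy.stats import levy_stable

x = np.abs(levy_stable.rvs(1.6, 0.0, size=50_000, random_state=2))
xs = np.sort(x)

k = 500                                  # number of upper order statistics used
threshold = xs[-k - 1]                   # the (k+1)-th largest observation
hill_alpha = 1.0 / np.mean(np.log(xs[-k:] / threshold))

print(f"Hill estimate of the tail exponent: {hill_alpha:.2f}")   # of the order of 1.6
```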

Thus, a methodology related to the analysis of time series and the identification of chaotic properties in processes could be developed and validated.

16.6. A philosophical summary: chance and necessity

Let us highlight two points: how a disaster manifests itself and how it can be represented in order to study it better. This exercise will yield positive results if it is possible to identify mechanisms, or organizational rules, for exploiting and propagating hidden orders. This exploitation will take place through the interactions existing in the system under study. It also raises the question of how much determinism and how much chance are involved in the evolution of our systems; we therefore cannot avoid addressing this aspect.

It is easy to understand that the existence of uncertain and/or unpredictable facts is generally not well accepted by decision-makers. Yet, as philosophers have often argued, one of the essential driving forces of evolution is chance. It is now more than necessary to consider evolution within a more general framework, that of Nature. Evolution, in fact, affects plants, animals and humans, as well as their by-products: industry, the economy, services, transport, etc.

Erasmus Darwin, and even Jean-Baptiste Lamarck, believed that progress, like evolution, was the result of the accumulation of knowledge, and that we could use it to change our situation and our environment. In this view, our world was originally governed by simple and immutable laws that made it possible to maintain order. This notion of progress was taken up by Charles Darwin: it can be biological [DAR 59], as well as scientific, cultural and philosophical.

At that time, progress was a corollary of evolution and, importantly, it was mainly about progressive evolution. Of course, the notion of system dynamics was not widespread at the time, and there was still no mention of deterministic chaos, fractals or catastrophe, in the topological or mathematical sense.

Evolution, moreover, proceeds from the simplest to the most complex and complicated. While the term “complex” refers to behavior, the term “complicated” is intrinsic and structural. This evolution is linked to the notion of progressive evolution and will not be detailed here, since many books are devoted to these concepts. Thus, over the last billion years of our planet’s history, progress has been prodigious: living beings have evolved in size, in functionality, in the activities they generate, etc. Our techniques for defending ourselves in order to survive, for acquiring food, then for possessing the goods needed to live, and our social organization for better controlling our environment, have paved the way for the evolution of the species we know today.

But to reach such a level of evolution, it is necessary to consider complementary assets:

  • The need for progressive evolution, based for some on natural selection and the strengthening of certain biological properties. Thus, it is the best-adapted being that survives best and that transmits, by reproduction or by another, slower means, and multiplies these newly acquired qualities (the term quality being taken here in the sense of a property, without any meliorative or pejorative connotation). This contributes to generating individuals or systems with maximum advantages, in a given competitive environment and at a given time. The process becomes interesting when one is immersed in a stable, slowly evolving and programmed universe. This last point is important because it makes it possible to accumulate experience, knowledge and logic, to consolidate certain knowledge and to deal rationally with new situations encountered (that is, computationally, in the sense of information systems). It is therefore logical, in this case, to want to link the size of the brain (the element that computes) to the level of intelligence of the being considered. Knowing that a brain has high protein needs in order to function, the most intelligent individuals are therefore the most carnivorous! Similarly, given that beyond reasoning (the level of knowledge) there are consciousness, the unconscious, the subconscious and intuition, we do not know what the brain will become and how it will feed itself in the future!
  • The evolution by “leaps”. We are referring here to the technological leaps and disruptions that our progress in research and development has brought us. It should be noted that these leaps have always existed in Nature and have been useful.
  • – For example, 65 million years ago, a cataclysm seems to have caused the disappearance of a dominant species (the dinosaurs) whose evolution had become stable or flat. Perhaps a cosmic disaster freed part of the living world from certain constraints of life and survival and allowed it to evolve into a new, more adaptive world. This highlights new eras of evolution, which constitute bounded domains in which biological or morphological convergences can take place. This complements, and is not in contradiction with, the approaches put forward by Conway Morris [MOR 03].
  • – An industrial system always evolves according to the well-known S-shaped growth curve before regressing. It is therefore essential to introduce a stress or a major change in behavior, approach, purpose or organization in order to start again on a new basis, in a direction not necessarily favored a priori, and to progress further. It is a question of introducing a catastrophe in the sense of René Thom, an idea also taken into account in Stephen Gould’s work, which highlights the role of chance and accidents in evolution [GOU 91]. He called these changes fierce evolutionary changes, and we see here that the brain does not do everything…
  • – In industrial systems, it is routine to observe that the evolution of situations is not always optimal. Indeed, the models we develop have the unfortunate habit of converging towards a local optimum. For this reason, researchers have developed optimization techniques based on regenerative approaches or on statistical physics. The aim is to overcome certain topological constraints so as to allow the trajectory of a system to cross a “pass” and reach another basin of attraction offering a better overall optimum. Said differently, we jump with a given probability from a known world into an unknown new world that could be… “better”. If the adventure does not offer better results, we come back; otherwise, we continue, and so on (a minimal sketch of such an annealing-type search is given after this list). Note that in the case of genetic algorithms, these changes are obtained by the dissociation–recombination of data vectors. This approach is not related to the size of a brain (and therefore of a program) but to the way it functions. Still in this context, let us observe the young human generations confronted with complex systems such as computers. They do not always reason (or sometimes only a little) in a rational or analytical way, but rather by trial and error! They do not seem to burden the mind with a priori knowledge; they work in case-based reasoning mode and use the brain in a different way, by conceptual unification (pattern matching), which basically corresponds better to its initial purpose [MAS 06].
  • – In Nature’s systems, the notions of deterministic chaos and fractal geometry are omnipresent. They are sources of unpredictable (random) deviation, sometimes unforeseeable, as well as of the generation of “unexpected” orders that are essential to the evolution of a system. The same is true for quantum physics, etc.
  • – In the field of computer, electronic or industrial technologies, the same applies to so-called technological leaps. These leaps make it possible to drastically change our world, as well as to offer new opportunities for solutions, and therefore to satisfy new needs. In turn, they make it possible, in all cases, to generate and create new needs and explore new worlds. We are in systems with positive feedback loops, and no one knows where or when it will stop. Indeed, based on past experience, we cannot predict which inventions will be activated in 50 years’ time. Will we still be here? How will we live? The evolution of the world is unpredictable!
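
As mentioned in the list above, here is a minimal sketch of an annealing-type search inspired by statistical physics; the objective function, the temperature schedule and all numerical values are arbitrary choices for the illustration, not a method taken from the cited works:

```python
# Minimal simulated-annealing sketch: occasionally accept a worse solution so
# that the trajectory can climb out of a local optimum, cross a "pass" and
# reach a better basin of attraction. All parameters are arbitrary.
import math
import random

def f(x):
    # Multimodal landscape with several local minima (global one near x ~ -0.5).
    return x * x + 10.0 * math.sin(3.0 * x) + 10.0

random.seed(0)
x, best = 4.0, 4.0            # start in a basin far from the global optimum
temperature = 5.0

for _ in range(20_000):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = f(candidate) - f(x)
    # Always accept improvements; accept degradations with probability exp(-delta/T).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
        if f(x) < f(best):
            best = x
    temperature *= 0.9995     # slow cooling schedule

print(f"Best solution found: x = {best:.2f}, f(x) = {f(best):.2f}")
```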

Thus, in studies related to the evolution of the world and the approaches to progress, we see that Nature has been able to combine techniques of the progressive evolution type, where chance is not very present, with jump techniques introducing the notions of chance, uncertainty, deterministic chaos, etc., to form what some call fierce evolution. The two approaches are complementary, and each brings its own set of advantages and balances. They must therefore co-exist in a peaceful way.

We will draw a parallel and advance the same strategies in the areas of finance and risk. Wanting to model everything and/or store everything in information and decision-making systems is attractive and reassuring, but… ineffective! Indeed, we will never avoid all of the phenomena and behaviors that we have described many times in our books.

In stationary periods, it is certainly useful to have conventional tools and interactive decision support systems (SIADs) that allow you to act or not to act. But in the event of unforeseen and uncertain circumstances, it is much more useful to develop the ability to react. Such abilities make it possible, in a proactive spirit, to anticipate. The notions of diagnosis are then simplified because they lead to binary, Go/No-go situations with reflex functions resulting from relevant, coherent, simple and fast learning.

What has just been developed leads us to ask a question: are we (as a society) ready to accept chance and to survive the eventualities and disasters of life? In everyday life, examples abound showing that very few people are willing to accept fatalism, chance and risk. We live in an ultra-protected era (even though economic and social differences between citizens remain very important).

On another level, the professional one, error is not accepted, although everyone knows the precept errare humanum est, perseverare diabolicum (to err is human, to persist in error is diabolical). Failure is not accepted as a matter of chance or as a tribute paid to evolution: the one who has failed is pointed at by society, by banks and by employees comfortably settled in their offices, with the hidden mindset of a predator.

We live in complex worlds, where chance, unpredictability and uncertainty are part of our lives, and it is difficult to implement measurement and control systems because our environment, even when it evolves in a stable way, will always be subject to eventualities and disasters. It is not a question of remaining indifferent to what surrounds us, but of showing, as has been repeated several times, common sense, discernment, intuition and emotion. It is therefore human qualities, which are more a matter of the “I” than of the “method”, that we need and that complement existing approaches.

16.7. The environment’s new challenge

As noted above, we live in complex worlds where chance, unpredictability and uncertainty are part of our lives, and where our environment, even when it evolves in a stable way, will always be subject to eventualities and disasters.

The situation can be clarified by the notion of “common goods”, which is pervasive in our environment and unconsciously used by everybody. According to [DIM 15], common goods are defined in economics as goods, resources and/or products that are rivalrous in consumption and/or non-excludable. This definition alone does not unlock the situation; the notion of property (what is proprietary) therefore needs to evolve.

With regard to a current and concrete example such as climate stability, we can consider conventional examples of common goods such as water, including the oceans, and air. Water and air can easily be polluted: water flows can be exploited beyond sustainability levels, and air is degraded by fossil fuel combustion, whether from motor vehicles, smokers, factories, or wood and forest fires. These pressures are mainly caused by human activity, unconsciousness, greed and laxity.

In a production process, natural resources and materials are transformed into finished products such as food, shoes, toys, furniture, cars, houses and televisions. The activities leading to these products may be associated with pollution; this is simply due to the energy transformation processes involved (on the second law of thermodynamics, see [MAS 15b]). Thus, urban people are not in a suitable position to criticize farmers, to manage the risks alone or to instruct the rest of the population on how to behave. In the meantime, the environment is degraded by the use of the products.

Another example relates to fish stocks in international waters and the difficulty of elaborating regulations, specifications and limitations. As long as we allow ourselves to consume, every year, far more resources than Mother Earth can produce over the same time interval, the comfort provided to some contrasts with, for example, fishermen’s hunger.

Good examples should always come from the top, and benefits (if and when there are any) should be shared by the whole of society [MAS 17b]. Through the above examples, we have described situations in which economic players withdraw resources to secure short-term gains (or conveniences) without regard for long-term consequences. For instance, how will we accommodate seven billion inhabitants under sustainable conditions and at the best societal cost? It is for such reasons that the expression “tragedy of the commons” was coined, and nobody yet knows how to control and reduce the associated risks.

Going further, forest exploitation leads to barren lands, and overfishing reduces the overall fish stock; both eventually diminish the yields that can be withdrawn periodically. Typically linear thinking (quantities, volumes, yields) must be replaced by “frequency thinking” (the volume withdrawn per period relative to the regeneration capacity), which entails building more global economic models. No one can, for instance, claim the right to deforest a country, or even their own private territory, at will. A nation certainly has the right to exploit its lands, but it has no right to endanger the common goods of the (i.e. our) planet.
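
A minimal sketch of this “frequency thinking” (our own toy model with hypothetical numbers): a renewable stock that regenerates logistically stabilizes when the withdrawal per period stays below its maximum regeneration rate, and collapses when it exceeds it:

```python
# Toy common-pool resource model (hypothetical numbers): logistic regeneration
# with a fixed harvest per period. Maximum regeneration is r*K/4 = 100 per
# period, so a harvest of 80 is sustainable while a harvest of 120 is not.
def simulate(stock, capacity, growth_rate, harvest, periods):
    for _ in range(periods):
        regeneration = growth_rate * stock * (1.0 - stock / capacity)
        stock = max(stock + regeneration - harvest, 0.0)
    return stock

K, r = 1000.0, 0.4
print(simulate(800.0, K, r, harvest=80.0, periods=100))    # settles near 724: sustainable
print(simulate(800.0, K, r, harvest=120.0, periods=100))   # drops to 0: collapse
```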

This concluding discussion enables us to redefine the content of a risk and the way we have to manage it. We consider common goods to be an exploitable form of renewable resource, such as fish stocks, grazing land, etc., which must be exploited sustainably. These common-pool resources must be subject to sustainability and ethics, and require the broadest possible institutional arrangements and consensus, in the form of shared and collaborative “common-pool resources” management.
