This chapter is intended to provide an update on issues raised by many project managers (some of them affiliated with the Project Management Institute (PMI)) working in the area of information systems and business intelligence. They often state that our decision support systems (DSS), in a broad sense, are continuously growing and creating more and more information (that is to say, their related entropy is increasing). In addition, they consider this phenomenon irreversible because technical advances require us to move forward.
This assertion is questionable: in any engineering task intended to develop a new product or innovative service, “sustainability” has become the main factor to be considered when evaluating the relevance of the human activity. Indeed, “sustainable” development refers to a mode of technological development that preserves the resources and environment available to future generations. Problems arise from the fact that many people talk about sustainability but are unable to measure it or compare it to reference values: it is of great importance to see in which direction progress develops.
Currently, the only way to evaluate and measure the sustainability of a system, and hence its adequacy with respect to new societal constraints, is to measure the “entropy generation” of the system [ROE 79]. It will be expressed either in a qualitative way (positive or negative) or through a variation ΔS (S being the entropy of the system).
As a reminder, the entropy generation of our society, during the last centuries of the industrial era, is mainly due to:
In comparison, just to appreciate how people think in terms of ecology, and thus of the preservation and characteristics of nature, we could say:
“Nature runs on sunlight.
Nature uses only the energy it needs.
Nature fits form to function.
Nature recycles everything.
Nature rewards cooperation.
Nature banks on diversity.
Nature demands local expertise.
Nature curbs excesses from within.
Nature taps the power of limits.”
Currently, society makes judgments concerning our industry, economy and governance based on the above views, even if these are sometimes contradictory to its philosophy. As soon as a huge disequilibrium appears, people do not perform a systemic analysis of the situation (e.g. human or economic development with 10 billion inhabitants); they just condemn a partial political decision which does not fit the above constraints.
For these reasons, and to better develop sustainable systems, it is essential to explore some examples to see how these concepts can be applied, to analyze the underlying mechanisms and to reproduce certain phenomena and characteristics of these systems, knowing that in nature, as in life or in our information systems, the basic mechanisms are universal, provided that certain transpositions are made. Such an approach allows us to better understand and act in everyday affairs.
Right now, the only way to assess the sustainability of our systems under development is through the so-called “entropy generation”: the objective is to provide society with “reduced entropy generation” systems.
It is neither a fashionable trend nor a business opportunity, since the future of all humans is involved. It is a paradigm change, a question of ethics and awareness, and lastly a set of drastic changes from standards, policies and practices, to our own values, consciousness and ways of life.
In this chapter, we will study only some aspects of this issue related to information, information systems and decision-making, by linking them to notions of time, quantum fluctuations and entropy.
This is especially important since we talk about worldwide collaboration, while everything is interdependent and involves each of us.
In order to make this material accessible to readers not familiar with physics, some examples will be used as illustrations, avoiding indigestible theoretical demonstrations.
The concept of system sustainability is often linked with system complexity. In our lives, “sustainability” expresses the fact that people are afraid of losing control of a complex phenomenon; it is also associated with the need to preserve a situation in the face of apparently irreversible changes.
Under these conditions, is sustainability a marketing trap? Is it a real concern? Considering what is happening in our world, we cannot be sure yet because complexity is the normal evolution of nature.
What we know is that all the systems surrounding us are now integrating some of these concepts in their design, engineering and development.
Hereafter, we are only interested in the evolution of the technologies involved in the decision-making and control of our industrial and economic systems. Complexity is a pervasive concept which requires a permanent adaptation of our DSS.
As we can see in Figure 11.1, there is the integration of two different ways of thinking and a progressive development of many associated sciences and technologies:
This last step could be termed “convergence theory”: it implies working in a transdisciplinary and interdisciplinary way, to integrate and assimilate all the complementary sciences defined above. This was the aim of the Advanced Technology Group (ATG) at IBM, devoted to the competitiveness of European development and manufacturing centers during the 1990s. It is the only way to understand global challenges, to prepare paradigm changes and to develop innovative, best-suited technologies. Today, this is partly covered by the so-called business intelligence technologies (though an overly conventional approach based on quantitative and qualitative database (DB) techniques is still involved).
As stated before, and keeping in mind Figure 11.1, we will develop some aspects related to the sustainability, complexity and entropy concepts of any complex system. Indeed, questions we have in mind are:
In the context of our work, entropy measures the lack or loss of information, the uncertainties, disorders and inconsistencies in the generated information, and the system complexity (in terms of the variety of resulting behaviors). It also covers the indeterminable number of information items and their disparate possible interpretations, the loss of cognitive structures, etc.
In this section, we will formally introduce the role of the brain in any information system. For now, we can say that the brain is the support of most thought mechanisms and processes. Here, entropy characterizes the knowledge we have about an object or the world; it thus conditions the possibility that anybody, i.e. any living agent, may have a consciousness and a more or less developed thought. The more diverse, vague and scattered the generated knowledge items are, the more the entropy increases.
The well-known principle “garbage in, garbage out” still holds: we cannot properly apply our mind and consciousness if the entropy of a given system is too large. Conversely, we can associate with the entropy a kind of probability that enables us to perform reliable predictions, to elaborate and make good decisions, or to obtain storable and thus reversible phenomena. Indeed, if everything is “well-ordered”, described and traceable, the evolution of a system can be followed up, and it is possible to go back in a process and change its future track.
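This link between order, probability and prediction can be illustrated with Shannon's information entropy. The sketch below (in Python, with purely illustrative event logs) shows that a well-ordered stream of records carries a much lower entropy than a scattered one:

```python
import math
from collections import Counter

def shannon_entropy(items):
    # Shannon entropy (in bits) of the empirical distribution of the items
    counts = Counter(items)
    n = len(items)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A "well-ordered" event log: few distinct states, easy to predict and trace back
ordered = ["ok"] * 8 + ["retry"] * 2
# A "scattered" log: many disparate states, reliable prediction becomes impossible
scattered = ["ok", "retry", "fail", "timeout", "unknown", "n/a", "err3", "??", "ok?", "ok"]

print(round(shannon_entropy(ordered), 3))    # 0.722 (low entropy)
print(round(shannon_entropy(scattered), 3))  # 3.122 (high entropy)
```

The lower the entropy of the log, the more reliably its next records can be predicted, which is exactly the condition for a process to be traceable and reversible.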
COMMENT 1.–
This first comment is of utmost importance in risk management. Several politicians and media leaders now say that it is unforgivable not to anticipate industrial disasters. This statement is quite inappropriate, since unpredictable events cannot be anticipated. Moreover, we do not know whether to blame the bad faith of some Chief Executive Officers or the ignorance of those who spread rumors and speculative information. This is based on comments related to big events such as the Apollo 13 syndrome, the 2010 BP oil drilling problem in the Gulf of Mexico, the Fukushima nuclear plant catastrophe in 2011 and even the AF447 airplane crash. It is quite easy to criticize post-disaster, especially when the event is a replication of something already known. But the “just-doing-out-of-necessity” syndrome has to be revisited in any process where nonlinear dynamics and a high level of entropy apply: under these conditions, a disaster is always an occurrence of a phenomenon without memory. Moreover, in terms of sustainability, we cannot ignore that anticipation is a costly process (in terms of entropy) whose coverage and reliability are very low.
COMMENT 2.–
Consistency of DSS modeling. The evolution of a software application generally runs into both of Gödel’s incompleteness theorems [GÖD 31], which concern the inherent limitations of axiomatic approaches, whether in mathematical logic or in the modeling of formal reasoning:
Both these theorems are directly related to the evolution of software applications and interactive systems for decision support. In fact, they indicate that:
More practically, it has been known for many years that systems keep evolving toward greater organization and complexity; we also know that mathematics, despite its very high power of abstraction, has a limited modeling capacity; finally, we know that so-called “expert” or intelligent systems (such as knowledge-based systems (KBS)) cannot explain everything with a formal knowledge representation.
Moreover, the more we advance in this KBS approach, trying to represent, model and explain everything in a formal system, the more we will sooner or later fall into one of the following pitfalls:
By analogy, Gödel’s incompleteness theorem also shows that using formal logic, that is to say the conventional approaches we apply, a formal machine cannot, on its own, dynamically detect feedback loops and repetitive structures already experienced in the past but unexpected in the future, unless they were planned beforehand.
As a result, in a formal world, complication and complexification are a limit to sustainability.
COMMENT 1.–
When talking about the orderliness of a low-entropy system, we mean an order which is clear and obvious. Thus, an industrial process, a fractal factory, a business organization and behavioral rules at an individual level (such as ethics) all form low-entropy processes. This is because the number of arrangements or possible configurations, corresponding to the assembly of rules, components or elements, is compatible with the original structure of the system: we can detail, describe and model them easily. The more organization and information structuring we have, the lower the entropy: organization, knowledge and know-how in some specific areas do not vary in the same direction as the entropy.
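The idea that entropy counts the admissible arrangements can be made concrete with the Boltzmann-style formula S = ln Ω, Ω being the number of possible configurations. The following sketch (a hypothetical illustration, not taken from the text) contrasts a fully constrained structure with an unconstrained assembly:

```python
import math

def configuration_entropy(n_configurations):
    # S = ln(Omega): entropy as the log of the number of admissible configurations
    return math.log(n_configurations)

n = 10  # rules, components or elements to assemble (illustrative)
# Fully constrained structure: a single admissible arrangement -> zero entropy
print(configuration_entropy(1))                            # 0.0
# Unconstrained assembly: any ordering of the 10 elements is admissible
print(round(configuration_entropy(math.factorial(n)), 2))  # 15.1
```

The more the structure of the system restricts the admissible configurations, the smaller Ω and hence the lower the entropy, which is exactly the statement above.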
COMMENT 2.–
A system including a high number of agents can be of higher entropy and also be very orderly: this is the case with cells in an organism, a group of people or a school of fish, consisting of interacting individuals whose motions are coordinated in a precise way. With regard to Schroeder's studies [SCH 92], the energy dissipation of a complex system is an instance of scaling. For example, in nature, for warm-blooded species, the energy loss (W) depends on the weight or mass (M) of the animal according to a relation like:

W = K × M^C

where K and C are constants. The relevant graph is detailed hereafter in Figure 11.2.
What should be kept in mind is the trend of the graph (indeed, the power-law exponent C = 2/3 may vary slightly: about 1 for bats and around 3/4 for humans).
Such a transformation is interesting, as the entropy is directly related to the volume (sometimes the weight) and temperature of a dissipative structure. This macromodeling is quite common and of great importance in industry and electronic systems: it is possible to estimate the cost or the number of failures of a system from its number of components and its energy consumption, well before a precise forecast based on reliability models is available. This can give a good idea of the sustainability of a complex system: the more complex a system is, the more it is doomed to die.
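The scaling relation W = K·M^C is easy to check numerically: on a log–log scale it gives a straight line of slope C. Below is a short sketch in Python (K is arbitrary here, since only the exponent matters):

```python
import math

def energy_loss(mass, K=1.0, C=2/3):
    # Allometric scaling W = K * M**C (K arbitrary; C ~ 2/3 for warm-blooded species)
    return K * mass ** C

masses = [0.01, 0.1, 1.0, 10.0, 100.0]
# Slope between consecutive points on a log-log plot: always equal to C
slopes = [
    (math.log(energy_loss(b)) - math.log(energy_loss(a))) / (math.log(b) - math.log(a))
    for a, b in zip(masses, masses[1:])
]
print([round(s, 3) for s in slopes])  # [0.667, 0.667, 0.667, 0.667]
```

Fitting such a slope on measured data is precisely how the exponents quoted above (about 1 for bats, around 3/4 for humans) are obtained.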
COMMENT 3.–
Fractal structures in time and space optimize entropy production in complex dissipative systems. Indeed, in such dissipative systems, fractal structures are spontaneously created: they participate in the emergence of order because they optimize entropy production and enable the optimal dissipation of energy gradients. To be more precise, the whole universe tends toward thermal equilibrium, i.e. maximum disorder, and life, as a developing system of order, is only possible in regions with strongly changing entropy: thus, ordered forms, such as a tornado, a highly dynamic funnel in a bathtub, or again Bénard cells, continue to live as long as there are energy gradients to dissipate heat or energy efficiently. In a general way, complex systems, life and humans provide the quintessential example of the spontaneous creation of order through embryogenesis; they are remarkably stable, robust yet fragile creatures of fractal nature (with negative entropy), and they function by producing a higher level of entropy in their environment.
In a company, a fractal structure cannot blindly pursue decreasing entropy, but it can maintain a certain, low rate of entropy increase: thanks to its self-organization capabilities, it may have higher flexibility, adaptability and coordination, and continuously improve its skills. This is why knowledge is a kind of learning in dissipative systems. As in any fractal structure, it has the advantage of lowering entropy more than a traditional organizational structure does.
Thus, the calculation of this related entropy is relatively simple [HAO 10].
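As a rough illustration only (this is not the actual calculation of [HAO 10], and the numbers are hypothetical), one can compare an entropy proxy for a traditional, fully meshed organization with that of a fractal, tree-like one by taking the logarithm of the number of coordination links each must maintain:

```python
import math

def interaction_entropy(n_links):
    # Entropy proxy: log of the number of coordination links to be maintained
    # (a rough sketch only, not the model of [HAO 10])
    return math.log(n_links)

n = 32  # units or agents (illustrative)
flat_links = n * (n - 1) // 2   # flat mesh: everyone coordinates with everyone
fractal_links = n - 1           # self-similar tree: local coordination only

print(round(interaction_entropy(flat_links), 2))     # 6.21 (higher entropy)
print(round(interaction_entropy(fractal_links), 2))  # 3.43 (lower entropy)
```

Under this proxy, the self-similar structure keeps far fewer coordination configurations alive, consistent with the claim that a fractal organization lowers entropy compared with a traditional one.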
COMMENT 4.–
Within this context, as mentioned before and roughly speaking, entropy allows us to measure a given disorder, a kind of diversification and dissipation, and thus the ability of a system to perform complex tasks. But this is a simplistic view of the concept: the disorder in question has to be clear, visible and obvious [PEN 92].
Also, as per Figure 11.2, a system including a high number of agents, or elements, can be of higher entropy and also be very orderly: this is the case with a group of people or a school of fish, consisting of interacting individuals whose motions are coordinated in a precise way (by mimicry, recruiting and hiring, etc.):
It should be noted, as described earlier in this chapter, that the pendulum is perfectly reversible in time; it follows an evolution curve similar but opposite to the one observed when time moves in the positive direction (from past to present). Its entropy remains constant: past, present and future are combined together.
This is also what we observe in any real system: in life sciences and cognition, we have both “innate” and “acquired” information. A system that operates solely on innate, “genetically determined” information (e.g. a financial control or management system) has a stable entropy: it is based on symptomatic or presupposed programs. On the contrary, the emergence of significant forms may also depend on information coming from an external source or process. In fact, we are reasoning in terms of ontogenesis: ontogenesis describes the development of an organism or organization; its underlying mechanism can influence subsequent evolutionary or phylogenetic processes such as thought, reasoning, understanding or cognition. As the overall entropy increases (since the entropy of a system is generally the sum of its internal and external entropy), we will always be developing hybrid systems (e.g. mixing biological and cognitive features) that have to be globally regressive (in the sense of entropy). It is the same challenge we face in the real estate and construction sectors, when developers try to design positive-energy buildings.
So, we come to consider the second law of thermodynamics. Here, we are dealing with isolated dissipative systems for which entropy increases over time. Three applications are now described:
Furthermore, ill-timed modifications to an application generate unplanned and unintended induced effects, such as hidden side effects, because of the many existing interactions between modules. A side effect can variously modify some functional states or some arguments of a given variable, raise an exception, write data to a display or a file, read data, or call other side-effecting functions. These disturbances, hard to detect and dissipative, require a lot of skill to diagnose, understand and debug. Here, Gödel’s theorems apply and directly reduce the sustainability of an application.
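A minimal example of such a hidden side effect (in Python; the function and field names are purely illustrative) is a helper that silently mutates state shared with other modules:

```python
# A helper with a hidden side effect: it silently mutates an argument that
# other modules share (function and field names are purely illustrative).
def add_discount(order, rate):
    order["total"] *= (1 - rate)   # mutates the caller's dict: a side effect
    return order["total"]

# A side-effect-free version keeps the application traceable ("reversible"):
def discounted_total(order, rate):
    return order["total"] * (1 - rate)  # the input state is left untouched

order = {"total": 100.0}
print(discounted_total(order, 0.2))  # 80.0, and order is unchanged
print(add_discount(order, 0.2))      # 80.0, but order["total"] is now 80.0
print(order["total"])                # 80.0: later modules see the altered state
```

The first version preserves the original state, so the process can be followed up and rolled back; the second quietly destroys it, which is precisely the kind of dissipative disturbance described above.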
The above considerations show the need to develop approaches based on different concepts to compensate the variances in entropy generation. In IBM factories, 20 years ago, we implemented decision tools based on “ondulatory artificial neural networks” for process control [JCP 96].
These devices were able to self-store information on their own functioning (obtained by self-observation). In fact, in any control or monitoring action, the most important purpose is to detect any “monotonous sequence of events”, symmetry breaking or monotony breaking so as to detect significant complex structure contingencies, even if they are mixed up with noise. For example:
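As a toy illustration (a minimal sketch, and not the ondulatory-network implementation of [JCP 96]), a break in a monotonous sequence can be detected as follows:

```python
def monotony_breaks(signal):
    # Indices where a monotonously increasing run is broken
    return [i for i in range(1, len(signal)) if signal[i] < signal[i - 1]]

# A drifting process value with one monotony break at index 4:
values = [1.0, 1.2, 1.5, 1.7, 0.9, 1.1, 1.3]
print(monotony_breaks(values))  # [4]
```

In a real monitoring context, the same idea would be applied to noisy measurements, with a tolerance threshold so that only significant breaks, and not noise, are flagged.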
The temporal evolution of a cognitive system can be represented by a variable X(t). Here, X(t) is the level of acquired knowledge in a system, expressed over the space of possible states. As per the second law of thermodynamics, X(t) increases over time: its representative curve naturally and progressively rises as the number and variety of knowledge items increase, and the entropy also increases over time. This statement can be represented by a graph (Figure 11.3).
COMMENT 1.–
From basic knowledge, experiences and principles (the so-called initial information, which is associated with a low entropy), we are able to develop innovative knowledge and new paradigms. This appears when the system evolves in the direction of the future, consistently with the behavior of the systems in the universe we live in and experience.
We do not know exactly what the initial entropy value is at time TDate, that is to say the one preceding the “source information” of which we have spoken earlier. For these reasons, we have positioned a hypothetical entropy value at that time. Conversely, we know that the entropy was growing before TDate: thus, we can draw a curve (hence the left side of the curve on the graph in Figure 11.3, called “evolution”).
Note that, given our level of knowledge, it is quite impossible to go below Planck’s time (10^−43 second) at T ~ 0.
COMMENT 2.–
Currently, at the instant denoted by “TDate”, we are starting with a core of given information, corresponding to a certain entropy. This information allows a human to reason; three great opportunities are available:
In the above three cases, we are dealing with evolutive processes applied to steady environments. This is a strong assumption because, globally, the entropy of the systems under study continues to grow along the arrow of time, whatever its reversibility (negative) or irreversibility (positive). As we can see, it is not only a question of size scale: this does not only concern the micro-/nanoscopic or cosmological worlds.
Information is the basis of the creation of our visible universe. Long before Planck’s time and the Big Bang, there was only information. In our living world, knowledge is the source of thought, concentrated in a nucleus comprising some basic information associated with a very low entropy level. Indeed, in order to grasp reality, humans probably started from a more germinal state, difficult to define, therefore with an even lower entropy, perhaps exceedingly low. And they evolved from there to progressively construct the first seeds of knowledge – an initial set of facts and production rules – that could be activated to develop reasoning, consciousness and, finally, the many cognitive assets discussed above. This is one reason why entropy has increased considerably over time.
As a result, the current state of knowledge available worldwide is becoming colossal. What is striking is that some of this information is mutually consistent while some is inconsistent: when analyzing the content of some databases or knowledge bases on a specific topic, it is easy to find a lot of incomplete information, and contradictory or redundant facts (as mentioned in this book, it is not rare for up to 50% of information records to be unexploitable in a consistent way). This can be considered increasing diversity; it is comparable to a substantial disorder, and thus akin to a high entropy level.
The aforementioned statements apply to the human being at the time of his emergence; everything is contained in the two strands of DNA, corresponding to a minimum entropy: from there, a human being develops, comes and goes, and over a complete lifetime he will accumulate knowledge, skills and experiences. Thus, he will create and generate new ones until his death. The genetic code is also evolving (to include the initial but partly mutating “innate” information and also new “acquired” information) [CHA 10]. Again, keys to these new cognitive assets are partially incorporated into the new DNA that will be used to breed the next generation of living beings. As we can see, sustainability principles are indirectly devoted to the DNA program, and not essentially to the human species.
Analysis of Figure 11.3 shows that, in the absence of factors imposing an external constraint or state on our planet, the entropy increases in both directions of the time arrow from the TDate state. The entropy increase in the direction of the future (positive time arrow) is obvious: the states related to a higher entropy correspond to the generation of much new and diverse knowledge; it follows a geometric growth rate (Moore’s law).
Conversely, the states located in the low-entropy area (e.g. the left side of the graph) are just plausible assumptions: we do not yet know how so low an entropy, at the beginning of the living world, could generate so much knowledge. Why, how and with what structure did the world, at the beginning of time, have such a low entropy? We can only say that during the very fast initial expansion of a world (the same holds for an enterprise), it is not possible to produce reliable forecasts about its sustainability.
This is why the process used by some “business angels” to participate in the development of innovative start-ups through seed capital assistance is difficult to implement: the required business plan and market projections have very limited meaning since their sustainability is questionable. In fact, only risk-prone and intuitive hunches based on values, with partners having vision, energy and experience, can make a great business. Here, Gödel’s theorems and entropy theory fully apply. The only way to control entropy growth is to develop organizational capabilities (product, process and production development, market, etc.).
A similar mystery surrounds the increase of entropy, in such a short time (on nature’s scale), in a new human being. It concerns the evolution between the moment the DNA from both parents is assembled and the moment the brain content of a mature individual is achieved; and finally the moment the DNA representing the final knowledge state of an individual is obtained, before he leaves all his achievements to his progeny. In this case, when observing how people evolve throughout their lives, we might object that the entropy does not only increase continuously and regularly. We will now turn our attention to these considerations.
In quantum mechanics, the state vector follows an evolution partly governed by the Schrödinger equation. However, as soon as a measurement is made, there may be an issue related to a lack of information, which causes a change in the state vector according to Figure 11.4 [PEN 92].
In the field of knowledge, we are observing similar phenomena when considering in Figure 11.4 the variable “entropy” in place of the one called “state”. This can occur in:
This last point is quite important: entropy changes seldom show discontinuities. There is always a legacy of the past, and if, temporarily, during a transition stage, there is a decrease in entropy, it is because the concept of evolution will help us overcome entropy levels previously achieved. We are in a wave-like (“ondulatory”) evolution.
We know that this problem of “knowledge assets” and “inheritance” in various fields is similar to that of the acquired and the innate in DNA. It brings some comments on how knowledge is distributed and handled.
Both levels involve sophisticated mechanisms, much more complex and complicated than people believe. For instance, most of the time, we cannot map a gene directly to a function. Conversely, there are strong interactions between the different constituents, regardless of the assembly levels considered. This is what we have in the fractal structured networks (FSN) architecture in organizations. Here, we do not know how to measure the entropy, and thus the sustainability, of the structure. It is a new domain and we can just proceed by comparison, to say whether a given solution is better or worse in terms of entropy generation.
In short, our overall “societal system” is progressing and moving toward greater complexity. It is a proven fact since we are now able to apply some principles related to fractals (invariance of scale), deterministic chaos (unpredictability of behaviors), or even network theories (collective intelligence), etc.
In each case, we see evolutive and progressive approaches, which tend toward an equilibrium (thermodynamical or not, self-organized with attractors, etc.); the key factor is the presence of many actors or agents interacting together. The problem is that we do not know how to extrapolate a mini-event occurring at one scale level (n) by projecting it to another level (n + 1). Again, this argues for a systemic approach, because it is the only way to change our vision of the world and overcome the limitations related to reductionism and the Cartesian approach. It is a paradigm change for many decision-makers whose culture is not prepared for that technology.
In what follows, we will consider a “global” system in which entropy increases over time in a nonlinear and often intermittent way, and we will focus on the sustainability of our creations, emerging structures, technologies, etc. Indeed, in our occidental world, we are faced with an existential question: what is the purpose of our activities? What are the global objectives? Is the finality of our economy oriented toward the well-being of populations? Is the sustainability of humanity a key success factor?
We may distinguish three main phases in computer sciences evolution:
Now, we are evolving toward the so-called “society informatization” which is a wider concept where everything or everyone is an object; it is based on the Web 4.0 – the Internet of Things or Objects.
We can note that preserving the quality of an information system requires ongoing design and development work (e.g. data management, configuration management, ontologies and strong definition of concepts to improve data consistency, the use of repositories, and also tools and methods for decision-making, etc.). These tasks are all the more difficult as we have to set up a formalization or modeling of a wide variety of processes. Concerning the sustainability of these systems: if such work is not provided, the information system will continue to deteriorate through a phenomenon of entropy similar to that observed when disorder is created in a physical system [VOL 02]. As a reminder, before addressing the notions of entropy, and simply to show that the underlying mechanisms are almost the same regardless of the application fields considered, we will consider three interesting processes.
No sustainability can be reached without the global motivation of all the stakeholders. In this book, we have highlighted insights arising from studies in decision-making. They show that, in any rational and systematic approach, a decision generally follows several steps:
The above process results from skill acquisition: it begins with a conscious and deliberate analysis of the situation and becomes capable of automatic operation as soon as frequent use of the same expertise is required. Thus, there are evolving substrates which we used to call “false expertise”. Here, a highly skilled manager or specialist will be able to reason and rapidly make a decision.
This is sometimes called “post-conscious automaticity”.
In parallel with this rational approach, we can say that we call on socio-cognitive processing based on morals, perception and social judgments, emotions, motivation and goals, behavioral contagion, etc.
Much of our socio-cognitive processing is believed to occur automatically, with only partial consciousness. This explanation is not enough: the relative automaticity of the brain systems, and thus of the decision-making process, is also a function of unconscious perception, thinking and decision-making. Indeed, some unconsciousness defines the way we think and organize our lives.
Indeed, our learning mechanisms, motivation and behaviors depend on conscious or subliminal reward levels [BAR 12]. For instance, unconscious stimuli can induce a person to achieve a goal. This is because the unconscious helps us not only to act, but also to find a specific motivation to act. It is the same in society: people in a dominant position may adopt selfish and corrupt behavior, just because they feel above suspicion. They unconsciously put their own interest ahead of the public one, and are little moved by reproaches regarding the sectarianism and anti-social feelings they may display. Similarly, some people, such as parents who put the interests of their children ahead of their own, are altruistic: such basic behaviors, when they become predominant, drive implicit protective attitudes. This is the basis of so-called preconscious or natural automaticity.
Such contexts are forms of unconsciousness that also populate our dreams and explain why we interact differently, monitor and develop some specific emotions when we are faced with difficult situations.
Structuring facts and information is necessary to perform best-suited knowledge and know-how acquisition. It comprises the following steps:
These approaches cannot accept inconsistencies such as contradictions and redundancies; here, we can hope to detect false information, and also to structure a network linking the new information to a given number of references and ontologies. This indicates how the concepts of perceiving, apprehending situations and assimilating new information must evolve;
It is obvious that in these couplings, both quantitative and qualitative approaches are involved:
In summary, we can say that new information with weak links to a corpus will lead to a confirmation, a validation or a tautology, while information provided with more scattered but strong ties is likely to cause an innovation. Thus, sustainability of an evolving system is not just the result of a random process.
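The detection of contradictions and redundancies mentioned above can be sketched as follows (in Python, with hypothetical (subject, attribute, value) triples; this is a toy illustration, not a production consistency checker):

```python
def audit_facts(facts):
    # Flag redundant (duplicate) and contradictory facts in a base of
    # (subject, attribute, value) triples -- a toy sketch only.
    seen, redundant, contradictory = {}, [], []
    for subject, attribute, value in facts:
        key = (subject, attribute)
        if key not in seen:
            seen[key] = value
        elif seen[key] == value:
            redundant.append((subject, attribute, value))
        else:
            contradictory.append((subject, attribute, value))
    return redundant, contradictory

facts = [
    ("part-42", "supplier", "ACME"),
    ("part-42", "supplier", "ACME"),  # redundant record
    ("part-42", "weight_kg", 1.5),
    ("part-42", "weight_kg", 2.0),    # contradictory record
]
print(audit_facts(facts))
```

Each flagged record is a contribution to the disorder of the base, so the size of the two lists gives a crude indicator of the entropy accumulating in a knowledge base.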
In decision-making, we use a methodology that can broadly be summarized into three stages:
A question arises: can we measure this? We have very few examples available for measuring the pertinence and complexity of a decision system. Here, we will just mention what was done in an IBM manufacturing plant in the 1990s [MAS 06]. A tool called LMA [BEA 90] enabled us to improve the planning and scheduling of some new computer technologies. A complete analysis of the decision rules applied in running the manufacturing line over 2 years led to a surprising result: only 23 different decisions were taken. This system can be considered a sustainable one, but how can this fact be characterized?
In terms of complexity, we decided to use the complexity measurement technique defined by Lange, Hauhs and Romahn [SCH 97] to measure the complexity of terrestrial water ecosystems. In this approach, the decisions were considered as a set of about N = 500 data points collected as a time series and distributed along an arrow of time. The method comes from symbolic dynamics: the metric entropy was calculated and is able to characterize the complexity of the decision system. Unfortunately, we did not perform additional studies varying the window length, in order to evaluate the intercorrelation factors between the chronological sets of decisions. We think this is a promising measurement technique.
In each case, the problem of design and development arises in the databases and repositories:
The problem of evolution is: how can we optimize the flow of information and enrich the basic model, while minimizing management costs? Developments of the theory of conceptual modeling provide managers and users with all the elements of a given methodology: the interpretation of available database subsets as part of a context can, in fact, improve the management of large and complex information systems, subject to challenges and conflicts between the homogeneity of formal representations and the heterogeneity of empirical categories.
According to Boydens [BOY 00], it is important to explore, with both technical and historical approaches, the production practices and interpretation conditions of databases. Indeed, a database is never a “simple” object, either in terms of quality or representativeness, relevance, clarity, etc.
This study reveals what is never said or written in many documents: informal mechanisms used for interpreting data are always done within the context of an operational implementation of rather framed culture, politics, laws and regulations. They evolve over time and require a specific reading by those who are willing to spend time thinking about how things really work in a real environment with usual practices.
Whatever the methods used (Merise, UML, etc.), disorder, incompleteness or loss of control will arise in any information system; it is similar to the entropy that is born and grows in matter as changes are made to applications to complexify and enhance them. For instance, at higher organizational levels in business, database repositories related to support services, institutions and local production centers will be useful and usable if integrated or embedded into each individual process.
As a conclusion, when designing an information system:
To avoid a general deviance of an application, unfitted functions and emergence of many disorders, strict design and development rules are needed. When unable to control everything in detail, we act differently: for instance, we will implement certification for developers in given fields and provide them a degree of freedom in their work. The synchronization and control, which reside at a higher level, will be set up at project management level using meta-management rules.
When multipartnership is involved, this has already been developed and will not be detailed again, except to point out that project management should be based on how a living organism or human body is controlled. This covers organizations with low granularity (but a large number of granular cells), as they exist on the Internet with open sourcing.
In the case of merging several companies with different information systems, technical files and repositories, several issues may arise: cultural differences between the people working in the different entities, benchmark results, power struggles (some 80% of time spent) and compatibility problems during integration can occupy up to 90% of project managers' time: much energy will be spent coordinating and motivating the troops. What is needed here is not technical skill, nor even time, but leadership and compromise management (as defined in a thermodynamic equilibrium).
Part of the developments included in this book bear on the following: "engineering a sustainable world economy through mass planetary collaboration". This requires exploring items involved in interacting systems, as already mentioned in this book. Some will be considered again, for several purposes:
Items we consider in this chapter are expressed and modeled according to the transpositions of system dynamics concepts, as shown in Figure 11.5.
A few decades ago, many scientists tried to design and develop computer systems based upon the structure of the brain. Artificial intelligence was often considered a mature technology, and artificial neural networks (ANNs), computational algorithms and "thinking machines" were supposed to work in a similar way to our brains.
Even if some differences still exist between the computer and brain, the gap is being reduced over time. First, new models of brain operations are likely to inspire the information systems designers, and second, people are investigating to what extent the architecture of current computers may help us better understand the organization of the brain circuitry and its functioning. Within this framework, international programs have been set up. Nevertheless, we are not ready to emulate the brain because every day new discoveries are being made. For instance:
To explain and complete this phenomenon: if a person is able to manage two activities at once, it is much more complicated with three simultaneous tasks.
According to a scientific study by Etienne Koechlin and Sylvain Charron, published in the journal Science in 2010, the human brain struggles as soon as more than two tasks have to be performed at the same time.
The findings of the study show (through medical imaging) that, when a person is engaged in a single activity, the two frontal lobes of the brain are active. More specifically, when a subject performs a single task associated with a single goal (e.g. winning an award), the frontal lobes of both hemispheres are activated simultaneously. Regardless of the lobe considered, a part of the frontal lobe directly processes the task, while the other part works on the goal.
But when the brain has to handle two tasks simultaneously, each frontal lobe is then assigned to a specific task. Brain imaging shows that the two frontal lobes are independently activated: while one is responsible for processing task #1, attached to goal #1, the other will process task #2, associated with goal #2. Thus, the two frontal lobes are assigned one specific task each (distributed work). Each one handles a single task associated with a single goal; the time delay required to ensure the transition from one task to another is so small (about 100 ms) that we are not conscious of any sequencing and the two tasks appear simultaneous. However, when a third activity is launched, the scientists found a strong increase in the number of errors (in about 30% of cases) and a decrease in responsiveness, that is to say, a worse response time.
For these reasons, our own physical capabilities are limited: our brain seems unable to concentrate on three simultaneous activities without making mistakes. It is not fully necessary to carry out several activities, at once; this requires us to give up unnecessary tasks and concentrate on one or two of the most important ones.
In an enterprise, brain multitasking is a myth, and this poses the problem of the multitasking constraints that managers are subjected to in business: parallelization of decisions cannot be reliable, decisions take time and, because of possible errors, correcting the results is time-consuming. In terms of entropy, that is to say in terms of creation of disorders, it is not good for the system's evolution.
In the area of DSS, ANNs were developed several decades ago, mainly for pattern recognition purposes (handwritten character recognition, predictive modeling, vision, speech recognition, etc.). They were supposed to have the same structure as biological systems, where billions of nerve cells collectively perform these tasks more efficiently than a similarly powerful computer.
It is a good idea, but there is a huge gap between the mechanisms sought in ANNs and the human nervous system, even if about 100 billion neuronal cells, or weighted switch relays, are used. But where do these differences come from? It is not a question of hardware components but of architecture and organization.
As a reminder, a neural network is an information processing system composed of a large number of interconnected processing elements, arranged in several layers, where the input layer describes an input event, while the output layer corresponds to a separate pattern classification. In this area, we can quote many works and achievements from J.A. Anderson, L.N. Cooper, T. Kohonen, J.J. Hopfield, G. Paillet (general vision), etc. Many variants were developed, with or without feedback loops, to integrate different learning capabilities, and some industrial developments were made available (ZISC, standing for Zero Instruction Set Computer, at IBM).
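The layered structure just described can be made concrete with a toy feedforward network. This is an illustrative sketch only (the two-input, two-hidden, one-output topology and the fixed random weights are our own choices, unrelated to ZISC or any cited work): each layer computes a weighted sum of its inputs plus a bias, passed through a sigmoid activation.

```python
import math
import random

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

# Tiny 2-input, 2-hidden, 1-output network with fixed (untrained) weights.
random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_hidden = [0.0, 0.0]
w_out = [[random.uniform(-1, 1) for _ in range(2)]]
b_out = [0.0]

def classify(x):
    """Propagate an input event through the hidden layer to the output layer."""
    return layer(layer(x, w_hidden, b_hidden), w_out, b_out)[0]
```

In a real variant, the weights would of course be adjusted by a learning rule (with or without feedback loops), which is precisely where the different learning capabilities mentioned above come in.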
Here again, it was said that ANNs were a copy of the brain architecture and were working in a similar way, however:
However, the difficulties encountered in understanding and reproducing the operations and behaviors of the brain are related not to the global architecture of the brain, but to the nature and design of the neuron itself which is much more sophisticated than expected:
All these results have a strong impact on the design and development of the so-called DSS. Indeed, when considering the ductility, plasticity and flexibility of our brain, we can see that decision-making, as performed in the human brain, is a very complex process which can lead to many possible rational, or irrational and disordered, solutions with a big entropy impact on the system under study.
The purpose of this section is to briefly describe some principles related to these two notions and to measure their impact in terms of sustainability and, consequently, entropy generation.
Worldwide collaboration is a recursive and interactive process where two or more people or organizations work together in a self-similar way to realize shared goals (this is more than the intersection of common goals seen in cooperative ventures, but a deep, collective, determination to reach an identical objective) by sharing knowledge, learning and building a consensus.
In fact, collaboration is a working technology which is based on some specific approaches:
Conversely, collective intelligence is a wider and higher level concept often used in elaborating more global solutions. Collective intelligence can be defined as the capacity for a group of individuals to envision a future and reach it in a complex context.
Collective intelligence is becoming a full discipline, with its formal framework, theoretical and empirical approaches, etc., based upon collaborative and communication tools, associated with a shared ethics.
More specifically, the Cartesian mechanistic thought process has fractioned the universe into three complementary fields: matter, life and mind, which are part of our eco-biosphere. To catch the meaning of these global items, transdisciplinary approaches have to be implemented. Indeed, physics alone cannot explain poetry, neither can psychoanalysis explain cellular division or group technology. In fact, we have to involve various sciences such as the social and human sciences, arts and structures in nature, mathematics, theology, biology, religion and even politics.
Indeed, in this world, everything is connected to everything; it is a kind of global integration that we have to implement, where each thing possesses at the same time an inner and subjective dimension (that has to be interpreted), an outer dimension (that we perceive), an individual dimension (the agent) and a societal dimension (the population and the whole society). From this whole, properties at community level will emerge.
For many people, and managers, collaboration is a panacea: it is able to integrate groups of people, and make them participate in a common goal.
More globally, when talking about collective intelligence, there are underlying impacts in terms of universal governance (global, local, transversal, transcultural, etc.) while developing practical and immediate know-how for today’s organizations, through an ethics of collaboration. Thus, it is a good way to get the right and accurate information or decisions with a minimum entropy generation.
In terms of control architectures, worldwide collaboration based on peer-to-peer mechanisms can be represented by a heterarchical working structure as shown in Figure 11.6. Compared to other structures, heterarchy is an advantage for the following reasons: adaptability, robustness in the answers, consensual decisions and autonomy in the operations. On the other hand, a heterarchy has a drawback, that is to say a “cost” or a counterpart:
This means that entropy generation, under this environment, will be higher than in other approaches (for instance, centralized control). As such, entropy generation is higher on the left side rather than on the right side of Figure 11.6.
Holonic systems, for instance, are an intermediate stage between hierarchical and heterarchical structures; recursiveness introduces a kind of structure and involves nodes (agents or groups of agents) at a lower level of abstraction: thus, control is easier and more efficient (lower entropy generation).
Finally, we can easily conclude how global approaches are positioned in terms of sustainability: either with the consistency of a solution and decision or in terms of control. The difficulty lies in finding the right compromises.
Figure 11.6, however, is not as idyllic as we like to think: our society is an exclusive one, whatever the statements of good intent expressed by the human resources managers in many companies. Cooperation and collective approaches have a hidden side. Hereafter is a detailed example of such a situation.
The problem of loneliness is a kind of rejection and entropy generation. In most of our current societies, one-third of the population lives in solitude. Loneliness consists of being alone when faced with a problem: unemployment, loss of salary, illness, stress, etc. In this case, the person is not able to solve his/her problem alone. Such a person can become an outcast of society because they feel unable to defend and protect themselves, and thus to overcome their own problems and recover from the situation. Loneliness can be considered as the result of a mismatch between a given person, his/her surrounding environment and the behavior of the people with whom he/she interacts. Loneliness evolves in six steps, as in a "vicious" circle:
Loneliness reinforces the clustering of a population and thus its diversity, complexifies its management and is energy-consuming. Moreover, in terms of ethics, loneliness affects indifferently any kind of potential resources: young people, workers, skilled seniors, elderly and retired people, etc. This topic has already been discussed in Chapter 8, devoted to the survival and perpetuation of the species, with phenomena related to eusociality. But we must be aware that this is a common problem: many companies, organizations, team leaders, etc., act under the pressure of competition and financial greed: they tend to shy away from resolving "hard" problems, leaving them to the charge of a state, a nation or society at large.
Loneliness is a topical problem; it is growing in parallel with the evolution of our society. Some usual causes can be described as follows:
As such, it is a mess: a generation of disturbances and situations unsuitable for a sustainable environment. The tragedy is that loneliness is just related to an oversight problem:
In this so-called "collaborative" world, it is clear that cooperation principles, and society as well, do nothing for most excluded populations. Within the framework of an integration process, cooperation is not enough.
As already stated, any inclusive society is based on several factors:
We are all members of society, but each of us has to fulfill this role. At company level, it is said that a company's greatest asset is its human capital. But is the enterprise sincere? If a company, or any human organization, showed more respect for its employees and a better understanding of a more "consecrated" vision of life, approaches to loneliness would be different. There would be less blah blah, fewer ghettos, fewer barriers between communities and a greater homogeneity in the population, and hence less entropy.
We must remember that in any business, a good manager (who must also be a good leader) must do what he can, as best he can, with what he has.
To conclude, worldwide collaboration is aimed at reducing the entropy generation and creating a more sustainable global system. Unfortunately, it also creates a significant entropy generation: this is fully in agreement with the principle of duality in nature. In terms of governance, the difficulty will consist of managing and giving adequate priorities to some of these equilibria.
Darwin’s theory, devoted to the evolution of species, tells us that species change over long periods of time. They evolve to suit their environment, and the species that survive changes in the environment are not the strongest or the most intelligent ones, but those that are most responsive to change. Thus, the manufacturing companies best prepared to survive are those that respond best to emergent and volatile environments.
For these reasons, reconfigurable manufacturing systems (RMS) are designed for rapid changes in their structure, as well as their hardware or software components, in order to quickly adjust the functionalities and production capacities to sudden market changes, and intrinsic or failure system changes. Consequently, they require the implementation of characteristics such as modularity, integrability, customization, scalability, convertibility and diagnosability.
This supposes a specific structure and architecture, and a particular control system software. Biological systems and nature are suitable sources of information to be transposed for the development of reconfigurable and sustainable manufacturing systems.
To fulfill such requirements, a holonic system architecture is best suited. Biosystems also suggest that we implement distributed controls based on autonomous and cooperative agents (as we have in living organisms).
It is an organism comprising a holarchy of collaborative components, regarded as holons. Holon is a term derived from the combination of two words: “holos”, a whole, and the suffix “on”, which means a particle, an item or a subsystem. Thus, a holon is made up of subordinate parts or a part of a larger whole. These holons (agents) are provided with local autonomy and proper propagation mechanisms.
Holarchies are not holons – or physical systems of holons – but are an organization or conceptual arrangements of holons that represent the basic formal entities for a holonic interpretation of the structures and dynamics of “reality”.
The best-known examples of what a holonic organism consists of go back to the fractal organizations detailed by H.J. Warnecke in the "fractal and agile company" and by Massotte and Corsi in [MAS 06].
In this observational context, a holon is viewed as an entity that is at the same time autonomous, self-reliant and dependent; interactive vertically as expressed in Figure 11.7, as well as cooperating horizontally with other holons, and characterized by rules of behavior (DRDC Valcartier TR 2008-015 41). Thus, we are in and between a hierarchical and heterarchical organization. We can explain a little bit more what these characteristics are:
In the present systems, a holon has the capability to create and control the execution of its own plans and/or strategies (and to maintain its own functions). In IS, each holon has local recognition, decision-making, planning and action taking capabilities, enabling it to behave reactively and proactively in a dynamic environment.
The process used to arrive at a decision is only as complex as necessary for that class of holons and its level within the holarchy. For simple systems, the decision process for a given holon is a set of fixed rules that govern its behavior. The flexibility displayed by holonic systems is the result of the combined behavior of the holarchy and not the actions of an individual holon. Thus, within this context, we can define:
The notion of functional decomposition is another important ingredient of the holonic concept. It can be explained by Simon’s observation when he says that “complex systems evolve from simple systems much more rapidly if there are stable intermediate forms than if there are not”. In other words, the complexity of dynamic systems can be dealt with by deconstructing the systems into smaller parts.
As a result, holons can be an object, an agent or a group of agents, and they can contain other holons (i.e. they are recursive). Also, problem solving is achieved by holarchies or groups of autonomous and cooperative basic holons and/or recursive holons that are themselves holarchies.
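The recursive, part-whole character of holons described above can be sketched as a tiny data structure. This is an illustrative toy under our own naming assumptions (`Holon`, `solve`), not any particular HMS implementation: a leaf holon acts autonomously, while a composite holon (a holarchy) solves a task cooperatively by delegating to its parts.

```python
class Holon:
    """A holon is both a whole and a part: it may contain sub-holons
    (making it a holarchy) and can solve tasks autonomously or by
    delegating to its parts (recursion)."""
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []   # sub-holons; empty for a leaf holon

    def solve(self, task):
        # Autonomy: a leaf holon handles the task itself.
        if not self.parts:
            return [f"{self.name} handles {task}"]
        # Cooperation: a composite holon delegates to its parts.
        return [step for part in self.parts for step in part.solve(task)]

organ = Holon("organ", [Holon("cell-A"), Holon("cell-B")])
organism = Holon("organism", [organ, Holon("cell-C")])
```

Note that `organism` treats `organ` and `cell-C` uniformly: whether a node is a single agent or a whole holarchy is invisible from above, which is exactly the recursiveness claimed for holonic problem solving.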
Holonic systems are partly based upon biological and social systems, thus:
A transposition of these concepts was done a decade ago within the international program intelligent manufacturing systems (IMS). The holonic organization was extended to the so-called holonic production paradigm at an intraenterprise level. This paradigm was also extended to the hardware (physical machine) and software (control and communication) level. Now, everybody realizes that a global and open systemic approach applies and is better suited to the development of sustainable production systems. Thus, what is recommended is to switch from these holarchies (which we will keep in mind) toward a more elaborate and structured model. Indeed, these models are those encountered in any complex system (such as the Web and biology). They are characterized by three invariants:
This is of key importance to structure the methodology to be implemented in the area of sustainable systems. More precisely, an illustration of these concepts, in four different application fields, is represented in Table 11.1.
Table 11.1. Characteristics of some complexified systems
| Any system: | Molecule | Town | Company | Society |
|---|---|---|---|---|
| 1 – is made of: | Atoms | People | Employees and investments | People |
| 2 – is organized or self-organized to adapt: | Cells; then, humans | Governance | Market | Morals and rules; population behaviors |
| 3 – and reacts against changes and disturbances: | Viruses | Unemployment | Competition, social changes | Economic crises, earthquakes |
| 4 – thus, to develop itself and survive: | Species reproduction | Counties' growth | Profit, wellbeing | Economic and cultural influence |
| 5 – while improving underlying capabilities of its sub-complex structures (holons): | Brain | Logistics, urban public structures | Holarchies and/or heterarchical organizations | Societal knowledge and consciousness; basic theories and sciences, etc. |
What is not said in this table is that in each area potential energy and resources (sometimes scarce or expensive) are used in order to transform raw materials and services (through working procedures that are aimed at transforming “disorders” into orders/organized patterns) into more complex systems, with respect to added value constraints and sustainability (i.e. with a minimum entropy generation).
To achieve these aims, holons could call on the so-called "swarm intelligence" concept, also inherited from biology. It is defined as the emergent collective intelligence of groups of simple, single entities (like holons). It offers an alternative way of designing intelligent systems, in which autonomy, emergence and distributed functioning replace the control, preprogramming and centralization of conventional systems. This is often associated with the concept of "artilects". The latter, however, will more often be used in heterarchies to conduct auctions, negotiations and evolutive decisions.
In terms of implementation, holonic systems will be shaped as in Figure 11.8, for instance through the Petri Nets technique.
In Figure 11.8, a global behavior can emerge from the behavior of each individual holon. This is because we will converge toward an attractor (working pattern, work organization, skills and tasks distribution, etc., with regard to self-organization mechanisms).
Self-organization is not a new concept, being applied in many different industrial and economics domains. It can be defined [LEI 08] as the integration of autonomy and learning capabilities within entities to achieve, by emergence, global behavior that is not programmed or defined a priori. A possible way to integrate self-organization capabilities is to move from fixed and centralized architectures to distributed ones, according to the perception of an environment that does not follow a fixed and estimated organization.
In the holonic manufacturing system (HMS), the adaptive holonic control architecture for distributed manufacturing systems (ADACOR) project has been proposed [LEI 08].
It is a holonic control architecture which addresses the agile reaction to disturbances at the shop floor level, being built upon a set of autonomous and cooperative holons, each one representing a factory component which can be either a physical resource (robots, pallets, etc.) or a logic entity (orders, etc.). The manufacturing control emerges, as a whole, from the interaction among the distributed collaborative ADACOR holons, each one contributing with its local behavior to the global control objectives.
One of the major concepts introduced by ADACOR is the adaptive control approach, being neither completely decentralized nor hierarchical, but balancing between a more centralized approach and a flatter one, and passing through other intermediate forms of control. ADACOR adaptive production control shares the control between supervisor and operational holons, and evolves in time between two alternative states, stationary and transient, trying to combine the global production optimization with agile reaction to unpredictable disturbances. This dynamic evolution or the reconfigurability of the control system is supported by the presence of supervisor holons in a decentralized system, and the presence of self-organization capability associated with each ADACOR holon (expressed by the local autonomy factor and proper propagation mechanisms).
In the stationary state, holons are organized in a hierarchical structure, with supervisor holons coordinating several operational and/or supervisor holons. The role of each supervisor holon is the global optimization of the production process. In this state, each operational holon has low autonomy, following the proposals sent by the supervisor holon.
The transient state, triggered by the occurrence of disturbances, is characterized by the reorganization of the holons in a heterarchical-like control architecture, allowing the agile reaction to disturbances. This reorganization is performed through the self-organization of holons, through the increase in their autonomy and the propagation of the disturbance to the neighbor holons using ant-based techniques. After disturbance recovery, the operational holons reduce their autonomy, evolving the system to a new control structure (often returning to the original one). As we can see, the restructuring of the control system is done so that energy consumption is minimized. As a result, the integration of these technologies would bring a greater efficiency for manufacturing applications [XIA 08, PAR 10].
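The stationary/transient dynamics described above can be caricatured in a few lines. This is our own sketch, not the actual ADACOR code: the local autonomy factor is the only state variable, a disturbance pushes it up (heterarchical, transient behavior), and it decays back toward the stationary, supervisor-driven level after recovery.

```python
class AdacorLikeHolon:
    """Caricature of ADACOR-style adaptive control. A low autonomy
    factor means following the supervisor (stationary state); a
    disturbance raises autonomy (transient, heterarchical state),
    which then decays back once the disturbance is recovered."""
    STATIONARY_LEVEL = 0.1

    def __init__(self):
        self.autonomy = self.STATIONARY_LEVEL

    def on_disturbance(self):
        # Transient state: act locally and propagate to neighbors.
        self.autonomy = 1.0

    def recover(self, decay=0.5):
        # After recovery, autonomy decays toward the stationary level.
        self.autonomy = max(self.STATIONARY_LEVEL, self.autonomy * decay)

    def state(self):
        return "transient" if self.autonomy > 0.5 else "stationary"
```

The point of the sketch is that reconfiguration is driven locally, by each holon's autonomy factor, rather than by a central re-planner: this is what allows the agile, low-energy restructuring described above.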
More generally, when dealing with reconfigurable systems, in which structural reorganization and the emergence of new patterns play key roles, it is crucial to have regulation mechanisms that react quickly and introduce new order and stability against the increase in entropy and, consequently, against chaotic or unstable states. Here, the second law of thermodynamics, which states that the total entropy of any isolated physical system tends to increase over time, approaching a maximum value, is the point we have to focus on.
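In the standard notation of irreversible thermodynamics, the entropy balance behind this statement reads:

```latex
\frac{dS}{dt} \;=\; \frac{d_e S}{dt} \;+\; \frac{d_i S}{dt},
\qquad \frac{d_i S}{dt} \;\ge\; 0
```

where $d_e S/dt$ is the entropy exchanged with the surroundings and $d_i S/dt$ is the internal entropy generation. For an isolated system, $d_e S/dt = 0$, so $S$ can only grow toward its maximum; an open, regulated system, by contrast, can export entropy ($d_e S/dt < 0$) and thereby maintain or increase its internal order, which is exactly what the regulation mechanisms discussed here are meant to do.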
However it is viewed (at a physical-reactive, biological-active, human-cognitive or formal-logical level), the holon cannot be considered as the panacea of evolution. In system modeling, it is a useful concept to represent some behaviors and describe some individual strategies directly related to autonomy. For instance, our experience in heterarchical approaches, through VFDCS [MAS 06] and PABADIS, shows that peer-to-peer mechanisms, game theory, negotiation, etc., as deployed in Web applications, are not sufficient to drive toward sustainable societies.
However, what is happening today is quite important: all our management and control systems, either in economy or industry, have been influenced by the Web; they are at the origin of new paradigms and practices which reinforce the emergence of business models based upon NLDS, systemic approaches, chaos and self-organization. To summarize all the sciences behind these terms and environment, we will call this “network sciences”.
Thus, "network sciences" is the present paradigm: it stands just ahead of the so-called "bio-inspired" sciences, even if some embryos of biomimicry are already implemented in evolutionary algorithms and regenerative approaches. To better understand where we are going, in terms of sustainable development and entropy generation, we have to recall a few basic principles behind the so-called term "evolution".
Evolution has been widely developed in this book. In order to see how it applies in our current working life and to highlight its contribution to entropy, and then sustainability, we have to summarize again some of its attached characteristics. In summary, in our subject matter, we will address the five following points covered by the “systems evolution”:
The second law of thermodynamics involves the expected outcomes of diversification, which then leads to differentiation: we may expect a strategy of great "economic" value (growth and profitability) or, first and foremost, great coherence complementary to current activities (exploitation of know-how, more efficient use of available resources and capacities). This is the case both in nature and in industry.
In the same context, autonomy applied to industrial systems concerns a device (agent or entity) that is given a longer leash: it is able to complete complex missions without direct human management. For instance (as defined in [WIK 14]), autonomy can take the following aspects:
For example, we have eyes because we require eyesight; not that we developed eyesight because we happen to have eyes. This is of most importance: teleological interactions are like social interactions; they are the result of purposeful goal-directed behavior in both biological and technological systems.
Applying these aforementioned evolution principles to advanced manufacturing systems is equivalent to thinking in terms of "evolution of manufacturing systems". Within this context, we will be ready to implement concepts related to "network sciences". In fact, this is a transitional step to something more evolved. This is a way to introduce the era of "Intelligent Manufacturing Systems" as specified in the IMS program, to prepare a new paradigm shift toward bio-inspired systems. Indeed, when considering that our knowledge of biomechanisms is not even a millionth of what we should know about life science or nature, it would be pretentious to claim that human beings are able to carry out "bio-inspired" systems. There is a huge gap between our intentions and the reality of facts.
Presently, as per our level of knowledge and experience, the final capabilities required for these IMSs can be expressed in four different ways:
Again, as far as we can see, the main concept behind all these characteristics is "self-organization". This concept is strongly related to the activity of so-called programmable networks. In addition to the fact that such networks, because of their dynamics, can generate new patterns, the characteristic to which we intend to return is related to the interactions, that is to say to network feedbacks or, put differently, communications.
Communication helps us reduce uncertainty; yet, like a form of entropy, doubt and ambiguity may eventually creep back into a relationship if there is no reinforcement over time. This is where the true value of phatic communication lies, as it helps maintain these connections (by reinforcing the trust in future interpersonal interactions) until more significant interactions occur.
However, it must be remembered that networked systems and NLDS have converging attractor valleys, which limit the systems’ divergences (in terms of potential dissipation): the evolution of such systems is mostly maintained at the bottom of these valleys, which contributes to their stability.
So, as already stated, networking and self-organization are the contributing factors for reducing entropy generation, but this generation is not equal to zero since it is a dissipative process.
In this section, to avoid any misunderstanding, we are discussing two points:
The first trend in production system development consists of applying global and systemic approaches relevant to “ecosystems” and “network sciences” to provide our society with more sustainable systems.
The second trend will consist of introducing autonomous behaviors and life mechanisms inspired by biology. Before measuring the impact of global bio-inspired technologies, we will study how they are implemented in some existing approaches, such as regenerative methods, which have been applied in recent years for carrying out IMSs.
Here, research and development can be classified into two groups:
In this area, two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and the other where the fitness function is mutable, as in niche differentiation and coevolution.
As an applicative example, the technical product data could determine which processes are required by the product transformation, while the process data would specify which tools and equipment are operated on the corresponding machines. The BN-type information, however, consists of the rules for cooperating machines in order to carry out a given process. Machine tools, transporters, robots and so on should be seen as biological organisms, which are capable of adapting themselves to environmental changes.
In order to realize such bio-inspired models, agent technology is generally used for carrying out the intelligent behaviors of the system such as self-organization, evolution and learning [UME 06, MAS 08]. The reinforcement learning methods, case based reasoning or even pattern recognition techniques, can be applied for generating the appropriate rules that determine the intelligent behaviors of the machines.
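As an illustration of how reinforcement learning can generate such behavior rules, here is a minimal Q-learning-style sketch in Python; the shop states, the dispatching rules and the reward values are invented for the example and are not taken from [UME 06] or [MAS 08]:

```python
import random

random.seed(1)
states = ["light", "heavy"]          # hypothetical shop-floor load states
actions = ["SPT", "FIFO"]            # candidate dispatching rules
Q = {(s, a): 0.0 for s in states for a in actions}

# Invented expected rewards: FIFO pays off under light load, SPT under heavy load.
reward = {("light", "FIFO"): 1.0, ("light", "SPT"): 0.2,
          ("heavy", "SPT"): 1.0, ("heavy", "FIFO"): 0.1}

alpha, epsilon = 0.1, 0.2
for _ in range(2000):
    s = random.choice(states)
    if random.random() < epsilon:                     # explore
        a = random.choice(actions)
    else:                                             # exploit current knowledge
        a = max(actions, key=lambda act: Q[(s, act)])
    r = reward[(s, a)] + random.gauss(0.0, 0.05)      # noisy feedback
    Q[(s, a)] += alpha * (r - Q[(s, a)])              # incremental update

# The learned rule base: the best action per state emerges from experience.
best_rule = {s: max(actions, key=lambda act: Q[(s, act)]) for s in states}
```

No rule is programmed in advance: the mapping from shop state to dispatching rule is generated entirely from the reward feedback, which is the point made above about generating the rules that determine intelligent machine behavior.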
As we can see, the contribution of biomimicry is just a follow-on step of the so-called “network sciences”. Indeed, research and development activities based on bio-inspired technologies first require mastering the underlying self-organization principles to be implemented in the models. They then introduce and integrate the notion of a swarm of cognitive agents. Here again, the swarm can be associated with collective intelligence (interactions among the agents in a programmable network), while the cognitive agents could use evolutionary algorithms to generate their own knowledge about the rules to be applied in process planning and control, for instance. All of these concepts cooperate to generate the whole schedule of the system.
The advantages of the existing concepts are inherited and integrated into the IMS-BP concept.
The way bio-inspired mechanisms are used in production systems is still conventional: in the absence of unexpected disturbances or changes on the production shop floor, a raw material becomes the final product by combining the DNA- or BN-type information generated from the first generation. In fact, in life sciences, we discover every day that our knowledge and assumptions are wrong, since the underlying mechanisms are much more complicated than expected: they always combine several underlying principles but also antagonisms; in the end, the decision is not the result of a computed algorithm but the emergence of a pattern. What we use is just a simplified mimic of a distant reality: there is still a lot of room for enhancement.
Our way of life is entirely based on vital networks. They have been significantly influencing our lives for several centuries. Nowadays, however, we are used to referring only to corporate networks, social networking, e-marketing, etc. Without them, there is no economy, but more importantly, there is no life: no water, no electricity, no power, no transportation and logistics, no telecommunications, information or collective intelligence: our modern world has been shaped around the networks.
The first characteristic of modern networking is that all emerging global networks (physical and informational ones) are intertwined, interconnected and interdependent. Any tiny problem propagates across many different networks and causes, in turn, a cascade of disasters and catastrophes, each one having an economic impact associated with a social crisis:
The second common and global characteristic is that networks are essential for meeting basic and vital needs of the population, economy, security and local or global governance. The human species is now dependent on these ubiquitous and virtual networks: we use them but do not control anything.
Indeed, the development of networks has occurred gradually, insidiously, from holonic networks: these holons (or specific agents) are highly interconnected and generate global behaviors that are qualitatively foreseeable but not quantitatively predictable.
Concerning risk management, the Internet, the cultural addiction to the Web and the reliance on economics have together created unpredictable phenomena and have caused and amplified unpredictable disasters, whose impacts remain unassessed.
The third characteristic of the networks is related to the sustainability of any human achievement: in terms of entropy, what can we say? Possible answers are discussed in the following.
Complexification first occurred sector by sector, that is to say, uniformly and consistently: electrical power systems, then energy transportation networks (e.g. oil), then information networks, etc.
In the second stage, there was an integration and a growing interdependency within these sets of heterogeneous networks: for instance, information networks were coupled with those of the electricity (information is circulating in electrical wires while these wired networks are managed by informational networks), etc.
Information networks, even if they are issued from different backgrounds and based on very diverse structures, are relatively homogeneous: there are few technology providers; the development tools are quite compatible and common. Organizationally and conceptually, this means that diversity and disorder are contained within acceptable limits: in terms of sustainability, this is a good indicator.
On the other hand, those who develop and implement Web applications are a crowd of small and private operators who act independently according to their sensibilities and interests, without any meta-control except the limited one provided by the service providers. In fact, their activity is relatively well-framed and aligned with Web technology. All the new features made possible by the development of these global networks therefore lead to a limited, if not low, generation of entropy.
“Disaster” or “bifurcation” effects (as defined in NLDS) can be very important and follow the power law that characterizes complex systems. Their frequency itself follows a James–Stein distribution, as was shown in the field of high-tech computer networks [MAS 08]. Moore’s law is also applicable to these complexification processes (not only for computers but everywhere in nature).
As a summary, the networks in which we are embedded regulate our lives and are such that any local, minor and unexpected failure (and this is not a rare event) often has a global impact on the world (SCI of NLDS).
These facts, now being discovered by some specialists in the network area, have been evident for a long time in companies such as IBM [MAS 06]. Indeed, people talk about the assembly of critical systems, the combination of several minor incidents that overlap in unexpected ways and cause the emergence of major and widespread failures, and so on.
But the risk management concept, a quarter of a century ago, was different. The phenomena related to the migration and propagation of faults in electronic circuits were considered inevitable and unavoidable. Computer technology did not have the reliability that we know today. When a large computer consisted of more than 80,000 components, we were faced with this problem daily: we tried to improve the overall reliability, and we tried to integrate that risk into our procedures (a computer crash at a major customer site was a disaster that could be contemplated but not planned for). It is a kind of anticipation which is no longer accepted today.
In the name of sustainability, fewer and fewer risks are tolerated; thus, a major disaster or industrial accident is not well accepted in our western countries. This objective is sometimes puzzling since there is now confusion between “hazard” and “risk”. This can be summarized as follows:
Our society follows a fully greedy approach: risk management is guided by money and social management, and not by technical and entrepreneurial considerations. Indeed, a “risk” is seen as the probability of a situation that could seriously affect the physical integrity of a person or of physical goods (it is associated with possible “damages”). For instance, it is said that we run the risk of catching a cold when going out, bareheaded, in cold weather, whereas we are endangered (it is hazardous) when crossing a street without looking at the car traffic. Hazard creates fear and calls for caution: fighting the causes of hazard requires more skill, courage and sometimes recklessness. In NLDS, unexpected situations require unexpected decisions.
As we can see, the notion of “entropy” cannot be the deep concern of the decision-maker; sustainability has become a societal concern and its definition has still to be refined. Our society, through network theory, has to rediscover what the term “system vulnerability” means. We have disregarded the experiences and methods of yesteryear, as if they had been thrown into the garbage.
For example, when we state that “creating a dependency relationship between two networks, even if each infrastructure is super strong, weakens the global system”, this is disturbing for many people and can be bewildering. Indeed, the Internet constraint is simply related to the fact that it is an assembly of computer network systems often designed and developed to run in a local and self-sufficient way.
To build the complete Web network, there has been no “dynamic system” approach:
In brief, to better control these networks, it will be necessary to design and develop systems even more complicated because, to recall a simple observation, “we do not control the human brain with a single neuron”.
Returning to sustainability, the objective is not to add more entropy to the system, because doing so will not increase its durability or survival, but simply to change our way of thinking with a new paradigm.
These two concepts, developed in industry, are not fully similar, since they address complementary problems and objectives [ELM 97] in planning and scheduling. They are relevant to “lean” approaches and can, however, be easily applied in our context: the reliability of the global network directly depends on the fact that all of the subnetworks are used and grown just in time (JIT).
In practice, the growth of most specific networks is always a posteriori, under the pressure of demand, according to changing needs and, finally, to fulfill some stability requirements. Indeed, cost and performance factors are important goals reflected in international competitiveness, and the choices, in terms of investments, are very tight. Continuous dynamic simulation will be used to adjust the parameters of such strategies, as well as to assess the risks and limit backorders.
Thus, when choices are made by decision-makers, they are perfectly aware of the situation. For this reason, and by comparison:
Service providers, in parallel, cover the logistics and are in charge of delivering the right amount of information, goods and services at the right place, at the right time and at the right cost. As producers, they limit their investments and production costs.
As often mentioned in this book, the world exists in duality: this enables us to find the good equilibria (convergences) and avoid a complete energy/temperature dissipation, which is the signature of complete disorder (high entropy). In this sense, speciation, as described above, is a good way to reduce entropy generation and ensure better sustainability.
Nowadays, in the networks, we conduct preventive storage only when strategic coverage is required (to protect against competition problems) or to satisfy precautionary principles (critical components, information or product supplies, etc.).
This is an unsustainable situation because, when stocks and inventories are allowed, as when setting up costly solutions to control the dysfunction of our networks, we accept imperfections and regard the presence of poor quality and lack of reliability as normal in the design and development of human achievements.
Here, the real questions are: how and why do NLDSs evolve and diverge? Toward what type of behavior can we converge? What are the long-term effects of such a situation?
Indeed, all systems issued from complexification are unpredictable; it is important, as already described [MAS 06], to further explore possible approaches to the simplexification of a subject system, rather than complexifying control procedures, which can only contribute to increasing its entropy, thus further reducing its sustainability.
In order to reduce the lockout effects, the differences and impacts of disasters, we can proceed with the introduction of internal chaotic disturbances. This is well known in automation and consists of:
What is important to know is that these networks are always subject to self-organization; we can only orient them in a given direction: we cannot control complexity, nor can we attempt deglobalization. On the Internet or Web networks, as in nature, they always evolve irregularly and in a non-reversible way, without possible backtracking.
Many scientists focus on network theory to study and understand what is happening with the networked world which surrounds our society.
In order not to rediscover already-known approaches or to run up against some still-unknown paradigm, such as those relevant to bio-mimicry, it would be advisable to highlight some underlying principles and mechanisms already used in economic and social sciences (and already applied in enterprise engineering).
Indeed, when analyzing the effects of some major disturbances occurring in a network, we may report that:
Lastly, one very important effect of network implementation is related to the standardization and unification of concepts:
As a result, the developments and evolutions of our society are carried out with a minimal entropy generation, knowing that the entropy in some specific areas has decreased.
To specify some of our statements related to entropy creation, it is useful to introduce some additional concepts to be applied in networked information systems involving technologies such as the Internet, social networks, the World Wide Web and other embedded applications. In many recent studies, it is said that the utility of an information network depends on Metcalfe’s law.
The number of arrows (hence links) in a complete graph comprising n nodes is equal to n(n − 1)/2, which can be approximated by n²/2 as n increases. Metcalfe’s law stipulates that the more links there are within a network (that is to say, possible “pairs of connections”), the more people can be interconnected and the more valuable the network is.
Nevertheless, Reed’s law is more suitable for this kind of analysis: it is an assertion formulated by David P. Reed which states that the utility of a network, such as a social network (Facebook, etc.), can scale exponentially with the size of the network. Indeed, the number of possible subgroups of network participants is 2^N − N − 1, where N is the number of participants (nodes). It is quite normal to reason like this because people often belong to several groups of interest. This utility grows much more rapidly than that of Metcalfe’s law, so that even if the utility of the groups available to be joined on a peer-to-peer basis is very small, the network effect of potential group membership can dominate the overall economics of the system:
Consequently, the networks described above, with their pros and cons, are the best way to develop our capabilities when associated with a minimal entropy generation. For all these reasons, we will say that the networking of our society is a rather sustainable process, able to develop the human species, and hence DNA, at the lowest thermodynamical cost.
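The contrast between the two laws discussed above can be checked numerically with a short sketch (plain Python; the node counts are arbitrary):

```python
def metcalfe_links(n):
    # Number of links (pairs) in a complete graph of n nodes: n(n - 1)/2.
    return n * (n - 1) // 2

def reed_subgroups(n):
    # Reed's law: number of nontrivial subgroups among n participants,
    # 2**n - n - 1 (all subsets minus the n singletons and the empty set).
    return 2 ** n - n - 1

# Reed-type utility overtakes Metcalfe-type utility very quickly.
for n in (5, 10, 20):
    print(n, metcalfe_links(n), reed_subgroups(n))
```

For n = 20, the pairwise count is only 190, while the subgroup count already exceeds one million, which is why the group-forming effect can dominate the overall economics of a social network.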
Many people and scientists talk about bio-inspired systems. According to Benyus [BEN 97] and Paulo Leitao, it is time to start developing bio-inspired systems. It is necessary, however, to point out some actual practices related to bio-inspiration. Many scientists, architects, etc., implement innovative processes, shapes or patterns in their work. The problem is that these are sometimes poor mimics of reality, where they simply create a nice solution to a given problem or develop an alternative to a computerized algorithm, etc. Moreover, they are applied in a static way: do we know how and why such a pattern has emerged, where it comes from and what it will become in the future? What are the global interactions, constraints and embodiments associated with this pattern? What kind of dynamicity does this shape entail? What can we do with this solution for the living? Does mimicry give better sustainability over time? Why? It is not just the perpetuation of a situation or a system (this is a static and defensive position) but a plan for switching toward a new paradigm.
The first case study is related to the development of either raw materials (e.g. a specific iron+carbon alloy), a new functionality (e.g. a new medicine to fight an illness) or an alternative solution (e.g. salt extraction performed by some living organisms). Large databases have been set up within this framework. They also enable geologists and engineers to work together, develop transdisciplinary skills and generalize the system analysis approach, which is useful to address problems related to global and sustainable contexts.
Even if each scientist or engineer is at first only concerned about their own problems, it is important that they remain open-minded to discoveries at the border of different disciplines and to transpose them in innovative areas.
Here is a fundamental choice set in managing skills in an enterprise:
For over a decade, several examples have existed in our daily lives, where engineers are exploiting the reusability of products and services to improve the sustainability of the whole economic and industrial world. For instance, as defined in the IMS program:
The second application field is related to organization and complex systems management. Indeed, conventional optimization, monitoring and control are dissipative approaches, and thus entropy generative. Another way to manage such systems is to look at nature since, over several billions of years, it has evolved by implementing innovative and sustainable complex systems. Before developing this point, we must analyze the evolution of the physical implementation of a DSS or information system.
Figure 11.10 details the evolution of the main information system architectures. Even if some terms such as “intelligence” seem to be inappropriate (“smart” being more suitable), we can see that it successively integrates:
We suggest, however, completing such a graph with an additional step related to the worldwide usage of the Internet. Indeed, the conventional planetary networks will still evolve and generate mutations leading to cultural and governance paradigm changes, and AmI will still develop due to pervasive computing. However:
As we can see, most of these changes are limiting the generation of entropy. Indeed, we are heading toward more harmonious and consistent environments. However, as stated in the last sentence (last step), we are heading toward a complexification: the Internet of the Minds will be followed by an explosion of the knowledge. New thinking will emerge, new ideas, concepts, models, intellectual assets, cultures and spiritualities will be generated, and this disruptive step will be associated with a new diversity, that is to say an increased entropy.
Compared to network sciences, bio-inspired technologies mainly bring two additional and sophisticated capabilities:
In the natural environment, collective intelligence is carried out by simple interactions of individuals. Swarm intelligence is established from simple entities, which interact locally with each other and with their environment. Nevertheless:
In a manufacturing system, which is seen as a community of autonomous and cooperative entities, self-organization is carried out by reorganizing its structure, through a local modification and matching between machine capabilities and product requirements [MAS 08, LEI 09]. Each machine has a pheromone value for a specific operation, and the machine with the shortest processing time for that operation holds the highest pheromone value, without external intervention.
In the bio-inspired concept, swarm intelligence technology can be applied in the integration of manufacturing scheduling and control, where the manufacturing architecture is a swarm of agents. Each agent represents a manufacturing resource such as a robot, a machine tool or a workpiece. These agents use the ant colony algorithm to generate better operation plans, and then negotiate to generate the whole schedule for the system. The intelligence and learning skills embedded in each agent determine the degree of flexibility of its behaviors. This is the same approach that is going to be implemented in the next generation of air traffic control: more autonomy will be assigned to each plane, and routing control will be performed by the planes themselves in interaction with the other planes in a given neighborhood.
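A minimal sketch of such pheromone-based machine selection is given below (Python; the machines, processing times, evaporation rate and swarm size are hypothetical parameters, not the models of [MAS 08] or [LEI 09]):

```python
import random

# Pheromone-based machine selection, ant-colony style, for one operation type.
random.seed(2)
proc_time = {"M1": 4.0, "M2": 2.0, "M3": 6.0}   # hypothetical processing times
pheromone = {m: 1.0 for m in proc_time}          # equal start: no machine favored
rho = 0.1                                        # evaporation rate

def choose_machine():
    # Roulette-wheel choice weighted by current pheromone levels.
    total = sum(pheromone.values())
    r, acc = random.uniform(0.0, total), 0.0
    for m, p in pheromone.items():
        acc += p
        if r <= acc:
            return m
    return m  # numerical fallback

for _ in range(200):
    # A small swarm of "ants" (products) routes itself each cycle.
    counts = {m: 0 for m in pheromone}
    for _ant in range(20):
        counts[choose_machine()] += 1
    for m in pheromone:
        # Evaporation plus a deposit inversely proportional to processing
        # time: fast machines are reinforced, without external intervention.
        pheromone[m] = (1 - rho) * pheromone[m] + counts[m] / proc_time[m]

best = max(pheromone, key=pheromone.get)
```

Because the deposit is inversely proportional to processing time, the fastest machine accumulates the highest pheromone level and ends up attracting most of the work, with no central controller involved.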
Here, swarm intelligence technology optimizes resource use efficiency by collectively improving the solution; it is concomitant with waste minimization since, over time, the proposed path (or logical solution) produces less and less waste at the individual level, a positive effect. Indeed, according to the second law of thermodynamics, high-entropy wastes are incompatible with the low entropy generation inherent in nature’s biosystems. The solutions based on the system’s integration enable us to capitalize on the energy embodied (experiences and errors) in previously wasted solutions. This waste thus becomes realized as feed streams, or assets, for new solutions and reusable experiences: the entropy produced by the solutions, especially that due to inappropriate strategies, will then remain low.
In order to increase the intelligent behaviors of agents, they can be equipped with cognitive capabilities using cognitive technology. Concerning the swarm intelligence aspect, manufacturing systems are considered as a swarm that shows collective intelligence through interactions among the holons or agents. In order to implement this approach, agent technology can be used [LEI 02]; a more evolved approach, however, based on the BDI concept [MAS 06] – belief, desire and intention – was implemented in the PABADIS European project in 2004 to improve the autonomous characteristics of conventional agents.
Here, the difficulty is to integrate cognition and smart behaviors in these cognitive agents to ensure the flexibility of the manufacturing system, adapting it to changes and unexpected disturbances [PAR 10]. This requires an agent to use its own knowledge and experience to make a decision that is suitable for the status of the resource, and then to face an unfamiliar status: here, self-learning and inheritance capabilities must be provided to the agent. This is why, in our model of BDI agents, a hybrid approach based on knowledge technologies (CBR) and ANNs was planned.
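To make the CBR part concrete, here is a minimal case-based decision sketch in Python; the situation features (machine load, queue length), the cases and the actions are invented for illustration and are not the agent model of the PABADIS project:

```python
# Minimal case-based reasoning sketch for a resource agent facing an
# unfamiliar status; features, cases and actions are hypothetical.
case_base = [
    # (machine_load, queue_length) -> remembered best action
    ((0.2, 1), "accept_job"),
    ((0.9, 8), "reroute_job"),
    ((0.5, 4), "negotiate_deadline"),
]

def distance(a, b):
    # Simple Euclidean distance between situation feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def decide(status):
    # Retrieve the most similar past case and reuse its action.
    nearest = min(case_base, key=lambda case: distance(case[0], status))
    return nearest[1]

def learn(status, action):
    # Retain a new case once the reused action has proved suitable.
    case_base.append((status, action))
```

A new, unfamiliar status is handled by retrieving the most similar past case; once the decision has proved suitable, `learn` retains the episode, which is the self-learning loop mentioned above.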
COMMENT.–
In industry, and more specifically in industrial automation, many similar applications are implemented using a wide variety of software tool sets directed at a number of different operating systems with varying degrees of commercial success.
This unchecked OS variety, however, has significantly increased automation system entropy. Herein, we refer to a measure of the complexity of software interfaces in industrial systems, their performance cost and overall lifecycle economics.
Another aspect of sustainability pertains to software control and administration processes. Regarding reliability, and facing an increasing system entropy (in finance, environment, effluents, emissions control, etc.), the systems must continue to perform their designed functions flawlessly, no matter how many people are assigned to such control or what physical and logistic means are invested in it: the objective is to continue working correctly to ensure viable and secure solutions. This sometimes means working independently of consistent and economical performance, and with no consideration of a sustainable competitive advantage for the organization. Indeed, each organization develops its own formulations of management dissipative structures containing some positive and negative entropy flows. By solving these formulations, or by comparison, a best-suited structure can be estimated in order to implement a more sustainable solution with better acceptance from the population [ZHE 10].
A question may arise concerning how entropy can accurately characterize algorithm performance in a DSS. According to our experience, it seems that entropy alone cannot characterize the performance of any (or the best) algorithm to be used. On the other hand, by comparing two proposed solutions through simulation, we can give a reliable statement. Some interesting research in different areas is being conducted to explore whether entropy gives good performance bounds for some online problems known in the literature. In addition, a method of determining linear combination weights based on entropy, using optimization theory and Jaynes’ maximum entropy principle, has been studied to deal with the problem of determining the weights in multiple attribute decision-making. These are improvements that we will not develop further in this chapter.
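The entropy-weight idea just mentioned can be sketched as follows (Python; the 3×3 decision matrix is an invented example with benefit-type attributes):

```python
import math

# Entropy weight method for multiple attribute decision-making: attributes
# whose values vary more across alternatives (lower entropy) get higher weights.
matrix = [  # rows: alternatives, columns: attributes (invented values)
    [7.0, 9.0, 2.0],
    [8.0, 9.0, 6.0],
    [6.0, 9.0, 9.0],
]
m, n = len(matrix), len(matrix[0])

weights = []
for j in range(n):
    col = [row[j] for row in matrix]
    total = sum(col)
    p = [v / total for v in col]                     # normalized shares
    # Shannon entropy of the column, normalized by log(m) so that e <= 1.
    e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
    weights.append(1.0 - e)                          # divergence degree

s = sum(weights)
weights = [w / s for w in weights]                   # normalize to sum to 1
```

The second attribute takes the same value for every alternative, so it carries no discriminating information and receives an almost-zero weight; the attribute with the largest spread dominates.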
Cloud computing is a well-known concept widely used in today’s strategies. It consists of using a distributed information system spread among several servers and computers via a digital network, as though they were a single computer. It provides computation resources, software, data access and storage: these services do not require end-user knowledge of the physical location and configuration of the system that delivers them. Parallels to this concept can be drawn with the electric grid, wherein end-users consume power without needing to understand the component devices or the infrastructure required to provide the power service. This concept dates back to the 1960s, when John McCarthy stated: “computation may someday be organized as a public utility, like in the electric industry”. Figure 11.11 (from [WIK 15]) illustrates quite well the apparent structure of such a concept.
The first characteristic of cloud computing is that the computing is “in the cloud”: as stated in this encyclopedia, the processing (and associated data) is not located in a specified, known or static place, but in a network of servers, in the cloud of a service provider such as Google, whereas in a virtual production system the processing takes place in one or more specific servers that are known.
Another characteristic, in “cloud storage”, is that data are stored on multiple virtual servers, generally hosted by third parties, rather than on dedicated servers (as is the case with private cloud networks, for quality or security reasons); the task processing is done over the Internet via a WiFi or 3G connection. Some IS companies operate large data centers, and people who require their data to be hosted buy or lease storage capacity from them and use it for their storage needs. In a physical sense, however, the resource may span multiple servers.
As we can see, the cloud is a destructuration of the heterogeneous constituents included in many diverse applications (as shown in the “five boxes” icon of the J.B. Waldner graph in Figure I.4); then, through the cloud, we proceed to the restructuration of homogeneous contents to obtain wider, consistent, maintained and secured distributed clusters. Applications are also directly maintained and updated on a server where they are centralized.
The third characteristic of the cloud is that all the ingredients are there to create collective intelligence: applications are distributed in many servers and can be shared by several people working together; the database can be commonly shared; we can count on the emergence of synergies through clashes of different thoughts and sharing of reasoning results.
The only difference is that in the cloud, the collaborative technology is already implemented and ready for use either at individual or collective levels.
Entropy is associated with a pseudo-randomness which is generated by an information system and made available through different applications.
In software, a source of entropy may not be as random as expected: the cause can be found in a weakness that was introduced into an application. Indeed, a single wrong line of code in the open source of a software package may cause a side effect. This was the kind of problem we encountered, a long time ago at IBM, when developing complex operating systems for telecommunication systems on our manufacturing sites.
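This kind of weakness can be made visible by measuring the Shannon entropy of the byte stream an application produces; in the sketch below (Python), the “flawed” generator is a hypothetical example in which one wrong masking line restricts the output to 64 of the 256 possible byte values:

```python
import math
import random

def byte_entropy(data):
    # Shannon entropy in bits per byte (8.0 = perfectly uniform bytes).
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(3)
good = bytes(random.randrange(256) for _ in range(100_000))
# Hypothetical flawed generator: one wrong line masks off the high bits,
# so only 64 of the 256 byte values can ever appear.
weak = bytes(random.randrange(256) & 0x3F for _ in range(100_000))

e_good = byte_entropy(good)
e_weak = byte_entropy(weak)
```

The healthy stream measures close to the 8 bits/byte maximum, while the flawed one cannot exceed 6 bits/byte (log2 of 64), a drop an attacker can exploit even though the output still “looks” random.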
The same happens with the analysis, interpretation, speculation and generation of rumors in the area of collective thinking: these operations may generate unexpected troubles, deviances and societal disturbances, etc.
When a given, unique information system or application is developed, owned and used interactively, in an arbitrary manner, by a wide population of humans, we may encounter huge entropy generation at the application and results level. However, concerning the entropy generation of the support system itself, the balance is different, since this structured organization is low in entropy generation.
More precisely, in a collaborative environment, such as collective thinking and cloud computing, we generally work from a virtual and secured hardware server and with simpler common applications shared by many users. Thus, the rate of entropy generation is lower. We are, however, more exposed to unexpected events such as network and database dysfunction; potential problems may also arise from the underlying hardware and generate entropy.
Thus, it is again a dual aspect of sustainability: some parts of any complex system are entropy generators, while others are entropy reductors. As we have in NLDS with positive and negative feedback loops, we have to perform global and systemic analysis of the entropy, considering positive and negative entropy generation mechanisms, that is to say, entropy and anti-entropy.
In terms of sustainability assessment: as soon as a set of virtual machine instances runs within a cloud-based virtualization service, they can potentially share the same source of entropy, issued from specific errors in the underlying information system. If we are able to predict the stream of entropy that might be utilized by an application on one of those instances, we can target the entropy generation related to a specific customer.
As a result, in terms of sustainability, the generation of entropy would be lower than in a dedicated private information system (without considering the security problems, which are supposed to be solved by encryption or similar means, knowing that such measures are never 100% reliable).
In this section, to avoid any misunderstanding, we are discussing two points.
In the previous chapters, the underlying mechanisms of physical, industrial and organizational events were studied in order to improve the design of products and services.
As a reminder, it is suggested to work in a multidisciplinary and transdisciplinary way in order to integrate new capabilities and achievements coming from completely different fields.
Within this framework, bio-inspiration and networking sciences were often quoted.
The first weakness of the design approaches and methodologies is that mimicry is applied in a partial and static way. Indeed:
Regarding the first point, it is important to mention that we cannot consider a system independently of its environment. Integration of new concepts requires “system analyses”. The second lesson learned is related to people’s behavior. Reactions and statements such as “I don’t care, it’s not my problem”, made by a new generation of managers, are completely irrelevant: we cannot work independently from others. A firm is a living system and the notion of “general interest” is innate in our ways of thinking and acting. We are heading towards the same global entity: greed, individualism and selfishness are luxuries that we can no longer afford.
The next lesson is that introducing fundamental physics into our bio-inspired processes is a necessity. Indeed, reversibility of time, quantum effects, etc., are emerging in the design of living organisms, not only at the micron or cosmic scale but also at the meso- and macrolevels of our enterprises. Nature has developed and exploited such properties because they are able to bring an advantage: even if the amplitude of some physical phenomena is small under normal conditions, they are there. We can mention:
The second weakness is to overlook the fact that universality principles always apply, whatever the fields considered and whatever the physical size of the agents. Here, the main property we are interested in is that nature is dual: this holds for any behavior or functionality to be integrated in the design of an assembly in order to obtain better sustainability. This is essential in any design for sustainability (DFS): when developing a new product or information system, the presence of antagonisms is a must; when considering the contribution of physics, each part involved in a system generates more or less entropy, and this has to be included in any global sustainability analysis.
The behaviors of each constituent are the result of interactions between agents which fall within the scope of NLDS theory and network theory. Positive and negative feedback loops are at the heart of these complex behaviors. Most importantly, it is mandatory to have antagonistic effects at each level of each characteristic. It is a general fractal construct, based upon complementary and contradictory properties, which ensures a constant and sustainable evolution. Otherwise, deviances and irreversible divergences could endanger the equilibrium and evolution of a global system.
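A minimal sketch of such antagonistic loops is the logistic map, a textbook NLDS example in which a positive (growth) term and a negative (saturation) term compete; the parameter values below are illustrative only. When the two effects are balanced, the system settles on a stable equilibrium; when the positive loop dominates, the trajectory becomes chaotic and tiny differences diverge irreversibly:

```python
def logistic_trajectory(r, x0=0.2, steps=200):
    """Iterate the logistic map x -> r*x*(1-x) for a number of steps.

    The term r*x is a positive (amplifying) feedback; the factor (1-x)
    is a negative (limiting) feedback. Their interplay decides whether
    the system converges or diverges into chaos.
    """
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Balanced feedback: the trajectory settles on the fixed point 1 - 1/r.
x_stable = logistic_trajectory(r=2.5)
print(f"r=2.5 converges near {x_stable:.4f} (fixed point 0.6)")

# Dominant positive feedback (r=4.0): deterministic chaos, no equilibrium;
# two almost identical starting points follow diverging trajectories.
x_a = logistic_trajectory(r=4.0, x0=0.2)
x_b = logistic_trajectory(r=4.0, x0=0.2000001)
print(f"r=4.0 from nearby starts: {x_a:.4f} vs {x_b:.4f}")
```

The stable regime illustrates how complementary, contradictory effects keep an evolution sustainable; the chaotic regime illustrates the irreversible divergences mentioned above.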
The third weakness is related to the fact that, behind innovative paradigms and sustainability, thermodynamics principles apply. Indeed, life is the characteristic of autonomous agents that are energy-consuming and dissipative, and able to reproduce and adapt by themselves. So, in business intelligence, or even in information systems, it is clear that bio-inspired features can bring some enhancements: the discovery of DNA, the interacting roles of proteins and enzymes, their underlying mechanisms, etc., provide obvious advantages in our decision and organizational systems; this is why decision-makers and scientists try to include their properties in our solutions, to develop more sustainable ones.
But we cannot ignore the various contributions of physics: as soon as we introduce changes of configuration, assemblies of living agents, emergence of new orders, converging attractors, transformational processes, dissipative and chaotic behaviors, etc., thermodynamics and its associated entropy are there. The problem is to introduce the notion of entropy into our processes and to use it as a main factor for measuring sustainability.
The fourth weakness is related to the lack of consideration given to DFS. In the area of sustainability, thermodynamics is able to account for a number of phenomena related to self-organization and to the transfer and processing of information, but its limits are quickly reached when we look at the concept of information taken in its entirety. In fact, a first restriction was put forward by Shannon himself: the content and meaning of a message are of no interest to his theory.
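Shannon's restriction can be made concrete: his entropy, H = −Σ pᵢ log₂ pᵢ, depends only on symbol frequencies, never on meaning. The short sketch below (the two sample messages are arbitrary illustrations) shows that two messages with opposite meanings but identical symbol statistics carry exactly the same Shannon entropy:

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Anagrams: same letter statistics, opposite meanings.
h1 = shannon_entropy("dogbitesman")
h2 = shannon_entropy("manbitesdog")
print(f"H1 = {h1:.4f} bits/symbol, H2 = {h2:.4f} bits/symbol")
```

Both values are identical (here log₂ 11 ≈ 3.46 bits/symbol, since all 11 letters are distinct): the measure is blind to semantics, which is precisely the limit reached when information is taken in its entirety.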
When several pieces of information can react together, new concepts can emerge through a process of integration, assimilation and propagation in a cognitive corpus, as we have in cellular automata or informational thermodynamics.
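The emergence of global structure from purely local interactions can be sketched with an elementary cellular automaton. In rule 90, each cell simply becomes the XOR of its two neighbors, yet a single active cell propagates into a self-similar, Sierpinski-like pattern (the grid width below is arbitrary):

```python
def step_rule90(row):
    """One step of elementary CA rule 90: cell = left XOR right (zero boundary)."""
    n = len(row)
    return [
        (row[i - 1] if i > 0 else 0) ^ (row[i + 1] if i < n - 1 else 0)
        for i in range(n)
    ]

# A single active cell: purely local rules generate a global pattern.
width = 15
row = [0] * width
row[width // 2] = 1
for _ in range(width // 2):
    print("".join("#" if c else "." for c in row))
    row = step_rule90(row)
```

No cell "knows" the global pattern; it emerges from the integration and propagation of local states, which is the analogy invoked above for the emergence of new concepts in a cognitive corpus.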
As a conclusion, we can state that:
As in physical phenomena or physical systems, there may be a “wear” phenomenon: just as a molecule only acts in a limited number of reactions, information is transitory. We can call out to people once, 10 or 100 times more than others, but the strength and efficiency of such a measure progressively decrease, as in a dissipative process tending towards a uniform “temperature”. Moreover, the difficulty in DFS is to integrate the dynamic properties of objects and agents and to show the absurdity and irrelevance of an object, a piece of information or a decision. This shows, in terms of sustainability, that information system entropy is a basic indicator which always evolves over time.
In information systems and business intelligence, it is necessary to explore alternative approaches, even if they are iconoclastic. Within this framework, we can quote Jean Pierre Bernat, in reference to an excerpt from an article [BER 99] published in the 1990s in the newspaper “Le Monde”: we have to think and act in quanta, keeping in mind that equilibria and sustainability always depend on thermodynamic considerations.
There is no difference concerning bio-inspired systems and DSS, but again, what we have to keep in mind is that human beings will never be able to mimic and emulate the brain:
Since we have a brain, the problem to be solved is how to reconcile the action of an intangible event (thinking, conscious or unconscious) on matter, organs and final goods (for instance, real or artificial neural networks) with the laws of energy conservation imposed by classical mechanics. In this area, theoretical physics is used, and this “field of consciousness” is compared with the fields of probability or plausible situations described in quantum mechanics.