This chapter revisits the concepts introduced earlier, this time in the domains of management and organization. We describe a number of cases and applications with a view to making the concrete application of complexity concepts easier.
This first section focuses on the design level, which should always be the initial concern for practitioners.
As a generic example, a company as a whole can be considered a complex system, since it interacts with its suppliers, its clients, the economic and financial context, its shareholders and the needs of society. While the principles developed in the previous chapters apply directly, attention must nevertheless be paid to the following points and basic rules:
If the design of a complex system must cover these many criteria, constraints and cautious approaches, it is because we operate in a world of unpredictability, uncertainty and risk that we cannot directly control. We experience disruptions and must deal with chaotic systems by adapting as well and as reactively as possible, and by imagining solutions that conventional approaches cannot provide.
Currently, in economics and industry, we are faced with two problems related to “mass customization”:
One of the answers to these problems is called “speed to market” and, in both cases, the competitiveness criterion put forward is that of product or process flexibility. At first sight, the second criterion seems to be the best mastered and often makes use of common sense. The first criterion is more “cumbersome” to implement, both conceptually and in terms of investments.
The question is how to respond to these on-demand design and configuration requests in the shortest possible time. Product development time and the time it takes to obtain these products through their manufacture and delivery must be considerably reduced. The objective is not to achieve reductions of 5 to 10% but of 50% or more, to remain globally competitive. In the automotive and aviation industries, interesting approaches and results have been achieved. Without precisely describing the strategic approaches of manufacturers such as Renault and Boeing or Airbus, we can nevertheless highlight key elements:
As part of the design and development of mass customization, several approaches are analyzed; they mark a rapid evolution in know-how. Describing them gives us a better understanding of where we are heading. Regardless of the cost and the size of the final products or services, the following cases will be discussed in turn:
These approaches are based on an essential element: technical data management (TDM). These data are managed using appropriate tools, such as IBM’s PDM (Product Data Management) or ThinkTeam from Think3. They are coupled with 2D/3D computer-aided design (CAD) tools such as ThinkDesign by Think3 or CATIA by Dassault Systèmes (which offers a global solution, PLM – Product Lifecycle Management). These approaches provide manufacturers with complete and efficient solutions for on-demand design. In a simplified way, and based on a specification sheet or customer requirements, they are able to:
Once the products have been defined, they can be validated on the basis of business expertise [MAS 95b] and simulations. This makes it possible to detect fitting or manufacturing impossibilities and to eliminate functional or structural misinterpretations, unnecessary or undesirable modifications, co-requisite and prerequisite problems, etc. It is then possible to automatically generate the technical data, plans, routings and bills of materials (FBM – Field Bill of Materials; FFBM – Field Feature Bill of Materials) for the new product and its associated processes, with minimal cost and risk. The gains obtained with such automated processes are about 50–75% in time and money (these are, in fact, cost “avoidances”). In this case, we carry out “intrinsic differentiation in design”.
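As a rough illustration of this kind of on-demand configuration, consider the following sketch in Python (the module names, options and selection rule are hypothetical and deliberately simplified; it is not the logic of the PDM/PLM tools cited above). A customer specification is checked against a library of standard, pre-validated modules and turned into a flat bill of materials, with impossibilities reported before any manufacturing effort is spent:

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """An off-the-shelf module: the options it provides and its co-requisites."""
    ref: str
    provides: set
    requires: set = field(default_factory=set)

# Hypothetical library of standard, pre-validated modules.
LIBRARY = [
    Module("ENG-01", {"engine:120hp"}),
    Module("ENG-02", {"engine:150hp"}, requires={"cooling:reinforced"}),
    Module("COOL-01", {"cooling:standard"}),
    Module("COOL-02", {"cooling:reinforced"}),
]

def configure(spec):
    """Turn a set of customer options into a flat bill of materials.

    Raises ValueError when an option cannot be satisfied, i.e. a design
    impossibility is detected before any manufacturing cost is incurred.
    """
    bom, satisfied, pending = [], set(), list(spec)
    while pending:
        need = pending.pop()
        if need in satisfied:
            continue
        candidates = [m for m in LIBRARY if need in m.provides]
        if not candidates:
            raise ValueError(f"no standard module satisfies option '{need}'")
        module = candidates[0]            # simplistic selection rule
        bom.append(module.ref)
        satisfied |= module.provides
        pending.extend(module.requires)   # pull in co-requisites automatically
    return sorted(bom)

print(configure({"engine:150hp"}))        # ['COOL-02', 'ENG-02']
```

The point of the sketch is the ordering of concerns: validation (co-requisites, impossibilities) is performed on the digital definition of the product, and the bill of materials is a by-product of that validated configuration rather than a document drawn up afterwards.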
This approach has been in place for two decades. During the design and manufacturing phases, it consists of assembling off-the-shelf products. This is of course an image, but it illustrates the fact that we will focus mainly on the operations applied to a standard process, of a kind found in all fields of application. In order to remain competitive, the Quantum Leap approaches promoted by J.R. Costanza [COS 96] have improved the situation in a number of companies. This approach is intended to deliver customized products as quickly as possible (speed to market). It is based on a few essential points relating to a manufacturing process:
In light of what we have just seen, priority is given to work done to a high standard and to the implementation of concepts such as TQC (Total Quality Control) within a global approach.
However, at R&D level, it is possible to highlight some concepts related to self-organized systems. To achieve this, the complexity factors of the processes are limited in width and depth by way of modifying the design of the products. One way to do this is to have an “off-the-shelf” bank of standard components or sub-assemblies and design the final product by assembly, according to the customer’s specifications or model. This is intended to limit efforts at the level of scale, classification and the design of associated processes.
Traditional multi-level classifications are replaced as much as possible by flat, single-level FBMs. Indeed, in conventional systems, the product design is functional: basic parts and components are considered and grouped into sub-assemblies, then assemblies and so on. In our case, sub-assemblies and components are treated as independent entities, monitored, purchased and controlled as full-fledged items directly from suppliers or parts manufacturers. This approach is tantamount to dismantling the classification and decoupling production operations. Thus, components, parts and assemblies are managed directly according to demand, in linear production lines and without the “upstream” impacts and fits and starts associated with the dynamics of multi-level bills of materials. In particular:
The flattening of the classification and the elimination of multi-level bills of materials and multiple-use classifications are intended to simplify processes. A supply or quality problem is handled by the purchasing department and is not directly integrated into the central process of the final product, because it simply results in a replenishment order under a well-established “client–supplier” contract. This reduces the costs and delays involved in dealing with problems.
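To make the difference between a multi-level structure and a flat, single-level FBM concrete, here is a minimal sketch (part numbers and quantities are invented for illustration). It collapses a conventional multi-level bill of materials into the single-level list of procured items and total quantities that drives a linear production line:

```python
# A conventional multi-level BOM: each entry lists (child, quantity) pairs.
MULTI_LEVEL_BOM = {
    "FINAL-PRODUCT": [("SUB-ASM-A", 1), ("SUB-ASM-B", 2)],
    "SUB-ASM-A":     [("PART-1", 4), ("PART-2", 1)],
    "SUB-ASM-B":     [("PART-2", 2), ("PART-3", 6)],
}

def flatten(item, qty=1, flat=None):
    """Collapse the multi-level structure into a flat, single-level FBM:
    every procured item appears once, with its total required quantity."""
    flat = {} if flat is None else flat
    children = MULTI_LEVEL_BOM.get(item)
    if children is None:                      # leaf item: procured directly
        flat[item] = flat.get(item, 0) + qty
        return flat
    for child, n in children:
        flatten(child, qty * n, flat)
    return flat

print(flatten("FINAL-PRODUCT"))
# {'PART-1': 4, 'PART-2': 5, 'PART-3': 12}
```

Once flattened, the intermediate levels no longer drive the planning: only the final product and the directly procured items remain, which is what allows supply problems to be handled at the supplier interface rather than rippling up through a multi-level nomenclature.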
When using the final product, some problems may occur. They concern its functional aspect, i.e. malfunction, as well as improvements in performance or use. In these cases, two modes of product evolution are introduced. In a simplified way:
This approach simplifies the technical evolution of complex systems by limiting the number of change implementations, producing, testing and planning them as if they were an independent production process.
The approach described here concerns “differentiation in product assembly development”. Here, the development process of a product is similar to that of a manufacturing process and constitutes a continuous flow. However, the synchronization and sequencing of tasks in continuous flow processes are problematic on another level: in such workshops, these phenomena are subject to sensitivity to initial conditions (SIC), and nonlinearities and discontinuities will appear. The principle is to design the product in such a way that it is modular, or can be made modular according to functions and options, and can be assembled from standard components. The decoupling of the process into independent sectors, or into independent and communicating autonomous entities, makes it possible to structure the production system into profit centers (operational and financial) organized in a network. Apart from the fact that the management of such systems becomes less complex than for traditional systems, the system is predisposed to the emergence of stable orders and operating states, allowing greater reactivity.
Product reconfiguration is a solution for mass customization and for the adaptation of products to a given problem. The examples we will consider come from the information processing and microelectronics industries. They are the result of real experiences and cases spanning several decades and, although some succeeded and others were doomed to failure, they certainly constitute a valuable knowledge base for the design and development of new products, whatever the field of application. The principle is to design generic systems containing a sufficient number of pre-assembled and tested devices, and then to customize the added value at the end, as late as possible.
The first example concerns the redundancy of circuits to deal with failures and improve the reliability of an electronic system (consisting of a set of circuits). This point will not be developed here, as it is relatively well covered in reliability manuals. However, from experience, having participated in the development of high-powered, top-of-the-range mainframe computers in Poughkeepsie (NY, USA), the great difficulty lies in developing models to make reliability forecasts over time, to optimize the number of redundant circuits, and to decide how to activate them dynamically and in a timely manner. Thus, it is currently possible to ensure the proper functioning of a computer, without functional failure (fault tolerance) and with remote maintenance, for 10 years, thanks to hardware or software reconfiguration. This reconfiguration is deterministic and follows specific rules designed to reduce the risks and uncertainty associated with random disturbances. In light of experience, we will say that this reconfiguration is of the “meso-granular” type.
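As a reminder of the kind of calculation such models build on (this is only the textbook k-out-of-n redundancy formula, with arbitrary figures, not the forecasting models referred to above), the reliability of a block of n identical, independent circuits of which at least k must work is a simple binomial sum:

```python
from math import comb

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n identical, independent circuits
    (each with reliability r over the mission time) are still working."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Illustrative figures: each circuit has a 0.95 reliability over the period.
print(k_of_n_reliability(1, 1, 0.95))   # no redundancy:       0.95
print(k_of_n_reliability(1, 2, 0.95))   # 1-out-of-2 sparing:  0.9975
print(k_of_n_reliability(2, 3, 0.95))   # 2-out-of-3 voting:   0.99275
```

The hard part, as noted above, is not this arithmetic but estimating r over time and deciding when and how spare circuits are switched in.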
The second example concerns the mass customization of computers under cost and time constraints. The initial principle is simple because it is based on a widespread economic observation nowadays: net income is related to the service provided by the system and not to the weight of the hardware included in the product. In this case, the principle is once again simple: everything is based on the assembly of a “machine memo”, i.e. a computer whose configuration corresponds to an average demand in a given segment. This “machine” is assembled and tested to its final stage, long before its assignment to the customer is made. In general, several types of machines will be launched in production, with given capacities and performances, to limit the difficulties of subsequent assignment or reassignment. When a specific request arises, we can:
In both cases, configuration changes are calculated, planned and managed by a central control system. This is a reconfiguration of the “macro-granular” type.
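A minimal sketch of this “macro-granular” assignment logic is given below (the capacity figures are invented and the selection rule is deliberately naive; a real assignment involves many more constraints). The central control picks the pre-built machine closest to the request and derives the configuration delta to apply:

```python
# Pre-assembled, pre-tested base machines; capacities are illustrative only.
BASE_MACHINES = {
    "M-SMALL":  {"cpu": 4,  "memory_gb": 64,  "storage_tb": 2},
    "M-MEDIUM": {"cpu": 8,  "memory_gb": 128, "storage_tb": 8},
    "M-LARGE":  {"cpu": 16, "memory_gb": 512, "storage_tb": 32},
}

def assign(request):
    """Pick the base machine that covers the request with the least surplus
    capacity and return it with the configuration delta on each axis."""
    feasible = {
        name: cfg for name, cfg in BASE_MACHINES.items()
        if all(cfg[axis] >= request[axis] for axis in request)
    }
    if not feasible:
        raise ValueError("no pre-built machine covers this request")
    # Naive rule: minimize total unused capacity across all axes.
    name = min(feasible, key=lambda n: sum(feasible[n][a] - request[a] for a in request))
    delta = {a: feasible[name][a] - request[a] for a in request}
    return name, delta

print(assign({"cpu": 6, "memory_gb": 96, "storage_tb": 4}))
# ('M-MEDIUM', {'cpu': 2, 'memory_gb': 32, 'storage_tb': 4})
```

The interest of the approach is that the expensive, failure-prone part of the work (assembly and testing) has already been done on the base machine; what is computed at order time is only the delta.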
Computers are built around microprocessors. These can be general purpose (multipurpose mainframes): IBM develops its own integrated circuits, either alone or with other manufacturers (as was the case with the PowerPC, developed with Motorola), and the same applies to Intel microprocessors. These microprocessors are very economical and integrate more and more components (according to Moore’s law), but the difficulty of implementation and their performance remain linked to the operating systems.
However, in order to perform very specific tasks (vector calculation, security, cryptography, pattern recognition, etc.), coprocessors are used that will do the work 10 to 100 times faster than a general-purpose processor. These particular circuits are ASICs (Application-Specific Integrated Circuits). Given the lower volumes produced, their costs are higher, and the difficulty here also lies in their integration into larger electronic modules (TCM – Thermal Conduction Modules, Air Controlled Modules, etc.).
All this leads to the design of complex electronic systems due to the diversity of components, their redundancy and the numerous connections between them. But how can we combine low cost, flexibility or versatility and speed? Two approaches are adopted:
The basic circuits contained in the logic blocks may have a more or less fine granularity, but, proportionally speaking, the technology remains of the “microscopic” or “mesoscopic” type. With field-programmable gate array (FPGA) technologies, specific functions of a microprocessor or an assembly of functions can be easily associated with them to get a more complex processing unit.
This makes it possible to combine universality and performance but, knowing that “nothing is free”, simplexity in terms of hardware is replaced by significant complexity in terms of programming, i.e. software. In 1984, our team, interested in application parallelization, supervised a thesis on OCCAM [PAU 85] for low granularity, and another on the structure of parallelizable algorithms [MAS 91] with microprocessor assemblies, for high granularity. The objective was to use the computing power available in a plant to better solve production management problems (which, at the time, had not yet been identified as complex!). Of course, compared to the work carried out more recently by the RESO project at the Ecole Normale Supérieure in Lyon, France, or by GRID Computing, now used on a large scale by IBM and other large companies for, for example, the study of protein folding and proteomics, these results seem derisory; however, they contributed to a better understanding of the problems associated with automatic circuit configuration. This pioneering work, carried out with the former IBM Scientific Centre in Paris [HER 86] and the CNUSC in Montpellier, was not followed up, given the compatibility problems encountered and the unavailability of industrially reliable software. Between theory and practice, many years are, understandably, necessary.
Before moving to new paradigms, we must first try to improve what exists, optimize it and finally solve technological problems, whether they are hardware, software or organizational in nature. In the case of configurable circuits, several points must be resolved beforehand. For example:
Gradually, in terms of design and architecture, the boundary between programmable and configurable processors will blur. This will allow generic or specialized tasks to be carried out by pooling the available resources. In the context of the Internet, which is only an extension of this concept, this has already been done with IBM Grid Computing since 2002 to address major problems of our society.
We wish to consider here the design of a distributed autonomous system. For example: a network of microprocessors for scientific computing, relocation as part of the electrical wiring of computers. The question is “how can we organize logistics and task assignment in an open e-business system?” The difficulty comes mainly from interactions and diffuse feedback loops. We will try to apply the concepts developed in this chapter. The initial methodology can be supplemented by proposing the following approach, based on dynamics and unpredictability:
When the decision-maker is confronted with a complex system, it is essential to use modeling and simulation to infer its behavioral trends or validate options. Moreover, the overall behavior of the system can only be approximated. Indeed:
To counter the “global dynamic” associated with programmable networks, it is important to draw the attention of specialists to the fact that we can only think in terms of improvements and not in terms of optimization. However, the processes of continuous improvement of a process or of the behavior of a system come up against the fact that the notion of “dynamics” can be considered as contradictory to that of “continuous evolution”, which implies stability.
Similarly, learning involves collecting information about what is being done and observed. This takes time, especially since it is sometimes a matter of making test plans to determine which actions should be modified or promoted in light of contradictory, recoverable or recurring local objectives. The problem of dynamic adaptation of the control has not been solved to date.
In the cases studied above, the automatic reconfiguration of circuits or processors is predetermined. We have not yet reached the stage of self-organization itself and this is what we propose to consider now by quoting the work of Daniel Mange, professor at the EPFL [MAN 04]. The basic mechanisms are those found in the living world, and the objective is to design circuits or programs capable of self-replication (principle of reproduction) and self-repair (e.g. DNA).
Self-replication is a commonplace operation in the living world, and its mechanisms are beginning to be better understood in the world of information systems. They are based on the work of von Neumann as early as the 1940s, followed by Langton’s work in the 1980s [LAN 84], the overview in Wolfram’s work [WOL 02] and the Little Thumb (from Charles Perrault’s Hop-o’-My-Thumb fairytale) algorithm developed in 2003 [MAN 04]. Self-replication is the use of physical, chemical or software processes to reproduce an object or program in accordance with a plan (program) and to multiply it in a given number of copies, so as to form a community or collection of objects. Self-replication, which produces an identical copy of an object or program, always proceeds in two steps:
In this way, a more complete computer system can be created and its objects assembled to enable complex operations to be carried out. These techniques are already used on prototypes at IBM. It is therefore a “differentiation by use” since the computer program will generate its lines of code according to the needs of calculation or information processing.
In his research, Professor Langton developed a process capable of replicating a pattern in an eight-state cellular automaton. This is interesting insofar as an agent is able to reproduce itself identically, or to self-replicate according to a program, i.e. by a well-defined sequence of rules, as we observe in Nature.
In terms of exploiting the structures or assemblies thus obtained, we can refer to the theory of cellular automata in the field of computing. Indeed, when we want to exploit the properties of cellular automata, we find that the cells of an automaton are capable, on the basis of simple elementary rules, of making their states evolve and of generating stable, periodic or chaotic forms (state configurations), depending on the nature of the interactions. The advantage of such an automaton is to show how a network of similar objects can converge towards a given shape and generate a global order. The properties of self-replication are not explained here, but the underlying mechanisms of collective behavior can be better understood. The complexity that we have already addressed does not always translate into the generation of a complex global function.
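A one-dimensional elementary cellular automaton, in the sense popularized by Wolfram [WOL 02], is enough to see these regimes emerge from purely local rules (this is not Langton’s eight-state loop, which is considerably more involved; the width, number of steps and rule numbers below are chosen only for illustration):

```python
def step(cells, rule):
    """Apply an elementary CA rule (0-255) once, with periodic boundaries:
    each cell's next state depends only on itself and its two neighbors."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=12):
    """Print the evolution of a single active cell under the given rule."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run(90)    # rule 90: a regular, self-similar (Sierpinski-like) pattern
run(30)    # rule 30: an irregular, chaotic-looking evolution from the same seed
```

The same local mechanics, with nothing changed but the rule number, produce either an ordered or a disordered global form, which is exactly the point made above about the emergence, or not, of a global order.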
The purpose of these concepts is to develop autonomous systems that can operate without the direct presence of humans or a centralized control center, because the temporal or functional constraints are significant (space, polluted or dangerous environments, etc.). Thus, a robot will be able to reproduce itself from off-the-shelf components according to the resources and capacities required by a process (an increase in production capacity) or according to its operating state: in the event of a failure, it is possible either to replace a failed component or to recreate a complete system. Similarly, in an information system, it is possible to generate computer programs from initial functional specifications and a library of standard software components.
Another application is related to the detection of periodic cycles or phenomena in an industrial, economic or financial process; to the extent that we are able to control the replication of objects, we know how to compare them directly and master the notions of differentiation (cellular differentiation, shape separation, signal separation, analysis of stock market variations, etc.). These applications have an immense scope in the fields of economics and interactive decision support systems.
In this chapter, we have focused our attention on examples from industry. It should also be noted, as mentioned in the initial chapter of the book, that “everything begins with the Organization and ends with the Organization”. This means that the design of a product or service goes hand in hand with the design of organizations. When designing an organization, we start from an advantage, namely that the human resources involved in any process bring multiple skills, intelligence and autonomy. The aim here is to design approaches in the field of organizations so as to obtain adaptable and configurable systems. In doing so, we draw on the experience gained at IBM France during its various restructuring operations, as well as on the work of R.A. Thietart in the field of strategic management [THI 00].
There are many examples in the publications of large companies such as Renault, Unilever, Microsoft, Danone and IBM. These companies have made radical changes in their purpose, activity and structure to quickly become compatible with the new economic challenges. They have always been able to adapt, seize emerging opportunities and support them. Radical changes were often implemented quickly, but were accompanied by extensive preparatory work.
In other cases, changes are gradual and subject to continuous adjustment as they follow developments, such as those in technology. However, in all cases, a strategy is made up of total or partial questioning, movements, readjustments or disruptions. On a practical level, and to better control their strategies, these companies have a strategic plan that runs over a period of 3 to 5 or even 7 years. However, the analysis of the internal or external context remains difficult. It is necessary to carry out benchmarking and the detection and identification of “low noise”, i.e. weak signals. Indeed, in a world full of disruptions and singularities, subject to chaotic phenomena, the emergence of new forms or orders is barely perceptible and must be detected as quickly as possible. This is an essential competitive factor.
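Purely as an illustration of what detecting such weak signals early might mean in operational terms (a standard exponentially weighted moving average test applied to invented figures, not a method taken from the companies cited above), one can flag a small but persistent drift in an indicator before it becomes obvious:

```python
def detect_drift(series, alpha=0.2, threshold=3.0, warmup=5):
    """Return the index of the first point (after a warm-up period) whose
    deviation from an exponentially weighted moving average exceeds
    `threshold` times the running mean absolute deviation."""
    ewma, mad = series[0], 0.0
    for i, x in enumerate(series[1:], start=1):
        deviation = abs(x - ewma)
        if i >= warmup and mad > 0 and deviation > threshold * mad:
            return i                      # earliest point worth investigating
        ewma = alpha * x + (1 - alpha) * ewma
        mad = alpha * deviation + (1 - alpha) * mad
    return None

# A stable indicator with a small, persistent shift starting at index 8.
indicator = [100, 101, 99, 100, 102, 100, 99, 101, 106, 108, 107, 110]
print(detect_drift(indicator))            # -> 8
```

The figures and thresholds are arbitrary; the point is only that the earlier such a deviation is flagged, the more time the organization has to interpret it and react.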
Several coherent and rational techniques for the design and development of complex systems have been reviewed. To summarize, it is appropriate to make some comments and draw some useful lessons for the design and future evolution of complex systems. All the current approaches described make little or no use of the desired paradigms. There is therefore a significant potential scope for further improvement of our processes. Let us review a number of characteristics, transposed from the life sciences field, in order to take them into account in future development models.
Living beings evolve under the influence of genetic mutations of very diverse origins, and this is an internal process of evolution. It is worth mentioning Charles Darwin, who stipulated that natural selection acting on random mutations is one of the main mechanisms of evolution (genetic modification). On the other hand, morphogenesis and physiological or behavioral experience, which have often been considered external to evolution, also have an influence on the genetic heritage of living beings through the progressive incorporation of acquired traits into that heritage (genetic assimilation). Thus, morphogenesis, physiology, behavior, mutations, etc. are an integral part of a living being’s evolutionary process and therefore of its ontogenesis (everything that contributes to its development). In addition, and as we will discuss later, these phenomena, by their very effects, provide organisms with a feedback mechanism for adapting to changing environmental conditions, while maintaining orderly structures and continuity in their lives.
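Before turning to what this implies for human-made systems, and purely to fix ideas on how selection acting on random mutations can be transposed into a development model, here is a deliberately tiny evolutionary loop (the “design”, the fitness function and all parameters are invented for illustration; it is not a method advocated in the sources cited):

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]             # a toy "desired design"

def fitness(genome):
    """Count the traits matching the target: the quantity selection acts on."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Random mutation: each trait flips with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                # selection keeps the fittest half
    population = survivors + [mutate(g) for g in survivors]

print(generation, population[0])
```

The random mutations have no knowledge of the target; only selection, via the fitness function, does. The population nevertheless converges, which is the retention-of-favorable-variations mechanism described above.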
The same applies to all of humanity’s creations, made in its own image. This poses a fundamental problem in terms of the methods used to design a product or service:
In addition, for a natural system to be able to undertake a sequence of coordinated actions, and thus a strategy or approach to follow a trajectory and then reach a desired end state, it must have a program (ordered set of instructions). It must also have a model of the relationships between the system and its objective. Finally, it must have the means and methods to be followed to achieve it and have a map of the types of disturbances to be controlled, through ARA capability:
The above elements are criteria to be taken into account when designing products, services or even complex systems. Indeed, it is not artificial intelligence features that are of key importance, but rather the collective intelligence that must emerge from a society or network of agents. This is first and foremost based on the agents’ adaptive and evolving capacities. As mentioned before, the main advantage in developing and controlling a complex system does not come only from the so-called “intelligence” of the individual agents or people involved, but also from the social intelligence of the team, which is able to carry out specific tasks through greater empathy and listening.
In fact, as observed in numerous artificial intelligence-based applications, the mere use of algorithms leads to dead ends. Social transformation is based on new foundations and paradigms whose rationalized expression takes the form of a new modeling of technologies, based on the dynamic representation of systems and on the nature and culture of the people who use them.