3
Engineering and Complexity Theory: A Field Design Approach

This chapter revisits the concepts introduced earlier in the domains of management and organization. We describe a number of cases and applications with a view to facilitating the art of concretely applying complexity concepts.

3.1. Design approach for a complex system

This first section focuses on the design level, which should always be the initial concern for practitioners.

3.1.1. Methodological elements for the design of a complex system

As a generic example, a company as a whole can be considered a complex system, as it interacts with its suppliers, clients, the economic and financial context, shareholders and the needs of society. The principles developed in the previous chapters apply directly; nevertheless, attention will have to be paid to the following points and basic rules:

  • – In order to make significant progress, transdisciplinary approaches must be implemented. We should not rule out transposing solutions that already exist in other fields such as the life sciences or the economic and social sciences.
  • – The notion of unpredictable behavior must be taken into account and integrated into the system’s ability to react, which implies a different understanding of the issue of adaptation. Here, the notion of postdiction will be used as much as possible. Prediction, for its part, will be replaced by the detection of “low noise” (weak signals) – because minor events can have a great influence and impact on the evolution of the system – and by the intuitive anticipation of future events.
  • – Considering the system’s interactions with its environment, their identification and control are essential. This requires a global approach, and the dynamics of the system then become a priority, much more than the realization of any operational function in a stable or laminar context.
  • – The problems to be solved are non-decomposable and there is no need for complicated system modeling. On the other hand, an analysis of aggregate factors distinct from the detailed system variables (representative of dynamic behavior and related to organization or configuration) is required; it will be based on common-sense rules (of the CYC type). This analysis will use simplified algorithms: the notion of trend, together with approximate and rapid estimates, is a key factor for success.
  • – In any complex system, decision-making is difficult because it requires “real” expertise and multi-criteria analysis. The action plan must therefore be clear and agreed by all stakeholders, which implies that a validation system and an effective, efficient cooperative approach be put in place.
  • – Risk management is again an essential element of success. It will be based on the uncertainties affecting the entire system, whether due to the fact that:
    • - we are in an inductive and non-deductive environment;
    • - emotional and impulsive situations prevail over calm and reason; or
    • - the sensitivities of the leaders are oriented more toward the defensive than the offensive, and social and societal aspects may or may not take precedence over economic or financial ones.
  • – The company’s organization must be based on criteria of autonomy, modularity and adaptability, as encountered in network theory and polarized crowds [BAI 19].
  • – The evolution of a complex system, as we have already seen, is always done in three stages as follows:
    • - a stage of disorder that is caused by a disruption, singularity or disaster that is sometimes random and results in deterministic chaos. At this stage, it is necessary to identify the amplifying factors of the system;
    • - a combination and self-organization step bringing the system into a state of equilibrium or configuration;
    • - a stage of development, adaptation and continuous evolution.
  • – Creation and innovation are major factors in generating new value, new wealth and new products. They are also based on chaos (in terms of thinking) and must be able to express themselves. Hence, the important place that must be given to personal initiatives, while containing them within a pre-defined managerial framework.
  • – Tactically, and taking into account the risks mentioned, the approach is conducted in a global manner, but the implementation will be partial and then gradually extended (using the TBRS motto: Think Big, Realize Small). At each step, we will proceed to validation and correction phases to find ourselves on a good “trajectory” (as a reminder: a complex system is not controllable a priori!).

If the design of a complex system must cover these many criteria, constraints and cautious approaches, it is because we are in a world of unpredictability, uncertainty and risk that we cannot directly control. We experience disruptions and must deal with chaotic systems by adapting as well and as reactively as possible, and by imagining solutions that conventional approaches cannot provide.

3.1.2. Example: how can we propose a “customized product”?

Currently, in economics and industry, we are faced with two problems related to “mass customization”:

  • – customers demand increasingly customized products (up to personalization). Nevertheless, manufacturers are reluctant because specific products cannot be easily automated and require heavy intervention from the design office;
  • – in the context of the Internet, with business-on-demand strategies and the relevant market opportunities, more and more people are demanding customized solutions. Again, this involves dealing with a large number of requests for specific products within a given time frame. Moreover, this approach is limited by two factors: the cost (or price) of the product, and the volume and weight of the product to be delivered.

One of the answers to these problems is called “speed to market” and, in both cases, the competitiveness criterion put forward is that of product or process flexibility. At first sight, the second criterion seems to be the best mastered and often makes use of common sense. The first criterion is more “cumbersome” to implement, both conceptually and in terms of investments.

The question is how to respond to these on-demand design and configuration requests in the shortest possible time. Product development time and the time it takes to obtain these products through their manufacture and delivery must be considerably reduced. The objective is not to achieve reductions of 5 to 10% but of 50% or more, to remain globally competitive. In the automotive and aviation industries, interesting approaches and results have been achieved. Without precisely describing the strategic approaches of manufacturers such as Renault and Boeing or Airbus, we can nevertheless highlight key elements:

  • – design engineering cannot be carried out by isolated groups of specialists whose organization is too rigid, compartmentalized or cut off from the world. Hence, the Special Research and Development Centers set up for this purpose;
  • – the product specification must be cost-effective, designed to ensure a high level of quality with high volumes and finally modular and scalable, to minimize the cost of version changes (maintenance and upgrading);
  • – process engineering must be integrated into a global concept of a demand-driven company (DFT – Demand Flow Technology [COS 96]);
  • – once the product is launched, the way in which documentation (or information on the product, its use and maintenance) and technical changes (EC – Engineering Changes) are made available, user-friendly and simple is an important element of success. Indeed, customers’ advice and claims are of key importance, either for correcting a system or as information to prepare for the customer’s future needs.

3.2. Applications and solutions

As part of the design and development of mass customization, several approaches are analyzed; they mark a rapid evolution in know-how. Describing them gives us a better understanding of where we are heading. Regardless of the cost and the size of the final products or services, the following cases will be discussed in turn:

  • – the design of specific products on demand;
  • – the development and assembly of products on demand from standard components;
  • – the adaptation of complete products, generic configurations on demand;
  • – the auto-configuration of products during use;
  • – designing self-propagating computers (see case 5 in the following).

3.2.1. Case 1: current approaches based on “design on demand”

These approaches are based on an essential element: technical data management (TDM). These data are managed using appropriate tools, such as IBM’s PDM (Product Data Management) or ThinkTeam from Think3, coupled with 2D/3D computer-aided design (CAD) tools such as ThinkDesign from Think3 or CATIA from Dassault Systèmes (which offers a global solution, PLM – Product Lifecycle Management). These approaches provide manufacturers with complete and efficient solutions for on-demand design. In a simplified way, and depending on a specification or customer requirements, they are able to:

  • – design a product using as many existing components as possible, combining them with as few new ones as possible, in order to minimize delays and costs;
  • – configure a new product from a pre-tested maximum configuration, by performing a reconfiguration or configuration degradation followed by a minimum test;
  • – use a machine memory, making configuration adaptations based on assemblies of FRUs (Field Replaceable Units), off-the-shelf by-products or new components.

Once the products have been defined, they can be validated based on business expertise [MAS 95b] and simulations. This makes it possible to detect impossibilities of adjustment or manufacture, and to eliminate functional or structural misinterpretations, unnecessary or undesirable modifications, co-requisite and prerequisite problems, etc. It is then possible to automatically generate technical data, plans, routings and bills of materials (FBM – Field Bill of Materials; FFBM – Field Feature Bill of Materials) for the new product and associated processes, with minimal cost and risk. The gains obtained with such automated processes are about 50–75% in time and money (this is in fact “avoidance”). In this case, we carry out “intrinsic differentiation in design”.
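To make the idea of configuring a new product by degrading a pre-tested maximal configuration more concrete, here is a minimal sketch in Python. The device names, quantities and the single validation rule are hypothetical illustrations, not the behavior of any actual PDM or PLM tool.

```python
# Minimal sketch: "design on demand" by degrading a pre-tested maximal
# configuration. Device names and capacities are hypothetical.

MAX_CONFIG = {
    "cpu_cards": 8,
    "memory_gb": 512,
    "io_channels": 16,
    "crypto_coprocessor": 1,
}

def degrade_configuration(max_config, customer_spec):
    """Build a customer configuration by removing or reducing devices of the
    maximal configuration, never by adding untested ones."""
    config = {}
    for device, max_qty in max_config.items():
        wanted = customer_spec.get(device, 0)
        if wanted > max_qty:
            raise ValueError(f"{device}: {wanted} exceeds the tested maximum {max_qty}")
        config[device] = wanted
    return config

# A customer order below the tested maximum only requires a minimal final test.
order = {"cpu_cards": 4, "memory_gb": 256, "io_channels": 8}
print(degrade_configuration(MAX_CONFIG, order))
```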

3.2.2. Case 2: “design by assembly according to demand” approach

This approach has been in place for two decades. During the design and manufacturing phases, it consists of assembling off-the-shelf products. This is of course an image, but it illustrates the fact that we will focus mainly on the processes, applied to a standard process that can be found in all fields of application. In order to remain competitive, the Quantum Leap approaches promoted by J.R. Costanza [COS 96] have improved the situation in a number of companies. This approach is intended to deliver customized products as quickly as possible (speed to market). It is based on a few essential points relating to the manufacturing process:

  • – manufacturing is demand-driven: the flow of products is subject to the timing of client orders and not to prior sequencing;
  • – production orders are defined on a daily basis, over a very short period of time to adjust as closely as possible to demand;
  • – financial management is modified to adapt to the value-added chain and not to cost and traceability monitoring;
  • – concurrent engineering is carried out simultaneously on the design of products and processes by a cohesive and homogeneous team;
  • – the staff is stimulated and adheres to the company’s culture, aiming for perfection in the work accomplished.

Based on what we have just seen, priority is given to work carried out to a high standard and to the implementation of concepts such as TQC (Total Quality Control) within a global approach.

However, at the R&D level, it is possible to highlight some concepts related to self-organized systems. To achieve this, the complexity factors of the processes are limited in breadth and depth by modifying the design of the products. One way to do this is to have an “off-the-shelf” bank of standard components or sub-assemblies and to design the final product by assembly, according to the customer’s specifications or model. This is intended to limit efforts at the level of scale, classification and the design of associated processes.

3.2.2.1. Classifications

Traditional multi-level classifications are replaced as much as possible by flat, single-level FBMs. Indeed, in conventional systems, the product design is functional: basic parts and components are considered and grouped into sub-assemblies, then assemblies and so on. In our case, sub-assemblies and components are considered as independent entities, monitored, purchased and controlled as full-fledged entities directly from suppliers or parts manufacturers. This approach amounts to dismantling the classification and decoupling production operations. Thus, components, parts and assemblies are managed directly according to demand, in linear production lines and without the “upstream” impacts and fits and starts associated with the dynamics of multi-level nomenclatures. In particular:

  • – systems engineering with a simplified classification is reduced to a minimum: the sequencing and the removal of TQC elements are limited to operational sectors (we are no longer talking about picking, kitting or high-level sub-assemblies but about components or FRUs – Field Replaceable Units). The notion of “in-process” production is eliminated, which considerably reduces the weight of MRPII tools that had become more complex in the meantime;
  • – the concept of an FRU has long been established in avionics, automotive and computer applications, to provide simplified option management and maintenance in complex systems. These replacement components – or options – have their own classification, but their design and production systems are independent. Indeed, they are considered as purchased parts or components, and their production, test and logistics times are separated from those related to the final product.

The clarification of the classification and the removal of multi-level bills of materials and multiple-use classifications are intended to simplify processes. A supply or quality problem is managed by the procurement department and is not directly integrated into the central process of the final product, because it results in a replenishment order under a well-established “client–supplier” contract. This reduces the costs and delays involved in dealing with problems.
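As an illustration of the difference between a multi-level classification and the flat, single-level FBM described above, the following sketch explodes a small multi-level bill of materials into a flat list of directly purchasable components. Part names and quantities are invented for the example.

```python
# Minimal sketch contrasting a multi-level bill of materials with the flat,
# single-level FBM described above. Part numbers and quantities are invented.

MULTI_LEVEL_BOM = {                     # product -> list of (child, quantity)
    "server": [("cpu_assembly", 2), ("chassis", 1)],
    "cpu_assembly": [("cpu_chip", 1), ("heatsink", 1)],
    "chassis": [("frame", 1), ("fan", 4)],
}

def flatten(bom, item, qty=1, flat=None):
    """Explode a multi-level BOM into a single-level list of purchasable
    components (FRUs), which can then be ordered directly from suppliers."""
    if flat is None:
        flat = {}
    children = bom.get(item)
    if not children:                    # leaf: a component or FRU
        flat[item] = flat.get(item, 0) + qty
        return flat
    for child, child_qty in children:
        flatten(bom, child, qty * child_qty, flat)
    return flat

print(flatten(MULTI_LEVEL_BOM, "server"))
# {'cpu_chip': 2, 'heatsink': 2, 'frame': 1, 'fan': 4}
```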

3.2.2.2. Technical changes

When using the final product, some problems may occur. They concern its functional aspect, i.e. malfunction, as well as improvements in performance or use. In these cases, two modes of product evolution are introduced. In a simplified way:

  • – temporary modifications or corrections intended to provide a functional solution, partial or not, to a problem can be applied immediately (a “fix”). In this case, the modification will be more formally incorporated at the next release or upgrade of the product;
  • – non-functional improvements are planned and grouped with other technical changes;
  • – in the long term, all modifications are integrated and monitored in a well-identified technical change (EC – Engineering Change) and launched into production with the engineering services, using an ECO (Engineering Change Order).

This approach simplifies the technical evolution of complex systems by limiting the number of change implementations, producing, testing and planning them as if they were an independent production process.

3.2.2.3. Consequence: decoupling and process division

The approach described here concerns “differentiation in product assembly development”. Here, the development process of a product is similar to that of a manufacturing process and constitutes a continuous flow. However, the synchronization and sequencing of tasks in continuous flow processes are problematic on another level: in such workshops, these phenomena depend on SIC (sensitivity to initial conditions), and nonlinearities and discontinuities will appear. The principle is to design the product in such a way that it is modular – or can be modularized according to functions and options – and can be assembled from standard components. The decoupling of the process into independent sectors, or independent and communicating autonomous entities, makes it possible to structure the production system into profit centers (operationally and financially) organized in a network. Apart from the fact that the management of such systems becomes less complex than for traditional systems, the system is predisposed to the emergence of stable orders and operating states, for greater reactivity.
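The decoupling into autonomous, communicating entities can be pictured with the following sketch, in which each production cell holds its own work queue and reacts only to the demand it receives, with no central multi-level schedule. The cell names and the single processing step are illustrative assumptions.

```python
# Minimal sketch of decoupling a flow into autonomous, demand-driven cells.
# Each cell only reacts to the order it receives ("pull"), with no central
# multi-level schedule. Cell names are illustrative.

from collections import deque

class Cell:
    """An autonomous production cell: a work queue and a simple process step."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def receive(self, order):
        self.queue.append(order)

    def work(self):
        if self.queue:
            order = self.queue.popleft()
            return f"{self.name} completed {order}"
        return f"{self.name} idle"

# A network of decoupled cells; demand propagates directly to each of them.
cells = {name: Cell(name) for name in ("machining", "electronics", "assembly")}

def place_order(order_id):
    for cell in cells.values():
        cell.receive(order_id)

place_order("order-42")
for cell in cells.values():
    print(cell.work())
```

The point of such a structure is that a local problem (a late supply, a quality issue) stays within its cell and does not propagate upstream through a multi-level nomenclature.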

3.2.3. Case 3: product reconfiguration and on-demand adaptation

Product reconfiguration is a solution to mass customization and to the adaptation of products to a given problem. The examples we will consider come from the information processing and microelectronics industry. They are the result of real experiences and cases over several decades and, although they may have been either successful or doomed to failure, they certainly constitute a valuable knowledge base for the design and development of new products, regardless of the fields of application considered. The principle is to design generic systems containing a sufficient number of pre-assembled and tested devices, then to customize them – adding the value at the end, as late as possible.

3.2.3.1. Circuit redundancy

The first example concerns the redundancy of circuits to deal with failures and improve the reliability of an electronic system (consisting of all circuits). This point will not be developed, as it is relatively well covered in reliability manuals. However, from experience – having participated in the development of high-powered, top-of-the-range mainframe computers in Poughkeepsie (NY, USA) – the great difficulty lies in the development of models to make reliability forecasts over time, to optimize the number of redundant circuits and to determine how to activate them dynamically and in a timely manner. Thus, it is currently possible to ensure the proper functioning of a computer, without functional failure (fault tolerance) and with remote maintenance, for 10 years, thanks to hardware or software reconfiguration. This reconfiguration is deterministic and follows specific rules designed to reduce the risks and uncertainty associated with random disturbances. In light of experience, we will say that this reconfiguration is of the “meso-granular” type.

3.2.3.2. Mass customization of computers

The second example concerns the mass customization of computers under cost and time constraints. The initial principle is simple because it is based on a widespread economic observation nowadays: net income is related to the service provided by the system and not to the weight of the hardware included in the product. In this case, the principle is once again simple: everything is based on the assembly of a “machine memo”, i.e. a computer whose configuration corresponds to an average demand in a given segment. This “machine” is assembled and tested to its final stage, long before its assignment to the customer is made. In general, several types of machines will be launched in production, with given capacities and performances, to limit the difficulties of subsequent assignment or reassignment. When a specific request arises, we can:

  • – either remove a device (part of the hardware or components) or deactivate it with manual or software operation;
  • – or complete the machine configuration by “mounting” an additional device or option or by downloading an additional program to adapt it to the client’s needs.

In both cases, configuration changes are calculated, planned and managed by a central control system. This is a reconfiguration of the “macro-granular” type.
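A minimal sketch of this “macro-granular” reconfiguration follows: a fully assembled and tested machine whose devices are merely activated or deactivated to match the order. The device names and quantities are hypothetical.

```python
# Minimal sketch of "macro-granular" reconfiguration: a pre-built and tested
# machine whose devices are activated or deactivated by software to match a
# customer order. Device names and quantities are hypothetical.

class MachineMemo:
    """A pre-assembled machine; customization only toggles installed devices."""
    def __init__(self, installed):
        self.installed = dict(installed)         # device -> physical quantity
        self.active = {d: 0 for d in installed}  # nothing enabled yet

    def configure(self, order):
        for device, qty in order.items():
            if qty > self.installed.get(device, 0):
                raise ValueError(f"{device}: not enough installed capacity")
            self.active[device] = qty            # software activation, no rework
        return self.active

machine = MachineMemo({"processors": 16, "memory_gb": 1024, "crypto_units": 2})
print(machine.configure({"processors": 10, "memory_gb": 512}))
```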

3.2.3.3. The design of reconfigurable computers

Computers are built around microprocessors. These can be general purpose (multipurpose mainframes): IBM develops its own integrated circuits, sometimes jointly with other manufacturers (as was the case with the PowerPC, developed with Motorola), as does Intel with its microprocessors. These microprocessors are very economical, integrating more and more components (according to Moore’s law), but the difficulty of their implementation and their performance remain linked to the operating systems.

However, in order to perform very specific tasks (vector calculation, security, cryptography, pattern recognition, etc.), coprocessors are used that will do the work 10 to 100 times faster than a general-purpose processor. These particular circuits are ASICs (Application-Specific Integrated Circuits). Given the lower volumes produced, their costs are higher, and the difficulty here also lies in their integration into larger electronic modules (TCM – Thermal Conduction Modules, Air Controlled Modules, etc.).

All this leads to the design of complex electronic systems due to the diversity of components, their redundancy and the numerous connections between them. But how can we combine low cost, flexibility or versatility and speed? Two approaches are adopted:

  • – the first is to manage the system’s resources “intelligently” to adjust its functional capabilities, bandwidths and performance. In IBM’s zSeries computers, self-optimization and self-correction functions were introduced to automatically perform resource allocations and direct them to priority tasks. The reconfiguration is carried out using a software module called IRD (Intelligent Resource Director). In addition, thanks to another module called “Sysplex Distributor”, it becomes possible to balance computing loads across the network;
  • – the second involves using configurable logic circuits called FPGA (Field-Programmable Gate Arrays); they are fast and inexpensive high-density components. The objective is to ensure precise functions based on a set of replicated and pre-wired logic blocks. The connections between these blocks are modified by software. This almost instantaneous reconfiguration is dynamically modified during its use, i.e. according to the inputs or the computing environment.

The basic circuits contained in the logic blocks may have a more or less fine granularity, but, proportionally speaking, the technology remains of the “microscopic” or “mesoscopic” type. With FPGA technologies, specific functions of a microprocessor, or an assembly of functions, can easily be associated with the logic blocks to obtain a more complex processing unit.
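The principle of a software-reconfigurable logic block can be illustrated with a deliberately reduced model: a block behaves as a small lookup table whose contents are rewritten to change the implemented function without touching the wiring. This is a conceptual sketch, not a description of any vendor’s actual FPGA bitstream or toolchain.

```python
# Conceptual sketch of an FPGA-style logic block: a 2-input lookup table (LUT)
# whose contents are rewritten by software, changing the implemented function
# without changing the wiring.

def make_lut(truth_table):
    """Build a 2-input logic block from a 4-entry truth table."""
    def block(a, b):
        return truth_table[(a << 1) | b]
    return block

AND_TABLE = [0, 0, 0, 1]     # outputs for inputs 00, 01, 10, 11
XOR_TABLE = [0, 1, 1, 0]

block = make_lut(AND_TABLE)          # initial configuration
print(block(1, 1))                   # 1

block = make_lut(XOR_TABLE)          # "reconfiguration": same block, new table
print(block(1, 1))                   # 0
```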

This makes it possible to combine universality and performance but, knowing that “nothing is free”, simplexity in terms of hardware is replaced by significant complexity in terms of programming, i.e. software. In 1984, our team, interested in application parallelization, supervised a thesis on OCCAM [PAU 85] for low granularity, and on the structure of parallelizable algorithms [MAS 91] with microprocessor assemblies, for high granularity. The objective was to use the computing power available in a plant to better solve production management problems (which, at the time, had not yet been described as complex!). Of course, compared to the work carried out more recently by the RESO project at the Ecole Normale Supérieure in Lyon, France, or by GRID computing, now used on a large scale by IBM and other large companies for, for example, the study of protein folding and proteomics, these results seem derisory; however, they contributed to a better understanding of the problems associated with automatic circuit configuration. This pioneering work, carried out with the former IBM Scientific Center in Paris [HER 86] and the CNUSC in Montpellier, was not followed up, given the compatibility problems encountered and the unavailability of industrially reliable software. Between theory and practice, many years are, rightly, necessary.

Before moving to new paradigms, we must first try to improve what exists, optimize it and finally solve technological problems, whether they are hardware, software or organizational in nature. In the case of configurable circuits, several points must be resolved beforehand. For example:

  • – applications must first be made “parallelizable”, which is a preparatory and organizational task;
  • – many programs still require large external memory to operate with configurable circuits, as data transfer between circuits and memories slows down the overall computation speed and consumes energy, not to mention computer security issues;
  • – for a long time, computers with dynamic instruction sets [WIR 95] have made it possible to overcome shortcomings in the performance of a function, but this approach is based on the activation of circuits, based on pre-programmed and stored configurations;
  • – the switch from one configuration to another must be possible in one cycle time, without deleting partially processed data, hence the integration of resources and means at the base circuit level that will give it autonomy.

Gradually, in terms of design and architecture, the boundary between programmable and configurable processors will blur. This will allow generic or specialized tasks to be carried out by pooling available resources. In the context of the Internet, for example, which is only an extension of this concept, this has already been done with IBM Grid Computing since 2002, to solve the major problems of our society.

3.2.4. Case 4: product auto-configuration and adaptation for use

Prerequisites

We wish to consider here the design of a distributed autonomous system: for example, a network of microprocessors for scientific computing, or relocation as part of the electrical wiring of computers. The question is: “how can we organize logistics and task assignment in an open e-business system?” The difficulty comes mainly from interactions and diffuse feedback loops. We will try to apply the concepts developed in this chapter. The initial methodology can be supplemented by the following approach, based on dynamics and unpredictability:

  • – Rather than focusing on the single overall objective we want to achieve, we need to define a set of overall objectives and outcomes that the system is likely to achieve. This is important because, as mentioned above, controlling the attractor to which the system converges will be difficult!
  • – Since it is impossible to control the system a priori, it is impossible to set the initial and optimal values of the parameters. Indeed, the system is neither decomposable nor reversible. However, monitoring the evolution of the system is a key factor. It is important to detect whether it diverges or whether it is “contained” within certain limits, and to try, in some cases only, to bring it back into a field of possible and desired solutions. The implementation of sensors and measurement indicators is therefore important because it makes it possible to collect information on the state of the system, its environment, its positioning in relation to the various objectives and its trajectory deviations. These are of course local values, taken in real time and concerning limited actions. A synthesis and aggregation work is then necessary to find the right hyperplane (which describes the situation, the global evolution of the system and its trends).
  • – It is now appropriate to consider an action plan (strategic or tactical) to make the system as flexible and adaptable as possible, i.e. to take into account different possible options and thus improve certain performances or criteria. As we can see, the notion of flexibility is a priority: we try to adapt to the system, to guide it and make it evolve to achieve an overall objective rather than to determine, in a static way, the operating framework, a priori and in a rigid way. Thus, the system is allowed a great freedom of action and change, which is essential since it is made up of autonomous agents and flexibility is intrinsic to it.
  • – During the design phases of the system, the fundamental concepts of interaction and feedback will be addressed. They make it possible to accentuate or reduce certain influences, to amplify or inhibit actions, thus directly influencing the behavior of the elements of a neighborhood and its conditions of stability. In this way, it is possible to modify the price of transactions, taking into account the options chosen and the importance of their relationships. Thus, we will have to maintain or eliminate certain interactions, and to modulate them through weighting factors. It will also be possible, within the framework of programmable network theory, to choose a given K-connectivity in order to play on the diversity, stability or flexibility of the system, and to limit or not the number of attractors, i.e. the number of emerging orders (a minimal sketch of such a network is given after this list).
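As a rough illustration of the last point, the following sketch builds a random Boolean network with connectivity K and measures the length of the attractor reached from a random initial state; varying K (or the network size) changes the number and length of these attractors. All parameters and the rule tables are illustrative assumptions.

```python
# Minimal sketch of a random Boolean network with connectivity K: varying K
# changes the number and size of the attractors (the emerging orders).

import random

def random_boolean_network(n=8, k=2, seed=1):
    random.seed(seed)
    inputs = [random.sample(range(n), k) for _ in range(n)]       # K inputs per node
    tables = [[random.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        return tuple(
            tables[i][sum(state[j] << p for p, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def find_attractor(step, state):
    """Iterate until a state repeats; the repeated part is the attractor cycle."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    return len(seen) - seen[state]      # length of the attractor cycle

step = random_boolean_network(n=8, k=2)
initial = tuple(random.randint(0, 1) for _ in range(8))
print("attractor cycle length:", find_attractor(step, initial))
```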

When the decision-maker is confronted with a complex system, it is essential to use modeling and simulation to infer its behavioral trends or validate options. Moreover, the overall behavior of the system can only be approximated. Indeed:

  • – too much detail and data leads to “noise”, and this makes it difficult to extract weak or significant signals;
  • – complex phenomena sometimes generate deterministic chaos, and the accuracy of digital computers is insufficient to represent their evolution in a faithful and reliable way. We will then be satisfied with the identification of typical behaviors;
  • – diffuse interactions and feedback loops make the system unpredictable and non-calculable beyond a very limited time horizon;
  • – the definition of the values of certain parameters, for the reasons mentioned above, requires the use of “reformulative” techniques such as genetic algorithms (a minimal sketch follows this list).
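The following sketch shows the kind of genetic algorithm alluded to in the last point: candidate parameter vectors are selected, recombined and mutated against a scoring function, since the “right” values cannot be fixed a priori. The population size, mutation scale and target objective are invented for the example.

```python
# Minimal sketch of a genetic algorithm used to tune parameters that cannot be
# set a priori. All numerical choices here are illustrative.

import random

def evolve(score, n_params=4, pop_size=20, generations=50, seed=0):
    random.seed(seed)
    population = [[random.uniform(0, 1) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)
            child = a[:cut] + b[cut:]                   # crossover
            i = random.randrange(n_params)
            child[i] += random.gauss(0, 0.1)            # mutation
            children.append(child)
        population = parents + children
    return max(population, key=score)

# Illustrative objective: parameters should approach a hypothetical target.
target = [0.2, 0.8, 0.5, 0.3]
score = lambda p: -sum((x - t) ** 2 for x, t in zip(p, target))
print(evolve(score))
```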

To counter the “global dynamic” associated with programmable networks, it is important to draw the attention of specialists to the fact that we can only think in terms of improvements and not in terms of optimization. However, the processes of continuous improvement of a process or of the behavior of a system come up against the fact that the notion of “dynamics” can be considered as contradictory to that of “continuous evolution”, which implies stability.

Similarly, learning involves collecting information about what is being done and observed. This takes time, especially since it is sometimes a matter of making test plans to determine which actions should be modified or promoted in light of contradictory, recoverable or recurring local objectives. The problem of dynamic adaptation of the control has not been solved to date.

3.2.5. Case 5: designing self-propagating computers

In the cases studied above, the automatic reconfiguration of circuits or processors is predetermined. We have not yet reached the stage of self-organization itself and this is what we propose to consider now by quoting the work of Daniel Mange, professor at the EPFL [MAN 04]. The basic mechanisms are those found in the living world, and the objective is to design circuits or programs capable of self-replication (principle of reproduction) and self-repair (e.g. DNA).

Self-replication is a commonplace operation in the living world, and its mechanisms are beginning to be better understood in the world of information systems. They are based on the work of von Neumann as early as the 1940s, followed by Langton’s work in the 1980s [LAN 84], the overview in Wolfram’s work [WOL 02] and the Little Thumb (from Charles Perrault’s Hop-o’-My-Thumb fairytale) algorithm developed in 2003 [MAN 04]. Self-replication is the use of physical, chemical or software processes to reproduce an object or program in accordance with a plan (program) and to multiply it in a given number of copies to form a community or collection of objects. This replication, which produces an identical copy of an object or program, is always done in two steps (see the sketch after this list):

  • – interpretation, which is the decoding of the construction rules (the plan or program) specific to the initial object;
  • – copying objects, which involves transferring information from the constructor (initial object) to the clone.
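The two steps can be sketched as follows: a constructor interprets its plan to build its parts, then copies the plan itself into the clone so that the clone can replicate in turn. The “plan” format and part names are invented for illustration.

```python
# Minimal sketch of two-step self-replication: interpretation of the plan,
# then copy of the plan into the clone. The plan format is invented.

class Constructor:
    def __init__(self, plan):
        self.plan = plan                      # the construction rules ("DNA")
        self.parts = [self._interpret(rule) for rule in plan]

    @staticmethod
    def _interpret(rule):
        # Step 1 - interpretation: decode one construction rule into parts.
        kind, qty = rule
        return [kind] * qty

    def replicate(self):
        # Step 2 - copy: transfer the plan to the clone, which rebuilds itself.
        return Constructor(list(self.plan))

parent = Constructor([("sensor", 2), ("actuator", 1)])
clone = parent.replicate()
print(clone.parts == parent.parts, clone is not parent)   # True True
```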

In this way, a more complete computer system can be created and its objects assembled to enable complex operations to be carried out. These techniques are already used on prototypes at IBM. It is therefore a “differentiation by use” since the computer program will generate its lines of code according to the needs of calculation or information processing.

In his research, Professor Langton developed a process capable of replicating a pattern in an eight-state cellular automaton. This is interesting insofar as an agent is able to reproduce or self-replicate according to a program, i.e. by a well-defined sequence of rules, as we observe in Nature.

In terms of exploiting the structures or assemblies thus obtained, we can refer to the theory of cellular automata in the field of computing. Indeed, when we want to exploit the properties of cellular automata, we find that the cells of an automaton are capable, based on simple elementary rules, of evolving their states and generating stable, periodic or chaotic forms (state configurations), depending on the nature of the interactions. The advantage of such an automaton is to show how a network of similar objects can converge to a given shape and generate a global order. The properties of self-replication are not explained here, but the underlying mechanisms of collective behavior can be better understood. The complexity that we have already addressed does not always translate into the generation of a complex global function.
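To show how simple local rules generate stable, periodic or chaotic forms, here is a one-dimensional, two-state cellular automaton (Wolfram numbering). It is a much simpler object than Langton’s eight-state loop and is used here only to illustrate rule-driven evolution of a configuration; the rule number and grid size are arbitrary choices.

```python
# Minimal sketch: a one-dimensional, two-state cellular automaton whose global
# pattern emerges from a purely local rule (Wolfram numbering).

def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1                      # a single seed cell
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```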

The purpose of these concepts is to develop autonomous systems that can operate without the direct presence of humans or a centralized control center, because the temporal or functional limits are important (spatial, polluted or dangerous environment, etc.). Thus, a robot will be able to reproduce itself from off-the-shelf components according to the resources and capacities required by a process (increase in production capacity) or according to its operating state: in the event of a failure, it is possible either to replace a failed component or to recreate a complete system. Similarly, in an information system, it is possible to generate computer programs from initial functional specifications and a library of standard software components.

Another application is related to the detection of periodic cycles or phenomena in an industrial, economic or financial process; to the extent that we are able to control the replication of objects, we know how to compare them directly and master the notions of differentiation (cellular differentiation, shape separation, signal separation, analysis of stock market variations, etc.). These applications have an immense scope in the fields of economics and interactive decision support systems.

3.3. Application: organization and management in companies

In this chapter, we have focused our attention on examples from industry. It should also be noted, as mentioned in the initial chapter of the book, that “everything begins with the Organization and ends with the Organization”. This means that the design of a product or service goes hand in hand with the design of organizations. When designing an organization, we start from an advantage, namely that the human resources involved in any process possess multiple skills, intelligence and autonomy. The aim here is to design approaches in the field of organizations to obtain adaptable and configurable systems. In this, we rely on the experiences encountered at IBM France during the various restructuring operations, as well as on the work of R.A. Thietart in the field of strategic management [THI 00].

There are many examples in the publications of large companies such as Renault, Unilever, Microsoft, Danone and IBM. These companies have made radical changes in their purpose, activity and structure to quickly become compatible with the new economic challenges. They have always been able to adapt, seize emerging opportunities and support them. Radical changes were often implemented quickly, but were accompanied by extensive preparatory work.

In other cases, changes are gradual and subject to continuous adjustment as they follow developments, such as technologies. However, in all cases, a strategy is made up of total or partial questioning, movements, readjustments or disruptions. On a practical level, and to better control their strategies, these companies have a strategic plan that runs over a period of 3 to 5 or 7 years. However, the analysis of the internal or external context remains difficult. It is necessary to carry out benchmarking, detection and identification of “low noise”. Indeed, in a world full of disruptions, singularities and subject to chaotic phenomena, the emergence of forms or new orders is barely perceptible and must be detected as quickly as possible. It is an essential competitive factor.

3.4. Main conclusions related to the first three chapters

Several coherent and rational techniques for the design and development of complex systems have been reviewed. To summarize, it is appropriate to make some comments and draw some useful lessons for the design and future evolution of complex systems. All the current approaches described make little or no use of the desired paradigms. There is therefore a significant potential scope for further improvement of our processes. Let us review a number of characteristics, transposed from the life sciences field, in order to take them into account in future development models.

Living beings evolve under the influence of genetic mutations from very diverse origins, and this is an internal process of evolution. It is worth mentioning Charles Darwin who stipulated that natural selection from random mutations is one of the main mechanisms of evolution (genetic modification). On the other hand, morphogenesis and physiological or behavioral experience, which have often been considered external to evolution, also have an influence on the genetic heritage of living beings due to the progressive heredity, in the genetic heritage, of the acquired traits (genetic assimilation). Thus, morphogenesis, physiology, behavior, mutations, etc. are an integral part of a living being’s evolutionary process and therefore of its ontogenesis (everything that contributes to its development). In addition, and as we will discuss later, these phenomena, by their very effects, provide organisms with a feedback mechanism to adapt to changing environmental conditions, while maintaining orderly structures and continuity in their lives.

The same applies to all of humanity’s creations, made in our own image. This poses a fundamental problem in terms of the methods used to design a product or service:

  1) It is common practice, for example, to use value analysis to design a product. This is acceptable insofar as we want to create and develop a product at the lowest cost, limited to its essential functions! But what about our genetic material, which only has 25,000 to 30,000 genes? The presence, role or functionality of the genes contained in the soma has not yet been fully studied, and it is likely, according to Grassé [GRA 94], that this will allow the creation or activation of new genes. Moreover, if the first living creature had only had a minimal genetic heritage for its survival, how would it have adapted, evolved and generated diversity? Since living organisms are self-adaptive and are not externally driven (they do not receive information from the outside), it is in terms of DNA, and therefore the internal program, that initiatives must be taken. Thus, in products designed and intended to be “intelligent”, reductionism and simplifying approaches have no reason to exist. It is therefore necessary to simplify the product, but not to eliminate apparently unnecessary or redundant functions.
  2) Most of the experts working on evolution theory believe that living organisms must have feedback between behavior, ontogenesis and evolution. When these various feedback loops are positive, they contribute to genetic assimilation. When they are negative, this is called stabilizing selection. A living organism needs both of these mechanisms. Thus, in any product design process, as well as at the product level, hierarchical structures associated with top-down decision trees do not allow information to flow in both directions. The result is systems that are inflexible and lacking in self-adaptivity.
  3) In any production system, we are involved in the processing and control of product/service flows. Here, we cannot ignore that throughput performance is closely linked to the modeling of crowd dynamics [MOT 18a]: the speed of the flow is inversely proportional to the density of products under transformation, and bottlenecks appear as saturation rises. Through simulation, the Advanced Technology Group (ATG) of IBM, based in Montpellier, found that the best production performance was reached with a saturation ratio lower than 0.8. In fact, with crowd dynamics theory, we can now state that the “friction” observed between products is of key importance. Then, to speed up a flow and reduce the risks of conflict, it is necessary to consider a networked production system as a cellular automaton and to apply the rule “faster is lower” (a faster cycle time requires a lower density of products), as the numerical sketch after this list illustrates.
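The “faster is lower” rule can be illustrated numerically with a deliberately simple speed–density relation (speed falling linearly as the line fills up): throughput collapses at high saturation. The linear model is an assumption chosen for the illustration, not the ATG simulation mentioned above.

```python
# Minimal numerical sketch of the "faster is lower" rule: with an assumed
# linear speed-density relation, throughput peaks well below full saturation.

def throughput(saturation, max_speed=1.0):
    """Flow = density x speed, with speed decreasing linearly with density."""
    speed = max_speed * (1.0 - saturation)
    return saturation * speed

for saturation in (0.2, 0.4, 0.5, 0.6, 0.8, 0.95):
    print(f"saturation {saturation:.2f} -> throughput {throughput(saturation):.3f}")
```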

In addition, for a natural system to be able to undertake a sequence of coordinated actions, and thus a strategy or approach to follow a trajectory and then reach a desired end state, it must have a program (ordered set of instructions). It must also have a model of the relationships between the system and its objective. Finally, it must have the means and methods to be followed to achieve it and have a map of the types of disturbances to be controlled, through ARA capability:

  • Adaptability. Complex systems with self-organization are similar to living beings. They are self-regulated cybernetic systems capable of maintaining their own stability (homeostasis). The immune system, like the genome, is self-regulated and able to maintain its homeostasis in the face of internal and external environmental pressures. In contrast, an externally designed and directed system cannot be stable; indeed, it is developed independently of its relations with the outside world and will be oriented towards a goal that satisfies particular interests, regardless of its effects on the biological or social systems concerned.
  • Reactivity. Living organisms have a very high capacity for adaptation. This ability is linked to the ability to improvise and imagine in order to meet the challenges and problems posed in their environment. Analysis of the behavior of insects and living beings in general shows that, thanks to their reasoning, very diverse strategies can be developed to adapt to unforeseen situations. These solutions concern morphogenesis, protection or adaptation elements (hair, fur, shells, camouflage coloring, modification of the life cycle, etc.), as well as the creation and development of new strategies (escape, association, cooperation, competition, etc.). All of them have in common the fact that they must be innovative, oriented towards the satisfaction of a “cost” objective or function, and quick to implement, especially when the survival of a being or species is at stake. This is partly why our brain is equipped with neural networks, i.e. classifiers (shape recognition) and highly efficient “reflex” decision-making systems, once learning has been achieved.
  • Anticipation. Natural organisms must turn to efficiency in order to survive. It is therefore normal that forecasting and anticipation are characteristics that complement the skills described above. For example, vegetation that adapts to its environment is able to accumulate water reserves to better withstand drought. Animals, at the time of migration, as winter approaches, have also made reserves to support long transhumance. Similarly, according to Pavlov’s experiments, animals are able to salivate when the time for their food approaches. Thus, prediction is an essential component of perception, which is the first step in self-regulation [GOL 94]. The prediction of events is possible thanks to the information, traces and histories recorded in the memory of individuals or agents, as well as thanks to those implicitly organized in the genome (by genetic assimilation). Thus, evolution depends on prediction because selection or adaptive strategies would not have the means to support an organ in its draft form whose usefulness is a priori very low or even zero.

The above elements are criteria to be taken into account when designing products, services or even complex systems. Indeed, it is not so much artificial intelligence features that are of key importance, but rather the collective intelligence that must emerge from a society or network of agents. This is first and foremost based on agents’ adaptive and evolving capacities. As mentioned before, the main advantage in developing and controlling a complex system does not come only from the so-called “intelligence” of each individual agent or person involved, but also from the social intelligence of the team, which is able to carry out specific tasks through greater empathy and listening.

In fact, as observed in numerous artificial intelligence-based applications, the mere use of algorithms leads to dead ends. Social transformation is based on new foundations and paradigms whose rationalized expression takes the form of a new modeling of technologies based on the dynamic representation of the systems and the nature and culture of the people who use them.
