The art of engineering complexity draws heavily on a well-known practice called business process engineering. It enriches the latter considerably, which in turn broadens its scope and applicability. This chapter focuses on the ways and means of doing so in order to obtain better control of complex systems.
This section focuses on process redesign and improvement, also known as BPR (Business Process Reengineering), an effective and efficient way to change paradigms in an organization. The objective is to change the quality and performance of an existing organization, knowing that its context is continuously evolving.
Although BPR is often associated with reducing or refocusing activities or other changes, this concept actually focuses on the notions of:
This approach often increases product quality or customer satisfaction by a factor of 10 or 100, while respecting the principles developed in this chapter. We will first look at BPR from a “conventional” viewpoint to highlight its essential characteristics. The technologies used in BPR are based on an analytical and methodical approach. Indeed, any profound modification of an organization aims to maximize the result but is never without risk: it is therefore a matter of reducing and controlling these risks. These technologies make it possible to review several key aspects and elements of an organization, such as:
On a practical level, we will endeavor to apply some simple rules of conduct to recognize:
Challenges therefore require constant communication with employees, the management line, and partners, at all levels. BPR is also about allocating the necessary resources and effort to the analysis and redesign activity in order to solve the transformation challenge.
However, taking complexity into account leads us to review the methodology somewhat. Indeed, the actions undertaken will have to take into account the causal factors of complexity. Depending on whether we want more or less reactivity or self-organizational skills, we will focus on interactions and communication (nature and importance of exchanges, etc.), and therefore on the very structure and architecture of the organization.
Process reengineering responds to a need to improve existing processes, with the intention of increasing both the quality and the performance of a process. In the context of this book, the complexity issue arises, and hence the question of controlling the system itself. For this reason, it is necessary to take approaches that are widely used in industry and adapt them to your own context; in other words, to use common sense. Whatever the methodology used and the goals pursued, the reengineering process involves a number of points that must be kept in mind.
Indeed, it is a common belief that, when faced with a problem, using advanced technology will solve everything. This concerns, for example, IT or robotics (technical or administrative). But these technologies only speed up a process, or solve a complicated algorithm more easily. We were once involved in a large aerospace environment where the dominant motto was to design a robot outright for the highly complex process at hand, despite that very complexity. It took an entire method-design workshop to overcome management’s fixation on a given robot and, thanks to the conceptual results obtained, show the feasibility of a series of different redesigns opening up entirely new engineering avenues with new value. As the saying goes, “garbage in, garbage out”: faster and faster is not a solution.
The implementation of new management rules and paradigm shifts involves a number of constraints and working methods that we will now describe. The methodology implemented, called RECOS (REengineering of COmplex Systems), consists of 10 steps. It includes a subset of steps specific to the traditional BPR approach, and also incorporates new concepts underlying complexity that are the subject of this book. One of the difficulties in presenting RECOS comes from the need to have previously established a common and appropriate theoretical framework for the method.
The RECOS method consists of 10 key steps, as follows.
It should be recalled here that time management is crucial. Indeed, complexity is a dynamic phenomenon, and changes over time are the most difficult to understand. As we have always been told, these phenomena are unpredictable, which forces us to react over very short horizons.
This coordination is carried out using meta-rules whose purpose is to allow the development of local patches, i.e. autonomous and coherent entities. The resulting core is fixed, in the sense that it constitutes a stable set of polyvalent entities. The purpose of this step is therefore to find the right level of globality.
If this is not possible, then efforts must be made to find the right, coherent level of subsidiarity. It is indeed important to maintain meaning, unity and coherence in each and every business process.
Any change should be global in scope and requires general mobilization. But the more we globalize, the greater the inertia, and change necessarily consumes time. The local and the global are inseparable: while the complex structure of a system is defined at the local level, the global imposes constraints and methods on the local level, and it is the local agents that bring about any order at all.
Thus, the success of such a BPR-based reengineering approach in complex environments will lead to implementing new paradigms, resulting mainly in a new organization that is even more effective, efficient, competitive, sustainable and profitable.
An important problem of the process remains to be addressed: while the models used to study and redesign a process are certainly very useful, they are inherently simplified and incomplete. In fact, when models aim for high completeness, they tend to generate more noise than relevant information. These models are therefore limited, and what counts, when studying the complexity of a system, is to explain its aims and its trends, in short, to predict the nature of certain behaviors and to set priorities. Therefore, when developing a model, attention should be paid to the following points:
This stage is intended to determine the domain of interaction, competitive advantages, the resources potential and their characteristics, etc. Specifically:
This point has already been discussed. As a reminder, simulation models make it possible to understand and apprehend a complex system. The important thing is to try to achieve a good balance of flows and a “good” use of internal and external resources. Specifically:
In the current state of science, and perhaps fortunately so, we cannot do much better than above!
The above-mentioned considerations are not intended to diminish the role and benefits of IT, but to highlight the purpose of IT tools and modeling techniques, and finally to introduce Information Systems concepts into our so-called complex organizations. In this section, we will address artificial intelligence specifically, since AI is here considered simply as an enabling technology.
Computer science first arose from computational needs, and was then aimed at solving operational research or scientific problems. Gradually, its tools and methods have been extended to all the areas of activity that affect us. Thus, the resulting information systems have become omnipresent; they now constitute a means of controlling operational systems and information flows, physical flows and workflows. Finally, they are at the root of changes in the organization, reshaping the structures, size and functioning of complex organizations.
The range of changes (or generations of innovation) brought about by information technologies extends over four orders:
These process improvement or redesign activities involve closely correlated risks and benefits [LAU 01]. Indeed, two deficiencies regularly plague conventional systems:
In a complex system, the prevailing logic is no longer the same as in a non-complex system: it is about running a community of agents. They interact, i.e. they exert a mutual influence on their close neighbors. The resulting influences are positive or negative, linear or nonlinear, and spawn complex behaviors such as chaos or SIC (Sensitivity to Initial Conditions). Thus, even a minor change introduced at the local level (e.g. the easing of a bottleneck at a given workstation) may have an unpredictable effect on the whole system, often known as the pumping phenomenon. By pumping we mean a resonance effect that amplifies and propagates anomalies along the supply chain of a manufacturing production line. This was demonstrated by simulation in the DAPS enterprise modeling framework [MAS 99, MAS 02]. Pumping becomes a critical seed of deviance in any “sensitive” dynamic system.
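The pumping effect described above can be illustrated with a minimal simulation sketch, not drawn from the DAPS model itself: each workstation re-orders in proportion to the demand it observes, and a one-off local perturbation is amplified as it propagates upstream. The stage count, the gain factor and the demand values are all illustrative assumptions.

```python
# Minimal sketch of the "pumping" (resonance) effect along a serial line:
# a small local blip in end demand is amplified stage by stage upstream.
# Parameters (4 stages, gain 1.3, baseline demand 10) are illustrative.
def simulate_pumping(stages=4, periods=30, gain=1.3, baseline=10.0):
    """Each stage orders gain * observed demand, corrected to keep the
    steady state at `baseline`; returns the demand series per stage."""
    demand = [baseline] * periods
    demand[5] += 1.0  # minor local change (e.g. easing a bottleneck)
    history = [demand]
    for _ in range(stages):
        # Deviation from the steady state is multiplied by `gain`.
        upstream = [gain * d - (gain - 1) * baseline for d in history[-1]]
        history.append(upstream)
    return history

history = simulate_pumping()
# Amplitude of the original +1 blip grows geometrically with the gain:
amplitudes = [max(h) - 10.0 for h in history]
```

Running this shows the deviation growing as roughly 1.0, 1.3, 1.69, … along the chain: a purely local, apparently beneficial change resonates through the whole line.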
Even though the following remark may sound trivial at this stage of the discussion, it should be stressed that the complete reconfiguration of a production system in the broad sense may produce no effect at all if the working methods are left unchanged. In such a situation, working conditions have changed, but not the links and relationships between employees, resulting in the superposition of two operational systems, one official and the other underlying: a dissonant mixture harboring as-yet-unknown failures.
In the mid-1980s, a new industrial management approach called OPT (Optimized Production Technology) made it possible to question a number of ideas on process improvement [GOL 84]. This technology also shows that it is not always necessary to use complicated Operational Research approaches to address the problems of continuous product flow in a complex production system; a few simple rules are sometimes sufficient. In the following, we will recall some of the most significant ones and see that they simply make it possible to better control the complexity of the systems at hand. Bearing in mind the principles developed in the book Le But [GOL 84], we can first see that the method is based on the management of bottlenecks and that it aims, by means of decoupling and simple capacity calculations, to avoid loops and ensure maximum fluidity of product flows. Second, the technique used is consistent with the Theory of Constraints (TOC), which has proved its worth in industry.
To achieve a fluid and “harmonious” system, we see that the detection of bottlenecks and the decoupling of the line into separate assemblies allow the system to be simplified and “manageable”, i.e. efficient and, under certain conditions, effective.
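The simple capacity calculation at the heart of this bottleneck management can be sketched as follows; the station names and hourly rates are illustrative assumptions, not figures from [GOL 84].

```python
# Hedged sketch of the simple capacity rule behind OPT/TOC: the bottleneck
# is the station with the lowest hourly capacity, and the throughput of
# the whole line is bounded by it, so every other station is subordinated
# to its pace rather than optimized locally.
def find_bottleneck(capacities):
    """capacities: dict station -> units/hour; returns (station, rate)."""
    return min(capacities.items(), key=lambda kv: kv[1])

# Illustrative line: four stations with different hourly capacities.
line = {"cutting": 120, "machining": 45, "assembly": 80, "packing": 150}
bottleneck, rate = find_bottleneck(line)
# The line's throughput equals the bottleneck rate (45 units/hour here);
# speeding up any non-bottleneck station only builds up inventory.
throughput = rate
```

This is why a few simple rules suffice: once the constraint is identified, decoupling the line around it keeps the flow fluid without any heavy optimization machinery.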
Increasing complexity is the defining characteristic of a complex dynamic system or organization. It is generated by the system’s internal dynamics: heterogeneity is born from homogeneity, and order emerges from chaos. According to Darwin [LEW 94], complexity resulted only from the mechanism of natural selection, but today this is no longer considered the only cause of emergence. On the contrary, it should be noted that, from generation to generation, or from selection to selection, such systems always move towards the frontier of chaos, by successive stacking or assembly of sub-assemblies, by increasing their capacities, or by adding new functions of an adaptive or co-evolutionary nature. Is this the hidden secret of the ongoing, unwavering innovation in our business systems?
The measurement of complexity should be based on the several approaches that we have already developed. It is worth recalling once again that in the field of complex systems, the sum of local optima is not equal to the overall optimum. In the context of organizational sciences, we know how to measure the intrinsic and behavioral complexity of a set of actors.
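One generic, information-theoretic way to quantify the behavioral complexity of a set of actors is the Shannon entropy of the distribution of their observed states or actions. This is a hedged, minimal sketch of such a proxy measure, not the book’s specific organizational metric; the observation labels are illustrative.

```python
import math

# Shannon entropy as a simple proxy for the behavioral complexity of a
# set of actors: zero when everyone behaves identically, log2(k) bits
# when k distinct behaviors are equally frequent.
def shannon_entropy(observations):
    counts = {}
    for o in observations:
        counts[o] = counts.get(o, 0) + 1
    n = len(observations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A rigid process (all actors do the same thing) has zero entropy;
# maximal behavioral diversity over 4 states gives 2 bits.
uniform = shannon_entropy(["a", "b", "c", "d"])
rigid = shannon_entropy(["a", "a", "a", "a"])
```

Note that such a measure is local by construction: summing entropies of sub-groups does not yield the entropy of the whole, consistent with the remark that the sum of local optima is not the overall optimum.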
As a reminder, we will limit ourselves to the three most common types of complexity and the corresponding measurement methods:
Much advice, experience and many recommendations have been shared in the domain of reengineering complex systems, and various approaches have been observed. In terms of diversity, which this chapter has discussed, and given that connectivity should remain low, an optimum level of diversity can be calculated from the Pareto optimum formula, which corresponds to an almost constant communication optimum. Its dimension is fractal (Zipf–Pareto–Mandelbrot). We can also cite [MAN 13] on controlling epidemics in complex systems.
In conclusion, and in the same way, innovation, which we may conveniently define as the expected result of a disruption followed by the emergence of a new order, corresponds to the same principle and can only be produced by small, loosely interconnected groups (i.e. the reduced nucleus).