Appendix

Measurements from the Theory of Constraints

The Theory of Constraints allows us to further simplify our management efforts by providing the fundamental set of measurements (Throughput Accounting) that tells us how our system as a whole is performing:

Throughput (T) is the rate at which the system produces units of the goal (through sales, in for-profit organizations). Throughput equals the sales revenue (S) minus the totally variable cost (TVC) paid out (often to suppliers) to produce the goods or services sold (T = S − TVC).

Inventory (I) is the money tied up in the system to be transformed later into sales. This money is, for the most part, in the form of what is generally understood as raw material. Inventory is valued at the cash outlay associated with its procurement; it does not include any allocation of overhead or fixed expense.

Operating Expense (OE) is the money the system spends/invests in generating units of the goal, such as rent, utilities, taxes, payroll, maintenance, advertising, and training, as well as investments in buildings, machines, etc.
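As a minimal illustration, the sketch below computes these measurements for a hypothetical month. The figures are invented, and the derived measures Net Profit (NP = T − OE) and Return on Investment (ROI = NP / I) are the standard Throughput Accounting derivations rather than something defined in this appendix:

```python
# Throughput Accounting measurements for a hypothetical month
# (all figures are invented for illustration).

sales_revenue = 500_000.0           # S: cash received from sales
totally_variable_cost = 180_000.0   # TVC: e.g., raw materials paid to suppliers
operating_expense = 250_000.0       # OE: rent, utilities, payroll, etc.
inventory = 120_000.0               # I: money tied up, valued at cash outlay

throughput = sales_revenue - totally_variable_cost  # T = S - TVC

# Derived measures, standard in the Throughput Accounting literature:
net_profit = throughput - operating_expense         # NP = T - OE
return_on_investment = net_profit / inventory       # ROI = NP / I

print(f"T   = {throughput:,.0f}")
print(f"NP  = {net_profit:,.0f}")
print(f"ROI = {return_on_investment:.1%}")
```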

Throughput accounting, unlike traditional accounting, recognizes that time is an important element in Throughput generation. Its measurements provide a meaningful report of cash in and cash out (no accounts receivable/payable are considered until they “materialize” as cash intake/outlay), and it supports a worldview based on consistently increasing performance as opposed to cost reduction. As such, it is completely interconnected with all other activities within an organization and provides an important support for decision making in every aspect of how the system operates.

Statistical Process Control: Control Charts and Reducing Variation

Control charts are a tool devised to measure and reduce the variation of processes. They were developed by Walter Shewhart in the 1920s as a result of his work at Bell Laboratories. The control chart offers a much fuller picture than the commonly used comparisons of data against specifications or average values of performance. Those comparisons only tell us whether performance is in or out of spec, above or below average; they tell us nothing about the process that produces these values.

At first glance, a control chart resembles a time series graph. In such a graph, of monthly sales for example, we plot the months of the year along the horizontal axis and the number of products sold along the vertical axis. However, because a time series graph only allows comparisons between single values, it does not give us sufficient information about the behavior of the process.

The control chart, which is in every respect a process behavior chart, instead puts this information into context by adding three horizontal lines. The central line acts as a reference against which we identify trends. The other two lines are the control limits: the upper and lower control limits, or natural process limits. These are calculated with the help of coefficients, using the average of the time series values and the moving ranges, i.e., the differences between contiguous individual values of the time series. They are based on the concept of “3 sigma,” sigma being a measure of the spread of data around an average value. (For further information on 3 sigma and control charts in general, see Don Wheeler’s unrivaled explanations in Understanding Statistical Process Control, SPC Press, 1992.)
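As a rough sketch of the calculation just described, assuming the standard individuals (XmR) chart found in Wheeler’s book and its conventional scaling constant 2.66 (which converts the average moving range into 3-sigma limits), the natural process limits can be computed like this:

```python
# Natural process limits for an individuals (XmR) chart: a minimal sketch.
# The constant 2.66 is the conventional coefficient that scales the
# average moving range into 3-sigma limits around the central line.

def process_limits(values):
    """Return (lower limit, central line, upper limit) for a series."""
    center = sum(values) / len(values)  # the central line (average)
    # Moving ranges: differences between contiguous individual values.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return center - 2.66 * avg_mr, center, center + 2.66 * avg_mr

# Example: monthly percentages of on-time shipments (invented data).
on_time = [70, 74, 68, 73, 71, 75, 69, 72, 74, 70]
lcl, center, ucl = process_limits(on_time)
print(f"LCL = {lcl:.1f}, central line = {center:.1f}, UCL = {ucl:.1f}")
```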

In Figure A.1, we can see a control chart that contains data regarding the percentage of on-time shipments made by a manufacturing company between January and May.

Figure A.1 A process that is in statistical control

This chart shows a process that is likely to be statistically predictable in its evolution (“in control”) because none of the points lies above the upper control limit or below the lower control limit. The importance of this graph is that, unless major changes occur in the execution of the shipment process, next month’s percentage of on-time shipments will fall, approximately, between 63 and 81 percent.

Let’s now look at a chart (Figure A.2) related to the accumulation of work in progress in the same company.

Figure A.2 A process that is not in statistical control

This figure shows a process that is not predictable in its evolution over time (“out of control”): two points lie above the upper control limit, and more than eight consecutive points lie below the central line (the average). (The rules for detecting out-of-control processes can be found in Wheeler, Understanding Statistical Process Control.)

What can we say about this process? Not much. We simply have no rational basis to predict how much work in progress we are going to have going forward.
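To make the detection rules mentioned above concrete, here is a minimal sketch that checks only the two signals named in the text (a point beyond the limits, and a run of eight or more consecutive points on one side of the central line); Wheeler’s book gives the complete set of rules:

```python
# Two common out-of-control signals, as a minimal sketch:
#   1. A point falls outside the natural process limits.
#   2. Eight or more consecutive points fall on the same side
#      of the central line.

def out_of_control(values, lcl, center, ucl, run_length=8):
    """Return True if the series shows either out-of-control signal."""
    # Signal 1: any point beyond the limits.
    if any(v > ucl or v < lcl for v in values):
        return True
    # Signal 2: a run of `run_length` points on one side of the center.
    run, side = 0, 0  # side: +1 above the central line, -1 below, 0 on it
    for v in values:
        s = (v > center) - (v < center)
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            return True
    return False

# Example with invented work-in-progress data:
wip = [12, 14, 13, 15, 16, 15, 17, 16, 18, 30]
print(out_of_control(wip, lcl=5.0, center=14.0, ucl=25.0))  # True: 30 > 25
```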

Control charts are the tools that enable us to take meaningful action. Depending on whether a process is in control or not, the actions we take will differ radically. It is the manager’s job to understand the kind of variation in the processes in order to take the appropriate actions to improve them.

We can use Deployment Flowcharts (DFC) to visualize the input-output network that defines our activities and to measure and improve the statistical predictability of the overall system by applying control charts to the main processes. Flowcharts allow us to identify the best points for gathering data and building control charts.

According to Shewhart, the variation of a process can be either within limits or outside these limits: “While every process displays variation, some processes display controlled variation, while others display uncontrolled variation.” Controlled variation is statistically stable and hence consistent over time. It is due to common causes, causes that are an intrinsic part of the process. Variation that is controlled makes the process predictable. Conversely, uncontrolled variation is not consistent over time. It is due to special causes, causes that are external to the process, and its evolution is not predictable.

Executives must absolutely understand these concepts and make them the cornerstone of their decisions.

Failing to identify the source of variation, special or common, leads to taking inappropriate actions on the system that may worsen the situation. Deming called that “tampering with the system.”

For Deming, the role of leadership is to pursue Quality by constantly reducing the sources of variation that undermine predictability, hence triggering what he called “The Chain Reaction”: Improve Quality, costs decrease, productivity improves, capture the market, stay in business, provide jobs and more jobs.

To recap:

  1. The data we need to analyze the system and support decision making must be presented in a suitable way. 

  2. The first problem to face in order to build continuous improvement in our organizations is to understand the kind of variation that is affecting our processes.

  3. The actions we take to improve our processes differ radically depending on the nature of the variation that affects them.

The Dilemma of Quantum Mechanics

In our chapter for Springer called “Managing Complexity in Organizations Through a Systemic Network of Projects” (see Bibliography), we indulged in a cultural divertissement with the purpose of exemplifying how a breakthrough can be generated by correctly framing conflicting positions, needs, goals, and assumptions. We picked one of the most revolutionary and paradigm-changing, seemingly unsolvable conflicts that arose at the beginning of the last century.

The “conflict” between particles and waves and the solution provided by Quantum Mechanics can be represented using the Conflict Cloud, as we have done in Figure A.3; QM is what we could call an “Injection” for the dilemma of a “particle worldview” versus a “wave worldview.”

Figure A.3 The “Quantum Mechanics conflict”
