Chapter 9
An Integrated Model for Trade-Off Analysis

Alexander D. MacCalman

Department of Systems Engineering, United States Military Academy, West Point, NY, United States

Gregory S. Parnell

Department of Industrial Engineering, University of Arkansas, Fayetteville, AR, United States

Sam Savage

School of Engineering, Stanford University, Stanford, CA, United States

 

Interactive Simulation Connects the Seat of the Intellect to the Seat of the Pants

(Sam Savage, Author of The Flaw of Averages)

9.1 Introduction

System engineers often use value modeling to capture a composite perspective of multiple stakeholders with conflicting objectives to help understand value trade-offs among several system alternatives. There are many uncertainties involved with designing a system, and we believe they must be considered in all system decisions throughout the life cycle. These uncertainties include stakeholder needs, technological maturity, adversary and competition actions, scenarios, costs, schedules, and many more. Uncertainty and risk analyses are often performed independently of the value model. As a result, our decisions become biased toward deterministic solutions, and system decision-makers may not understand the key uncertainties and risks. To eliminate this cognitive bias, we propose an approach that integrates uncertainty modeling with value and cost modeling in order to help understand value and risk while we analyze alternatives. Chapter 7 introduced value modeling, Chapter 4 introduced a number of cost modeling methods, and Chapter 3 introduced uncertainty modeling. In this chapter, we use these methods to model the uncertainties associated with both value and cost in order to demonstrate our integrated approach. By propagating the uncertainties of the independent system variables through the value and cost models, we can examine stochastic Pareto charts and cumulative distribution functions (cdfs) and identify dominant solutions. In addition, we can use tornado diagrams to identify which value measures and cost components explain the majority of the alternatives' value and cost variations. Figure 9.1 shows the many elements that contribute to performing a trade study and the key relationships between them; these concepts will be discussed in detail in this chapter.


Figure 9.1 Concept diagram for the integrated trade-off analysis

To demonstrate our approach, we use a notional illustrative example to explain the types of trade-offs and uncertainties system engineers typically face when designing a system. Our next section will provide a brief description of this example; Section 9.3 will introduce an influence diagram with nodes that represent the types of decisions, uncertainties, and values involved in a system trade-off study; we will explain each node in detail as it applies to the system decision and provide examples from our notional trade study. Section 9.4 will discuss other types of trade-off analysis that can be performed using our approach. Section 9.5 will discuss the types of Monte Carlo simulation tools available, and Section 9.6 will summarize the chapter.

9.2 Conceptual Design Example

Our conceptual design example is a defense system design problem that involves the development of new Infantry squad technologies that will enhance the squad's effectiveness (see Section 7.9.1). An Infantry squad is a nine-person organization that consists of a squad leader and two teams of four persons each. Each team has a team leader, a rifleman, an automatic rifleman, and a grenadier. The Infantry squad technologies enhancement problem provides an opportunity to invest in new systems that will increase the squad's capability to overmatch current and future adversaries in complex environments. The system is the collection of integrated technologies that consists of the soldier, sensors, weapons, exoskeletons, body armor, radios, unmanned aerial vehicles (UAV), and robots. We use the squad enhancement design example to highlight trade-offs across multiple types of costs, performance, schedule, risk, and scenario considerations. In addition, it allows for a wide variety of alternatives that are composed of different combinations of the six system technology components.

9.3 Integrated Approach Influence Diagram

We use an influence diagram, a decision analysis technique (Buede, 2000), to present our integrated model for trade-off analysis. We chose the influence diagram because it offers a probabilistic decision analysis model that identifies the major system variables and their probabilistic dependencies. This influence diagram assumes a multiple-objective model using the additive value model that has a functional layer above the objectives. If net present value is used instead, the diagram could be easily modified. Figure 9.2 shows the integrated approach influence diagram that displays many of the system decision elements and how they influence each other. The subsequent sections will define each node and the arrows that represent how the nodes influence other nodes.


Figure 9.2 Integrated approach influence diagram

9.3.1 Decision Nodes

A decision node is a rectangle that represents the set of choices the decision-maker must make. For a system design problem, the decisions include establishing the system functions, the objectives, the requirements, and system alternatives. Adding the functions, objectives, and requirements as systems engineering decisions is very important, especially in the system concept, architecting, and design decisions. In later life cycle stages, once decisions are made, the functions, objectives, and requirements may be known constraints or constants.

9.3.1.1 Requirements

Requirements describe the technical capabilities and levels of performance that a system must have and any constraints on the system or system design. System engineers translate stakeholder needs into clear requirement statements specific enough for domain engineers to implement and test in order to verify that they are satisfied. Requirements engineering is a critical specialty that involves extensive analysis and management, especially when there are requirement changes in later stages of the life cycle. Requirement changes typically have the highest impact on cost and schedule due to the redesign and rework needed to implement the change. In order to mitigate a change's impact, it is important to establish a logical requirement structure with auditable records that can be traced to functions, objectives, and system features. Understanding which functions, objectives, and system features satisfy requirements will reduce the impact of a requirements change when one occurs. The Model-Based Systems Engineering (MBSE) approach is a new paradigm that supports the specification, analysis, design, and verification of a complex system using an integrated system model with a dedicated tool. The integrated system model effectively manages the auditable records of a system design by defining a system element once to be used throughout the model. As a result, once a change is made to an element in the integrated system model, the dedicated tool will instantly identify how the change will impact the system. The MBSE approach is gaining popularity and is expected to become a common state of practice in the near future (National Defense Industrial Association, 2011).

Each requirement can be classified as either a desired capability or a constraint that must be met (Parnell et al., 2011). The arrow from the Requirements decision node to the Functions decision node in Figure 9.2 represents how the desired capabilities inform what functions the system must perform in order to achieve the objectives. Requirements influence the value functions indirectly through the Functions and Objectives decision nodes in two ways. First, the desired capabilities inform the threshold and objective values used to define the shape of the value function. Second, the constraints are those that define the screening criteria, minimum acceptable value, or walk-away point that eliminates alternatives from the decision.

9.3.1.2 Functions

A function is an action that transforms inputs and generates outputs, involving data, materials, and/or energies (SEBoK, 2015). The INCOSE Systems Engineering Handbook (SEH) defines a function as a characteristic task, action, or activity that must be performed to achieve a desired outcome (SEBoK, 2015). Functional analysis is a key systems engineering task that identifies the system functions and system element interfaces required to achieve the performance objectives (Parnell et al., 2011). The arrow from the Functions decision node to the Objectives decision node in Figure 9.2 represents how functions are used to identify the fundamental objectives we want to achieve; see Section 7.8 for a discussion on how to develop a functional hierarchy.

Functions are allocated to the structural elements of the system alternative that perform them. Depending on the system decision, these system elements may be subsystems, components, or parts. Each system element has system features that define the element's characteristics. System engineers use models and simulations, subject matter expertise (SME), operational testing results, and data from legacy systems to decide which functions will be allocated to each system feature. The arrow from the Functions decision node to the System Alternatives decision node in Figure 9.2 represents the functional allocation to the system features; these allocations depend heavily on the perspective of how functions will be satisfied. In Figure 9.3, we show an example of a functional allocation to subsystems from the squad enhancement design example.


Figure 9.3 Functional allocation to subsystems

9.3.1.3 Objectives

An objective is a statement of something that we desire to achieve (Keeney, 1992). Identifying the objectives we use to evaluate our system alternative is the most critical step in the decision process; see Chapter 7 for a detailed discussion on the development of the objectives. Figure 9.2 shows two outgoing arrows from the Objective decision node: one to the Scenarios uncertainty node that represents the objective's influence on the scenario context and another to the Value Function constant node. Each objective has one or more value functions that assess a system alternative's achievement of that objective.

9.3.1.4 System Alternatives

This decision node involves the selection of alternatives within the decision space that will be considered in the system decision. The decision space involves the exploration of creative alternatives that span the opportunity space as much as possible; Chapter 8 will discuss in detail methods system engineers can use to generate these creative alternatives. The alternatives within the decision space are what the SEBoK refers to as the physical architecture. System alternatives each have a collection of elements with system features that define the characteristics of the alternative. The settings of the system features distinguish one alternative from another. The arrow in Figure 9.2 from the System Alternative decision node to the System Feature uncertainty node represents how the alternative has system features that define its characteristics. For the squad enhancement design example, we have seven alternatives. Table 9.1 shows each alternative for our example, along with their system features that define their unique characteristics; the types of UAVs are from the example in Chapter 5.

Table 9.1 Squad Enhancement Alternatives and System Features

image

9.3.2 Uncertainty Nodes

Uncertainty nodes are ovals that represent uncertain information relevant to the decision; they could be a single probability value, a random variable, or a vector of data. For the system design problem, major uncertainties include the following: stakeholder needs, scenarios, priorities, adversary actions, competition actions, technological maturity, system features, system models, data, value measures (system performance), and resources (cost). An outcome space or a probability distribution can be assigned to the independent uncertain variables. The uncertainty can then be propagated through the value and cost models by Monte Carlo simulation.

9.3.2.1 Stakeholder Needs

In the early stages of the conceptual design, stakeholders develop need statements that express what they think the system should be able to do from their perspectives. The arrow in Figure 9.2 from the Stakeholder needs uncertainty node to the Requirements decision node represents the need's refinement into a system requirement. The uncertainty associated with stakeholder needs is the result of an unclear problem statement within the opportunity space and ill-defined values in the objective space. To address this challenge, we use a value hierarchy that captures a composite perspective of multiple stakeholders with conflicting objectives. For the squad enhancement design example, the key stakeholders include the decision-makers responsible for the system decisions, the acquisition personnel who manage the development, the technologists that develop the technology, the specialty engineers who design the system, the contractors that build the systems, the logistical personnel who distribute and maintain the system, and the soldiers who operate the system.

9.3.2.2 Adversary/Competition

This type of uncertainty node involves the external factors that influence the outcome of a scenario; this relationship is represented by the arrow in Figure 9.2 from the Adversary/Competition uncertainty node to the Scenarios uncertainty node. In defense, information security, and homeland defense type problems, we typically have adversaries while in the private sector, we have competitors that influence the outcome of a scenario. For the squad enhancement design example, the adversaries are the enemy forces that fight against the squad.

9.3.2.3 Scenarios

Scenarios involve political–military situations, missions, and the environment. Scenarios allow systems engineers to understand how a system will operate in different environmental conditions. In order to evaluate the system, system engineers can leverage operational simulations that model the system performing its intended purpose within different scenarios. Because there are many situational outcomes for each scenario, we should use stochastic models that simulate multiple scenarios or run trials. In Figure 9.2, there are two outgoing arrows from the Scenarios uncertainty node to the Priorities and System Models uncertainty nodes; these arrows represent the following two influences: first, each scenario typically has different priorities that depend on the scenario's context; second, scenarios influence the development of system simulation models by specifying what should occur during the conduct of the simulation. In the squad enhancement design example, we use two scenarios that represent the assault and defend missions.

In the assault scenario, the squad conducts an assault to seize terrain and destroy the enemy. The squad moves toward the enemy with their robot in front and UAV flying overhead. The squad calls for indirect fire as they approach the enemy. The enemy is positioned on high ground with Improvised Explosive Devices around their perimeter. The squad establishes a support by fire position with the Automatic Weapons and Grenadiers while the Rifleman assaults the enemy. The enemy calls for another enemy element to reinforce their position. The squad calls for indirect fire on the reinforcements once they identify their location.

In the defensive scenario, the squad is in a defensive position in a combat outpost with 10 ft walls, two gate entry points, and fighting positions around the perimeter. Robots are positioned outside the perimeter to act as forward sensors. Six enemy insurgents wearing suicide vests approach the combat outpost and attempt to detonate at the gate to breach into the combat outpost, make entry, and detonate additional suicide vests within the perimeter. The enemy then calls indirect fire from a mortar position that is beyond the friendly line of sight. Enemy crew-served weapons open fire on the combat outpost from long range while the enemy approaches the combat outpost from three different directions. The squad calls for indirect fire once they identify enemy force locations beyond line of sight. A UAV loiters over the area of operations in order to provide early warning to the squad.

When developing a scenario, we must ensure that the entire system has an opportunity to perform all of its intended functions in order to properly evaluate each alternative. For example, in the assault and defense scenarios, the UAV and robot have an opportunity to identify the enemy and provide early warning to the squad beyond line of sight. In addition, both scenarios have enough hostile enemies that fire at the squad in order to evaluate the system effectiveness with regard to the protection function.

9.3.2.4 Priorities

In a multiple-objective decision model, priorities are instantiated using swing weights (Chapter 2). If we have one scenario, the priorities would be constant. In general, when there are multiple scenarios, the priorities will be different for each of them (Parnell et al., 1999). The swing weight matrix is a useful way to assess swing weights (Parnell et al., 2013). In the squad enhancement design example, we evaluate the system with respect to the assault and defense scenarios; Tables 9.2 and 9.3 show the swing weight matrices for each scenario, respectively. Within each matrix, the value measures are placed in different importance columns depending on whether they are mission-critical, enable a capability, or enhance a capability. The rows of the matrix classify the value measure's impact on capability based on the range of the value measure scale (large, medium, or small capability gap between the walk-away and the ideal levels). Placing the value measures within the columns and rows of the swing weight matrix provides both a subjective importance criterion and an objective criterion based on the impact of the value measure scale's range. The scenario swing weights quantify the trade-offs between the value measures differently, which has an impact on our decision.

Table 9.2 Swing Weight Matrix for the Assault Scenario

Capability Impact | Mission-Critical | Enables Capability | Enhances Capability
Each cell lists the value measure followed by its matrix weight and normalized swing weight in parentheses.
Significant impact | Lethality (100, 0.14); Lethal mitigation (90, 0.13); Beyond LOS (90, 0.13); Kinetic protection (70, 0.10) | Weighted mobility (70, 0.10) | Power (20, 0.03); Logistical impact (20, 0.03)
Medium impact | Secured connectivity (65, 0.09); Communication range (60, 0.09) | Bandwidth (50, 0.07); Detection distance (50, 0.06) | Chemical bio protection (15, 0.02)
Minimal impact | (none) | (none) | Nuclear radio protection (5, 0.01)

Table 9.3 Swing Weight Matrix for the Defense Scenario

Capability Impact | Mission-Critical | Enables Capability | Enhances Capability
Each cell lists the value measure followed by its matrix weight and normalized swing weight in parentheses.
Significant impact | Kinetic protection (100, 0.15); Lethal mitigation (90, 0.14); Detection distance (90, 0.14) | Beyond LOS (70, 0.11) | Lethality (30, 0.05); Power (20, 0.03); Weighted mobility (20, 0.03); Logistical impact (20, 0.03)
Medium impact | Chemical bio protection (65, 0.10) | Secured connectivity (50, 0.08); Bandwidth (40, 0.06) | Communication range (10, 0.02)
Minimal impact | Nuclear radio protection (60, 0.09) | (none) | (none)

It is important to understand how changes in the swing weight assignments impact the results of the alternative values. A common approach to visualizing the impact of swing weight changes is with a line chart that varies the weight from 0 to 1; see Section 5.3.15.
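The normalization behind the swing weight matrices can be sketched in a few lines of Python: each matrix weight is divided by the sum of all matrix weights to produce a swing weight that sums to 1 across the measures. The matrix weights below are the assault scenario entries from Table 9.2; this is an illustrative sketch, not the chapter's companion model.

```python
# Deriving normalized swing weights from swing weight matrix entries
# (matrix weights from Table 9.2, assault scenario).
matrix_weights = {
    "Lethality": 100, "Lethal mitigation": 90, "Beyond LOS": 90,
    "Kinetic protection": 70, "Weighted mobility": 70,
    "Secured connectivity": 65, "Communication range": 60,
    "Bandwidth": 50, "Detection distance": 50,
    "Power": 20, "Logistical impact": 20,
    "Chemical bio protection": 15, "Nuclear radio protection": 5,
}

total = sum(matrix_weights.values())
swing_weights = {m: w / total for m, w in matrix_weights.items()}

# Swing weights sum to 1 by construction; Lethality normalizes to
# 100/705, which rounds to the 0.14 shown in Table 9.2.
print(round(swing_weights["Lethality"], 2))  # 0.14
```

Varying one matrix weight while renormalizing the rest reproduces the 0-to-1 sensitivity sweep behind the line chart mentioned above.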

9.3.2.5 Technology Maturity

Technological maturity is one of the most difficult uncertainty nodes to assess and typically has the highest impact on each alternative's risk. The Department of Defense has established nine Technology Readiness Levels (TRL 1–9) that acquisition managers use to classify system technological maturity (Mankins, 1995). Each subsystem, component, or part has system features with their own unique level of maturity. The technology drives the uncertainty in the system features and the data used to populate the value measures. Existing system alternatives that are already in use may have features with high levels of technical maturity, while nonexistent, conceptual systems may have low levels of maturity. Low levels of system feature maturity can cause higher risk of achieving system value, cost, and schedule. System integration is another challenge that we can classify within the Technology Maturity uncertainty node. Functional analysis identifies the component and part integration requirements that are often the leading cause of system redesign and rework during the life cycle. Depending on the technological maturity of the component or part, there may be significant uncertainty in its ability to integrate with other components or parts in the system. These integration challenges add an additional layer of complexity that increases the risk of achieving system value, cost, and schedule. The arrow in Figure 9.2 from the Technology Maturity uncertainty node to the System Features uncertainty node represents the risk technology imposes on the system feature settings that define each alternative.

9.3.2.6 System Features

System features are the characteristics or design parameters that define each alternative. The types of features coincide with the type of system decision within each phase of the life cycle (concept, architecture, design, or operations). A system feature is analogous to what is known as a local property within physical architecture: a property that is local to a single system element (SEBoK, 2015). In Figure 9.2, the System Features uncertainty node influences the System Models, Value Measure Data, and Resource (Cost) uncertainty nodes. System Models model alternatives within a scenario by setting the model inputs so that the model run represents the alternative of interest. As a result, the system features that define the characteristics of an alternative become the inputs to the simulation model(s). When models are not used to evaluate the whole system or a subset of the system, we acquire value measure data from subject matter experts, development testing, operational testing, and legacy system architectures. System feature settings drive the cost of the alternative. Typically, costs are decomposed into different components that are allocated to subsets of system features. As a result, we can attribute the cost drivers to system features of the alternative. Table 9.1 has examples of system feature settings for each of the seven alternatives in the squad enhancement design example.

9.3.2.7 System Models

System models are critical to understanding the behavior of a system, especially in the early stages of the life cycle. There are many types of models that represent different domains and perspectives; they include operational simulations that model a system performing a mission, models that estimate life cycle costs, physics-based computational models that help understand system feasibility, and many more. Because models are abstractions of reality, they must be verified and validated so that they can accurately inform the system decision. The arrow in Figure 9.2 from the System Models uncertainty node to the Value Measure Data uncertainty node represents how the model outputs generate the data used to evaluate an alternative. For each alternative in the squad enhancement design example, we modeled the assault and defense scenarios using an agent-based simulation; for each scenario, we executed 100 trials. The value model used four model outputs to populate the data for the following value measures: Beyond Line of Sight Awareness, Line of Sight Distance, Protection, and Logistical Impact. Because the model is stochastic, each alternative has 100 different outcomes for each of the four outputs.

9.3.2.8 Value Measure Data

Data, information, and knowledge along with values and alternatives are what drive decisions. Data is for the systems engineer what electricity is for the electrical engineer and construction material is for the civil engineer; it is the underlying commodity on which all systems engineering methods, processes, and activities are based. We use data to populate value measures for each alternative. Value measures quantify the objectives we use to evaluate our alternatives. See Section 7.7 for details on how to develop value measures. Generally, we obtain data from models and simulations, subject matter experts, development testing, operational testing, and legacy system architectures. The data we use are either known (deterministic) or uncertain (stochastic). Chapter 3 describes a number of ways to handle uncertainty. In this chapter, we demonstrate the use of discrete probability elicitations for the constructed scale value measures, SME estimation of triangular distribution parameters, and agent-based simulation output data for the natural scale value measures.
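The three uncertainty data types named above can each be sampled in a Monte Carlo trial. The sketch below shows one way to draw from them in Python; the scale levels, elicited probabilities, triangular mode, and trial outputs are hypothetical placeholders, not the chapter's elicited values.

```python
import random

random.seed(0)

# Discrete probability elicitation for a constructed scale
# (levels and probabilities below are hypothetical).
levels = [1, 2, 3, 4, 5]
probs = [0.05, 0.15, 0.40, 0.30, 0.10]

def sample_constructed():
    return random.choices(levels, weights=probs, k=1)[0]

# SME-elicited triangular distribution for Bandwidth; the 3-15 mbps range
# follows Table 9.4, but the mode of 9 is a hypothetical elicitation.
def sample_bandwidth_mbps():
    return random.triangular(3.0, 15.0, 9.0)  # (low, high, mode)

# Simulation output treated as an empirical distribution: resample recorded
# trial outcomes (the values here are hypothetical "% enemy detected" trials).
trial_outputs = [0.62, 0.71, 0.55, 0.80, 0.67]

def sample_simulation():
    return random.choice(trial_outputs)

samples = [(sample_constructed(), sample_bandwidth_mbps(), sample_simulation())
           for _ in range(1000)]
```

Each tuple in `samples` represents one Monte Carlo draw of the three value measure inputs, ready to be pushed through the value functions.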

Table 9.4 shows the value measures used for the squad enhancement design example. Additionally, the table indicates the functions and objectives the value measure evaluates, their types, minimal acceptable levels, ideal levels, and type of uncertainty data.

Table 9.4 Value Measures for the Squad Enhancement Design Example

Function | Objective | Value Measure | Type | Minimal Acceptable Value | Ideal Value | Uncertainty Type
Maintain situational awareness | Increase beyond line of sight awareness | Beyond LOS (% detected) | Natural | 0 | 1 | Simulation output
Maintain situational awareness | Increase line of sight range | Detection distance (meters) | Natural | 300 | 1500 | Simulation output
Maintain networked communications | Maximize range | Communication range in various terrain | Constructed | 1 | 5 | Probability elicitation
Maintain networked communications | Maximize bandwidth | Bandwidth (mbps) | Natural | 3 | 15 | Triangular distribution
Maintain networked communications | Provide secured connectivity | Secured connectivity | Constructed | 1 | 8 | Probability elicitation
Maneuver the squad | Increase soldier mobility | Weighted mobility | Multidimensional constructed | 1 | 8 | Probability elicitation
Protect the squad | Protect against kinetic threats | Kinetic protection | Constructed | 1 | 9 | Probability elicitation
Protect the squad | Protect against chemical, biological, radiological, nuclear threats | Chemical biological protection | Constructed | 1 | 9 | Probability elicitation
Protect the squad | Protect against chemical, biological, radiological, nuclear threats | Nuclear radiological protection | Constructed | 1 | 7 | Probability elicitation
Achieve mission effects | Maximize kinetic effects | Lethality (% enemy killed) | Natural | 0 | 1 | Simulation output
Achieve mission effects | Minimize collateral damage | Lethal mitigation | Constructed | 1 | 5 | Probability elicitation
Sustain the squad | Maximize power efficiency | Power (kw/h) | Natural | 300 | 100 | Triangular distribution
Sustain the squad | Minimize logistical footprint | Logistical impact (rounds fired) | Natural | 6000 | 1000 | Simulation output

Table 9.5 shows the assault scenario alternative data for each value measure. The value measure data with a single number in the table are deterministic, while the data with a distribution are stochastic; the mean is shown in the upper right corner. The distributions in Table 9.5 are the actual data distributions used in the model. We note the importance of considering the variability rather than relying on the mean only; the input distributions shown in Table 9.5 indicate that there is significant uncertainty in the alternatives' performance. A companion Excel file is provided on the Wiley website for this book.

Table 9.5 Squad Scores on Each Value Measure

image

9.3.2.9 Resources (Cost)

Systems require a number of resources in order to bring the system into being. The most common resource, and one that is almost always present, is cost. Life cycle cost estimation is a challenging endeavor, especially for unprecedented systems. Chapter 4 reviews a number of different cost estimating methods. For the squad enhancement design example, the cost model decomposes the system into cost components that align with the major subsystems of the squad system; this alignment serves as the allocation of cost components to subsystems. The cost components are the soldier sensor, radio, rifle, protective suit, the UAV, and robot. In order to capture the life cycle costs, the model incorporates four types of costs: unit costs, training, maintenance, and disposal. The problem assumes that the deterministic costs are derived from the learning curve, analogy, or the cost estimating relationship methods described in Chapter 4. The component costs that are stochastic are derived from a triangular distribution with parameters elicited from subject matter experts.
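A stochastic life cycle cost roll-up of the kind just described can be sketched as follows. The component names mirror the subsystems named in the text, but every dollar figure and triangular parameter below is a hypothetical placeholder, not the example's cost data.

```python
import random

random.seed(1)

# Deterministic component costs (e.g., from learning curve, analogy, or CER
# methods); figures in $M are hypothetical.
deterministic = {"rifle": 1.2, "radio": 0.8}

# Stochastic components with SME-elicited (low, mode, high) triangular
# parameters in $M; all values are hypothetical.
stochastic = {
    "soldier sensor": (0.5, 0.9, 1.6),
    "protective suit": (0.3, 0.5, 1.0),
    "UAV": (1.0, 1.8, 3.5),
    "robot": (0.8, 1.4, 2.9),
}

def sample_life_cycle_cost():
    """One Monte Carlo draw of the alternative's total life cycle cost."""
    cost = sum(deterministic.values())
    for low, mode, high in stochastic.values():
        cost += random.triangular(low, high, mode)  # note: (low, high, mode)
    return cost

costs = [sample_life_cycle_cost() for _ in range(10_000)]
```

The resulting `costs` vector is the cost half of the (value, cost) pairs plotted on the stochastic Pareto chart later in the chapter.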

9.3.3 Constant Node

A constant node is a number or a function that remains constant and is represented by a diamond. There is one constant node in Figure 9.2, which represents the value functions. The value functions remain constant for a given decision opportunity and set of requirements, functions, and objectives; when any of these change, the value functions will change.

9.3.3.1 Value Functions

Value functions show the returns to scale of the value measures. These functions translate the raw value measure data into a scale between 0 and 100 (0 and 1 or 0 and 10 can also be used). The value function range should include the walk-away point or minimal acceptable value, the threshold, the objective, and the ideal values. The shape of the value function is often dictated by the requirements defined by the systems engineer as either a desired capability or a constraint. The constraints are what set the minimal acceptable level or walk-away point, while the desired capabilities help determine the marginally acceptable, target, stretch goal, and meaningful limit levels; see Section 5.3.2. Table 9.6 shows the value functions for the squad enhancement design example.

Table 9.6 Squad Enhancement Design Example Value Functions

image
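A piecewise-linear value function of the kind described above can be sketched in Python. The breakpoints below are hypothetical illustrations, not the functions shown in Table 9.6; the `detection` walk-away and ideal levels follow Table 9.4, while the intermediate breakpoint is an assumed shape.

```python
def value_function(x, breakpoints):
    """Piecewise-linear value function mapping a raw score onto a 0-100 scale.

    breakpoints: (measure level, value) pairs sorted by increasing level,
    spanning the walk-away point through the ideal level.
    """
    if x <= breakpoints[0][0]:
        return breakpoints[0][1]
    if x >= breakpoints[-1][0]:
        return breakpoints[-1][1]
    for (x0, v0), (x1, v1) in zip(breakpoints, breakpoints[1:]):
        if x0 <= x <= x1:
            return v0 + (v1 - v0) * (x - x0) / (x1 - x0)

# Increasing measure: Detection distance, walk-away 300 m, ideal 1500 m,
# with a hypothetical midpoint expressing diminishing returns.
detection = [(300, 0), (900, 70), (1500, 100)]

# Decreasing measure (more is worse): Power, walk-away 300, ideal 100; the
# value column decreases as the level increases.
power = [(100, 100), (300, 0)]

print(value_function(600, detection))  # 35.0
print(value_function(200, power))      # 50.0
```

The same routine handles both returns-to-scale directions because the value column, not the level column, carries the direction of preference.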

9.3.4 Value Nodes

The value node is a hexagon representing either the total value or total life cycle cost of an alternative.

9.3.4.1 Total Value

An alternative's total value can be calculated from its value measure scores, the value functions, and the priorities (swing weights) using the additive value model (see equations (2.1) and (2.2)). One way to understand how each alternative achieves each of the objectives in the value hierarchy is to use a value component chart. Figure 9.4 shows the value components of each alternative as a stacked bar chart. To the right, we see the Ideal alternative that represents the maximum possible value score, derived from the swing weights for each value measure. The Hypothetical Best alternative is the maximum value achieved for each value measure in the set of alternatives. The difference between the Ideal and the Hypothetical Best is the value gap that the set of alternatives cannot achieve with existing technologies. Depending on its size, we may want to consider including other new alternatives that close the gap. In addition, we may have an opportunity to combine components from the existing alternatives to create new alternatives. When we reconfigure system components to create new alternatives, we must consider the system integration challenges that may result, as described earlier in the Technology Maturity uncertainty node section.


Figure 9.4 Value component chart for the assault scenario
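The additive roll-up behind the value component chart, together with the Ideal and Hypothetical Best calculations, can be sketched as follows. The weights, measures, and scores below are hypothetical stand-ins with only three measures, not the example's thirteen.

```python
# Additive value model: total value is the swing-weighted sum of the value
# measure scores. All weights and scores here are hypothetical.
weights = {"lethality": 0.5, "protection": 0.3, "mobility": 0.2}

alternatives = {                # value-function outputs on a 0-100 scale
    "Baseline":   {"lethality": 40, "protection": 55, "mobility": 60},
    "Lethal":     {"lethality": 90, "protection": 45, "mobility": 50},
    "Survivable": {"lethality": 35, "protection": 95, "mobility": 40},
}

def total_value(scores):
    return sum(weights[m] * scores[m] for m in weights)

totals = {name: total_value(s) for name, s in alternatives.items()}

# Ideal: 100 on every measure. Hypothetical Best: the best achieved score on
# each measure across the alternative set. Their difference is the value gap.
ideal = total_value({m: 100 for m in weights})
hypothetical_best = total_value(
    {m: max(s[m] for s in alternatives.values()) for m in weights})
value_gap = ideal - hypothetical_best
```

Stacking each alternative's weighted components (`weights[m] * scores[m]`) reproduces the bars of Figure 9.4, with the gap visible against the Ideal bar.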

9.3.4.2 Total Cost

The total cost value node is the estimated total life cycle cost for each alternative. Because life cycle cost is such an important aspect of any systems decision, we often exclude cost as a value measure and treat it as an independent variable. An effective way to understand the trade-offs between value and cost is to use a Pareto chart. The Pareto chart is a scatter plot with cost on the horizontal axis and value on the vertical axis; each dot represents an alternative's total life cycle cost and total value score. Figure 9.5 shows a deterministic cost versus value Pareto chart using the average costs and value scores from the squad enhancement design example. We used a notional cost spreadsheet model described in Section 9.3.2 to calculate each alternative's life cycle cost. We can see that the Sustainable and Survivable alternatives are deterministically dominated by all the others. It makes no sense to select a dominated alternative when we can select a nondominated alternative with a higher value for less cost. The set of nondominated alternatives is known as the Pareto Frontier.


Figure 9.5 Deterministic Pareto chart for the assault scenario

However, Figure 9.5 does not consider the uncertainty associated with each alternative's value and cost and does not show the risk or the probability of a lower value. The majority of value trade-off studies in the literature are deterministic. Eliminating deterministically dominated alternatives without considering risk may lead to the wrong decision. Cost estimation techniques already incorporate uncertainties using Monte Carlo simulations and other methods. A major contribution of our approach is that we simultaneously integrate value trade-off uncertainties and cost uncertainties in order to facilitate better value and risk identification. If we do not consider how much variation there is in the consequences of our decision, we may end up making the wrong decision.

9.3.4.3 Integrated Trade-Off Analysis

The integrated trade-off analysis value node represents our approach that simultaneously models value and cost uncertainties to better identify value and risk. Figure 9.6 illustrates the integrated approach by showing how we use a variety of uncertainty modeling methods to propagate uncertainty through the value and life cycle cost models in order to analyze value and risk. After performing a Monte Carlo simulation, we have a collection of value and cost vector data for each alternative.


Figure 9.6 The integrated approach

When faced with an uncertain system decision, we can leverage three types of analytical charts that help the systems engineer understand the risk associated with a decision and identify what drives the uncertainties in each alternative. First, stochastic Pareto charts identify nonstochastically dominated alternatives with respect to value and cost; second, cdf charts (S-curves) compare the alternative risk profiles; and third, tornado charts identify the value measures and cost components that have the highest impact on the alternative uncertainties. We will now describe each of these charts separately and use the squad enhancement design example to demonstrate the insights we can obtain from them with respect to value and cost.

9.3.4.4 Stochastic Pareto Chart

In order to address the limitations of the deterministic Pareto chart, we create a stochastic Pareto chart, shown in Figure 9.7, by displaying a two-dimensional box plot for each alternative's cost and value. The boxes along each axis represent the second and third quartiles, while the lines represent the first and fourth quartiles of the vector output data from the value and cost models. We can create these box plots in Microsoft Excel using a scatter plot with a combination of four data series for each alternative, two for the cost axis and two for the value axis. The lines, otherwise known as whiskers, are created from the vector data using the maximum and minimum data points. The boxes are created using the third quartile, mean, and first quartile; to create the box, increase the line style width of the data series. The box plots allow us to understand the uncertainty associated with each alternative's cost and value simultaneously.
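The five statistics behind each box-and-whisker can be computed directly from the Monte Carlo output vector for one alternative, as in this short sketch (the trial data below are made up):

```python
# Box-plot statistics sketch: whiskers from the extremes, box edges from the
# first and third quartiles, with the mean marked inside the box.
import statistics

def box_stats(trials):
    """Return (min, Q1, mean, Q3, max) for one alternative's output vector."""
    q = statistics.quantiles(trials, n=4)  # [Q1, median, Q3]
    return (min(trials), q[0], statistics.mean(trials), q[2], max(trials))

# Notional cost trials for one alternative.
cost_trials = [52, 55, 58, 60, 61, 63, 64, 67, 70, 75]
lo, q1, mean, q3, hi = box_stats(cost_trials)
```

Computing the same five numbers for the value vector gives the second axis of the two-dimensional box plot.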


Figure 9.7 Stochastic Pareto chart for the assault scenario

The stochastic Pareto chart allows us to consider value, risk, and dominance simultaneously. In addition, it provides important information for affordability analyses (see Chapter 3). We can see in Figure 9.7 that Sustainable is stochastically dominated by all other alternatives and can be eliminated from consideration. If Performance and Defendable are affordable, we then focus on understanding what drives their uncertainty to mitigate risk. If Defendable is not affordable, we then consider either LongRange or Survivable. If we had used the deterministic Pareto chart from Figure 9.5 to eliminate the Survivable alternative as a dominated solution, we would have missed an important trade-off consideration. We can see in Figure 9.7 that LongRange has a higher risk in value (probability of lower value) compared to Survivable. We may want to accept a higher cost by choosing Survivable to mitigate the risk associated with LongRange. Of course, we could instead choose LongRange and accept the value risk in order to reduce the cost. In order to better understand the risk implications, we can use cdf charts to compare alternative risk profiles.

9.3.4.5 Cumulative Distribution Function (S-Curve) Charts

The cdf chart displays an alternative's potential outcomes by accumulating the area under the outcome's probability mass functions for discrete data and the probability density functions for continuous data. Typically, the shape of the line in the chart is an S-curve that depicts the probability that the outcome will be at or below a given value. The horizontal axis has the outcome scale, either value or cost, while the vertical axis has the probability. Figure 9.8 shows a cdf chart with six S-curves that represent the uncertain alternative value outcomes, otherwise known as the risk profiles. Sustainable is deterministically dominated by the other five alternatives. Attack is deterministically dominated by Survivable, Defendable, and Performance. Survivable is deterministically dominated by Defendable and Performance. The Performance and Defendable alternatives stochastically dominate all others because their S-curves are positioned completely to the right of all others. The risk profiles of Survivable and LongRange cross, indicating that there is no clear winner between the two; we may want to accept a higher cost by choosing Survivable to mitigate the risk associated with LongRange. The cdf chart tells us that there is a 43% chance Survivable will outperform LongRange and that Survivable has less risk due to its steeper risk profile. In general, when the risk profiles of alternatives cross, we then consider risk preference (risk-averse, risk-neutral, or risk-taking) during our system decision. A risk-averse decision would spend more for Survivable to guarantee a higher value, while a risk-taking decision would select LongRange to save money with the risk of achieving less value.
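The empirical risk profile and the chance that one alternative outperforms another both fall out of the Monte Carlo vectors directly. The following sketch uses tiny made-up trial vectors, not the 43% figure from the squad example:

```python
# Empirical cdf (S-curve) and head-to-head comparison from trial vectors.
def ecdf(trials, x):
    """P(outcome <= x): the height of the S-curve at x."""
    return sum(t <= x for t in trials) / len(trials)

def prob_outperforms(a_trials, b_trials):
    """Fraction of paired trials where alternative A achieves more value than B."""
    return sum(a > b for a, b in zip(a_trials, b_trials)) / len(a_trials)

# Notional value trials for two alternatives with crossing risk profiles.
survivable = [62, 64, 65, 66, 68]   # narrower spread: steeper S-curve
long_range = [55, 60, 67, 70, 72]   # wider spread: more risk, more upside
p = prob_outperforms(survivable, long_range)
```

With these notional vectors, neither S-curve lies entirely to the right of the other, which is exactly the crossing-profile situation that forces a risk preference judgment.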


Figure 9.8 Value cumulative distribution chart for the assault scenario

When we want to understand how to best mitigate an alternative's risk, we can use tornado diagrams to identify the value measures and cost components that have the highest impact on value and cost, respectively.

9.3.4.6 Tornado Diagrams

An effective way to identify the impact of uncertainty is to perform sensitivity analysis using tornado diagrams. Tornado diagrams allow us to compare the relative importance of each uncertain input variable with horizontal bars; the longer the bar, the higher the impact on the output variable's variation. The bars are sorted so that the longest bars are at the top; sorting the bars in this way makes the diagram look like a tornado. The length of the bars depends on the type of tornado diagram. Deterministic tornado diagrams vary each input variable using low, base, and high settings while all other input variables are held constant. A stochastic tornado diagram uses the vectors of input and output variable trials from a Monte Carlo simulation (Parnell et al., 2013). The low end of the bar is the average output variable from the subset of trials where the input is less than a specified lower percentile. Similarly, the high end of the bar is the average output variable from the subset of trials where the input is greater than a specified higher percentile. For the squad enhancement design example, we use stochastic tornado diagrams with a low percentile of 0.3 and a high percentile of 0.7. Figure 9.9 shows the value and cost tornado diagrams for the Performance alternative. The input variables for value are the value measures and the input variables for cost are the cost components. The horizontal axis shows each input variable's impact on the total variation for the value and cost. We can see in Figure 9.9 that the LethalMitigation value measure has the highest impact on the value's variation while the Lenses cost component has the highest impact on the cost's variation.
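The stochastic tornado bar described above can be sketched as follows; the input/output trial vectors and the simple doubling relationship are made-up illustrations of the conditioning step:

```python
# Stochastic tornado bar sketch: condition the output trials on the input
# falling below the 30th or above the 70th percentile, then average each subset.
import statistics

def tornado_bar(inputs, outputs, lo_pct=0.3, hi_pct=0.7):
    """Return (low_end, high_end) of the bar for one uncertain input."""
    ranked = sorted(inputs)
    lo_cut = ranked[int(lo_pct * len(ranked))]
    hi_cut = ranked[int(hi_pct * len(ranked)) - 1]
    low = statistics.mean(o for i, o in zip(inputs, outputs) if i < lo_cut)
    high = statistics.mean(o for i, o in zip(inputs, outputs) if i > hi_cut)
    return low, high  # bar length |high - low| measures impact on the output

# Notional trials: the output simply doubles the input here.
inputs = list(range(1, 11))
outputs = [2 * i for i in inputs]
low, high = tornado_bar(inputs, outputs)
```

Repeating this for every value measure (or cost component) and sorting the bars by length produces the tornado shape; the longest bar identifies the input to attack first when mitigating risk.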


Figure 9.9 Value and cost stochastic tornado diagrams for the performance alternative from the assault scenario

Prior to performing the integrated trade-off analysis, we allocated functions and cost components to subsystems (see Figure 9.3 for an example). Our value hierarchy contains objectives that assess the performance of functions and value measures that define how well an alternative achieves the objectives. As a result, the value measures are indirectly allocated to system features through the objectives and functions. Cost components are generally allocated directly to subsystems. Because of these indirect and direct allocations, we can use tornado diagrams to identify the system features that have the highest impact on the system decision. Figure 9.10 illustrates how we use tornado diagrams to indirectly trace high impact value measures and directly trace cost components to system features.
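The indirect trace from a value measure back to system features can be sketched as a chain of lookups; the allocations below are notional stand-ins for the chapter's Figure 9.10 linkage, not the actual squad model allocations.

```python
# Tracing a high-impact value measure to subsystems through the
# measure -> objective -> function -> subsystem allocations (all notional).
objective_of_measure = {"LethalMitigation": "Protect the Squad"}
function_of_objective = {"Protect the Squad": "Defeat Threats"}
subsystems_of_function = {"Defeat Threats": ["Protective Suit", "Soldier Sensor"]}

def trace_measure(measure):
    """Follow the indirect allocation chain down to the allocated subsystems."""
    objective = objective_of_measure[measure]
    function = function_of_objective[objective]
    return subsystems_of_function[function]

features = trace_measure("LethalMitigation")
```

Cost components skip the first two lookups because they are allocated directly to subsystems; only the value measures need the full chain.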


Figure 9.10 Value measure and cost component linkage to system features

As we learned from our stochastic Pareto chart and cdf charts, we do not have a clear winner between the Survivable and LongRange alternatives. To help understand how these alternative uncertainties impact the system decision, we can use tornado diagrams to identify the system features that are driving risk.

We can see from the top bar of the tornado diagrams in Figure 9.11 that the radio drives the majority of the risk for the LongRange value, while the soldier sensor drives the majority of its cost. The protective suit drives the majority of the risk for the Survivable value, while the soldier sensor drives the majority of its cost. These subsystems and the system features that characterize them have the highest impact on the system decision. These insights provide a clearer understanding of what drives the alternatives' risk and how to prioritize system feature refinements. For example, the systems engineer can reduce the risk of the LongRange alternative by investing more resources into improving the radio's range, security, and bandwidth features.


Figure 9.11 Example of value measure and cost component linkages to system features

9.4 Other Types of Trade-Off Analysis

There are a number of other types of trade-off analysis we can perform using the integrated approach. The approach can be applied to evaluate any type of system decision encountered throughout the life cycle; the key application differences are the types of data, information, models, value measures, and system features used during the decision. Chapters 10–14 will explain in more detail the concept, architecture, design, sustainment, and programmatic decisions, respectively. Oftentimes, we must compare alternatives across different scenarios in order to capture their value differences and consider them in the systems decision. When we can assume that each system component or subsystem affects only one value measure, we can perform system component optimization to identify a solution that maximizes value under a set of constraints (Parnell et al., 2011). When our value component chart reveals an unacceptable value gap, we can evaluate new technologies that will achieve higher value. When the system design specifications are frozen during development or while the system is in the operational stage, we can use the approach to prioritize system modifications; these modifications typically add new components as new technologies emerge. We can examine the tornado diagrams and the lower end of the S-curve in the cdf chart to identify the sources of risk and support risk management programs during all stages of the life cycle.

9.5 Simulation Tools

In order to effectively deal with the multiple sources of uncertainty, we need software that can facilitate Monte Carlo simulations. We prefer tools that are Microsoft Excel Add-Ins so that we can build our value and cost models in the same environment. There are a number of software features that every tool should offer; these include the capability to run thousands of trials, model correlated uncertain input variables, identify the uncertain input variables that have the highest impact on the model's variation, and clearly display the results of the output distributions. Our next two sections discuss some of the Microsoft Excel Add-Ins available to perform Monte Carlo simulations.

9.5.1 Monte Carlo Simulation Proprietary Add-Ins

There are a variety of tools used to conduct Monte Carlo simulation analysis. Three of the leading proprietary software tools are Crystal Ball (www.crystalball.com), @Risk (www.palisade.com), and Risk Solver (www.solver.com/risksolver.htm). All three of these tools provide an intuitive user interface that supports all the software features mentioned earlier. The key advantage of using any of these proprietary software tools is the relative ease of use and the accessibility of the output analysis charts they provide. The disadvantages are their cost, the need to create distribution data from other simulations, and the fact that they create static output charts that require the user to rerun the simulation after changing the input variables.

9.5.2 The Discipline of Probability Management

ProbabilityManagement.org (http://probabilitymanagement.org/), a 501(c)(3) nonprofit, has pioneered an approach to simulation based on an open data standard for storing Monte Carlo trials. The new data standard, known as the Stochastic Information Packet (SIP), ushers in a new category of data, which simulates the future instead of recording the past. The SIP makes the abstract concept of a probability distribution actionable, additive, and auditable. The discipline of probability management was formalized by Savage et al. (2006) and further developed by Savage (2012). Traditional simulation illuminates uncertainty by generating random variates and running them through an analytical model in a single application. The discipline of probability management allows random variates to be generated in one application for use in other applications. As an analogy, substituting electricity for random variates, the former is like a generator with a light bulb attached, while the latter is like a collection of generating plants, light bulbs, and appliances connected by a power grid. The SIPs are the electricity, which allow the results of simulations to be aggregated across such platforms as Crystal Ball, @RISK, Risk Solver, Matlab, and R. A coherent collection of SIPs that preserve statistical dependence is known as a Stochastic Library Unit with Relationships Preserved (SLURP).
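The core SIP idea is simple enough to sketch: a distribution becomes an array of pre-generated trials, and two SIPs in the same SLURP share a trial index, so they can be combined element-wise while preserving statistical dependence. The tiny arrays below are made-up examples, not a real SIP library.

```python
# SIP sketch: a probability distribution stored as an array of trials.
# Because the two SIPs share a trial index (a SLURP), they add like ordinary
# numbers while preserving any correlation baked into the trials.
dev_cost  = [10, 12, 15, 11, 14]   # SIP: one trial per slot (notional)
prod_cost = [30, 36, 45, 33, 42]   # correlated with dev_cost, trial by trial

# Additive: total cost is the element-wise sum across the shared trial index.
total_cost = [d + p for d, p in zip(dev_cost, prod_cost)]

# Auditable: any statistic can be recomputed from the stored trials.
mean_total = sum(total_cost) / len(total_cost)
```

This is the sense in which SIPs make distributions "actionable, additive, and auditable": the arithmetic is done on trials, and the summary statistics are derived afterward rather than averaged prematurely.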

Computers are now fast enough to perform interactive simulation, in which thousands of trials are processed in real time while the user adjusts the parameters of the model. This provides an experiential understanding not possible with command-driven simulation. Just as light bulbs may be used by those with no knowledge of how the electricity was generated, probability management enables stochastic dashboards for managers with little understanding of how the random variates were generated. The nonprofit probabilitymanagement.org developed and maintains the SIPmath™ open cross-platform standard for SIP libraries, which may be easily generated by such software and stored as XML, CSV, or XLSX files. It also provides a suite of tools to facilitate the generation of SIP libraries and SIPmath models.

9.5.3 SIPmath™ Tool in Native Excel

The open SIPmath™ standard is platform agnostic, but fortunately, the Data Table function in native Microsoft Excel is now powerful enough to run thousands of trials through a model extremely quickly (http://viewer.zmags.com/publication/90ffcc6b#/90ffcc6b/29; Savage, 2012). Not only is this approach free to any Excel user, but it is fully interactive. Because setting up the Data Table for this purpose can be time-consuming, the nonprofit offers a free add-in to automate it. Their SIPmath™ Modeler Tools create simulations in Excel that do not require rerunning the tool once the inputs are changed. When the user changes inputs, the outputs are dynamically updated after each change; this feature allows the user to build compelling, dynamic, custom-made dashboards in the Microsoft Excel environment without the need for macros. The SIPmath Modeler Tools operate in two modes: Random and SIP Library Modes.

9.5.3.1 Random Mode

In this mode, random inputs are generated with built-in Excel generators. This can quickly create interactive Monte Carlo simulations in Excel, in which thousands of trials are run before the user's finger leaves the <Enter> key. Since inputs are produced dynamically by Excel, the resulting output distributions will vary from run to run.

9.5.3.2 SIP Library Mode

SIP Library Mode utilizes precompiled Monte Carlo trials (SIPs) to represent the uncertainty in the model inputs. This guarantees repeatability of results across multiple users and trials and allows results to be aggregated across applications. A SIP Library may be either built into the model workbook or linked to as an external file, for example, in the cloud, so that many users share a common set of trials.

We used the SIP Library Mode for the squad enhancement design example model. This model does contain macros for data manipulation, but the simulation is entirely performed within the Data Table. Within our squad enhancement model, there are three types of uncertain data: simulation output data, elicited discrete probability distributions, and triangular distributions. The model used the INDEX function and the Data Table functionality to generate the random variates for each of the discrete and triangular distributions. We used the SIPmath™ Modeler Tool Add-In to define cells as inputs for the simulation data SIPs. We defined the cells that needed the discrete and triangular distributions as output cells in order to generate a SIP of random variates from each of these distributions. Finally, we defined the cells that contained the total value scores and life cycle costs for each alternative as outputs to generate their SIPs of random variates. The SIPmath™ Modeler Tool Add-In uses the Microsoft Excel sparkline feature to display the input and output data distributions within a cell (see Table 9.5 for an example).

9.5.4 Model Building Steps

This section outlines the steps we used to build our value and cost models using the SIPmath™ Modeler Tool Add-In. As a reminder, there are two major differences between SIPmath models in Excel and traditional Monte Carlo simulation. First, SIPmath is interactive in native Excel, in that a full simulation is run with each keystroke using the Excel Data Table. Thus, the simulation will not require macros and may be stored in an .xlsx file without the need for additional software. Second, the random trials may be run in advance and stored as auditable data in a SIP Library in the open SIPmath format. Among the tools at probabilitymanagement.org are macros that allow both the @RISK and Crystal Ball simulation packages to write their results directly into this format. The SIP libraries generated may be used in two ways: either built into the model workbook or linked as external files. The squad model has the library built in, so that it may be widely distributed without linking to other files. In a setting in which a new SIP Library is published periodically to reflect the latest probabilistic estimates, it makes more sense to have all users link to the same external file.

Before we discuss each step, we want to note that we should not use functions defined using Visual Basic for Applications (VBA). We can have macros that perform procedures in Excel, but they cannot be user-defined VBA functions because they will slow down the Data Table calculations considerably. To create our value functions, we used the INDEX and MATCH functions for the natural scales and the VLOOKUP function for the constructed scales; see the Excel model that accompanies this book for the exact formulas. We also note that INDIRECT formulas should be used sparingly in models, as they also slow down the Data Table. For more on building SIPmath models in Excel, visit the Tools page of ProbabilityManagement.org for videos, tutorials, and documentation.

Prior to using the SIPmath™ Modeler Tool Add-In, it is important to organize the model so that each value measure and cost component for each alternative has a unique name specified in a cell. We organized these names above the input data and to the side of our output data so that they are easily identified when defining our inputs and outputs. Figure 9.12 shows a screenshot of our model organization with the input names positioned above the input data cell entry area and the output names alongside the output data cells. At the lower right of Figure 9.12, there is a screenshot of the SIPmath™ Modeler Tools ribbon.


Figure 9.12 Value model named range data entry setup and SIPmath ribbon

Step 1: Initialize. Before we initialize, we must create a named range called “PM_Trials” that contains the number of trials in each of our SIPs; an appropriate place to name this range is at the top of the worksheet that contains the input distribution column data. We then press the “Initialize” button in the SIPmath ribbon, select the “In current workbook” radio button, specify the default number of bins we want to display in our graphics, and press OK. Selecting “In current workbook” tells the tool that the column data reside in the model workbook. You should now notice that “PMTable” and “SIP Chart Data” worksheets have been created for you. The “PMTable” worksheet will contain a Data Table that creates the columns of outputs. In cell A1, there is a named range called “PM_Index.” The “PM_Index” is the row index used for the Data Table row input cell. Next, we must name each input distribution column as a named range. An effective way to name a collection of column data with the column header at the top row is to use the “Create from Selection” Excel feature found in the “Formulas” ribbon tab under the “Defined Names” group.

Step 2: Define inputs. Our next step defines input distributions in each of the appropriate empty cells in Figure 9.12. These cells hold the alternative value measures whose input distributions will be defined as SIPs. We can define multiple inputs simultaneously if they are positioned contiguously; otherwise, we must define them separately. First, select the cell(s) in the alternative data entry area that need SIPs assigned to them. Then, in the SIPmath ribbon, click the “Define Inputs” button and designate the starting cells for the input names located above the entry area in Figure 9.12. These input ranges can be arranged as rows or columns; our data are arranged as columns. Under the window titled “Select Input,” select the named ranges that contain the input SIP data located in the “SIP Library” worksheet. Within the cells we selected, we should now see sparklines that show the input data distributions they were assigned.

Step 3: Define outputs. We now will define outputs for two types of data. For the first type, we must generate columns of data that result from our triangular distributions and discrete probability distributions. The second type will be for the actual value and life cycle cost outputs for each alternative.

9.5.4.1 Triangular and Discrete Probability Distribution Output Creation

Within our model, we have a worksheet named “Uniform Library” that has a collection of columns with 100 trials from a uniform distribution; the 101st row contains the mean, the 102nd row contains the 5th percentile, and the 103rd row contains the 95th percentile. We created these columns using the RAND Excel function and then copied and pasted them as values; we created a unique named range for each of them in the same way we did for the input distributions. We must ensure that we create the same number of trials for the uniform distributions as we do for the input distributions defined earlier. In another area of our workbook, we have triangular distribution formulas with the minimum, maximum, and mode parameter entry cells for each value measure and cost component that use them. For the discrete probability distributions, we use a probability table that contains a row for each constructed scale category and an ascending cumulative table that adds the probabilities for each category. The VLOOKUP formula has its last parameter set to “True” so that the function looks up the closest matching random number value (between 0 and 1) within the cumulative table and returns the category number. For each distribution, the cell assigned for the random number parameter uses an INDEX function with a named range that represents a unique uniform random number column as the array parameter and “PM_Index” as the row index parameter. In the “PMTable” worksheet, if we enter 101 as the “PM_Index,” we can show the mean value for all inputs and outputs in our model. For each cell in Figure 9.12 that uses the triangular and discrete distributions, we set them equal to the cells that provide the output for each of the distributions.
We then select each cell in the alternative data entry area shown in Figure 9.12, click “Define Outputs” in the SIPmath ribbon, assign the appropriate output named range, and click OK; we should now see sparklines that show the output distributions of the 100 trials of uniform random variates that were passed through each of the triangular and discrete distribution functions. You will now see the output data columns from these distributions in the “PMTable” worksheet.
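The inverse-transform logic behind these spreadsheet formulas can be sketched in Python. The triangular parameters and the cumulative probability table below are made-up illustrations, not values from the squad model.

```python
# Inverse transform sketch: map a uniform variate u in [0,1] to a variate
# from a triangular distribution or a discrete (constructed scale) category.
def triangular_inv(u, lo, mode, hi):
    """Triangular(lo, mode, hi) variate from a uniform variate u."""
    f = (mode - lo) / (hi - lo)  # cdf value at the mode
    if u < f:
        return lo + ((hi - lo) * (mode - lo) * u) ** 0.5
    return hi - ((hi - lo) * (hi - mode) * (1 - u)) ** 0.5

def discrete_inv(u, cumulative):
    """Return the first category whose cumulative probability covers u,
    analogous to the lookup against the ascending cumulative table."""
    for category, cum_p in cumulative:
        if u <= cum_p:
            return category
    return cumulative[-1][0]

# Notional draws from a shared uniform column.
x = triangular_inv(0.5, 10, 20, 40)                      # a cost variate
cat = discrete_inv(0.65, [(1, 0.2), (2, 0.7), (3, 1.0)]) # a scale category
```

In the workbook, the uniform variate comes from the "Uniform Library" column indexed by "PM_Index"; here it is simply the argument `u`.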

9.5.4.2 Value and Cost Output Creation

In order to create these output columns, we simply select the cells that contain the value and cost outputs, click “Define Outputs” in the SIPmath ribbon, and designate the output named ranges. We should now see sparklines for the resulting distributions. Table 9.5 shows a screenshot of the squad model input data entry area with the sparklines for each uncertain input.

9.6 Summary

This chapter demonstrated the integrated trade-off approach that simultaneously models value and cost in order to better identify value and risk. Trade-off decisions are present throughout the system life cycle; therefore, the trade-off analysis techniques we use have a high impact on the quality of our system decisions. The majority of value trade-off studies are deterministic and do not consider the types of uncertainties associated with a system decision. We used an influence diagram to express the types of decisions and uncertainties that influence system value and cost. Chapters 3 and 4 reviewed a variety of methods to model uncertainty and cost. In this chapter, we demonstrated how to use these methods to propagate uncertainties through the value and cost models with Monte Carlo simulations. We then used stochastic Pareto charts to visualize the Pareto Frontier, cumulative distribution function charts to display alternative risk profiles, and tornado diagrams to identify high impact value measures and cost components that are indirectly and directly allocated to system features. We learned that deterministic analysis does not tell the whole story and can mislead system decisions when we do not consider the types of uncertainties shown in our influence diagram. More often than not, it is unclear which alternative is the true winner in terms of value and cost because of the system decision uncertainties. The alternative risk profiles that cross in the cdf charts highlight the unclear winners that need further investigation. To better understand how to address alternative risk, we can trace the allocation of the high impact value measures and cost components identified by the tornado diagrams to system features. We can then invest more resources to improve these system features in order to reduce risk.
Finally, we mentioned a few Monte Carlo software packages that facilitate the integrated approach, discussed their advantages and disadvantages, and implemented our approach using a freely available software tool from ProbabilityManagement.org.

System decisions involve several stakeholders, multiple conflicting objectives, and a variety of different uncertainties. Our integrated trade-off approach incorporates all three of these aspects in order to help the wider community make better quality system trade-off decisions. We do this by providing decision-makers with all the critical trade-offs and risks associated with each alternative. Generally, the higher the value, the higher the risk; understanding where these risks reside facilitates a better quality system design decision.

9.7 Key Terms

  1. Constant Node: A number or a function that remains constant and is represented by a diamond within an influence diagram.
  2. Decision Node: A rectangle within an influence diagram that represents the set of choices the decision-maker must make.
  3. Functional Allocation: The allocation of functions to system components, parts, and system features that will perform them.
  4. Integrated Trade-Off Analysis: An approach that integrates uncertainty modeling with value and cost modeling in order to help understand value and risk while we analyze alternatives.
  5. Pareto Chart: A scatter plot with cost on the horizontal axis and value on the vertical axis; each dot represents an alternative's total life cycle cost and total value score.
  6. Pareto Frontier: The set of nondominated alternatives such that value or cost cannot be improved without degrading the other.
  7. Probability Management: A nonprofit organization that uses computer technology to address the Flaw of Averages through improvements in communication, calculations, and credibility of uncertainty estimates.
  8. Scenario: A sequence of events used to evaluate a system's performance in different environmental conditions.
  9. Stochastic Information Packet (SIP): Data arrays used to communicate uncertainties.
  10. Stochastic Tornado Diagrams: Compare the relative importance of each uncertain input variable with horizontal bars; the longer the bar, the higher the impact on the output variable's variation. The bars are sorted so that the longest bars are at the top; sorting the bars in this way makes the diagram look like a tornado. The low end of the bar is the average output variable from the subset of trials where the input is less than a specified lower percentile. Similarly, the high end of the bar is the average output variable from the subset of trials where the input is greater than a specified higher percentile.
  11. System Feature: The characteristics or design parameters that define each alternative. A system feature is analogous to what is known as a local property within physical architecture, a property that is local to a single system element. The settings of the system features define each system alternative.
  12. Technology Maturity: The technical readiness level of a system feature, component, or part that often drives the system design risk. Unprecedented systems with advanced system features typically have a low level of technological maturity.
  13. Uncertainty Node: Ovals within an influence diagram that represent uncertain information relevant to the decision; they could be a single probability value, a random variable, or a vector of data.
  14. Value Node: A hexagon within the influence diagram representing either the total value or total life cycle cost of an alternative.
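The stochastic tornado calculation described in entry 10 can be sketched in Python as follows. This is an illustrative sketch, not the book's Excel implementation: the `stochastic_tornado` helper, the input distributions, and the 30/70 percentile settings are all assumptions chosen for the example.

```python
import numpy as np

def stochastic_tornado(inputs, output, lo_pct=30, hi_pct=70):
    """Tornado bar ends per the glossary definition: the mean output
    over trials where an input falls below its lower percentile, and
    over trials where it falls above its higher percentile."""
    bars = {}
    for name, x in inputs.items():
        low_end = output[x <= np.percentile(x, lo_pct)].mean()
        high_end = output[x >= np.percentile(x, hi_pct)].mean()
        bars[name] = (low_end, high_end)
    # Sort by bar length so the longest bars sit at the top of the chart
    return dict(sorted(bars.items(),
                       key=lambda kv: abs(kv[1][1] - kv[1][0]),
                       reverse=True))

# Illustrative trials: one strong and one weak driver of the output
rng = np.random.default_rng(0)
speed = rng.triangular(40, 60, 130, 10_000)
noise = rng.uniform(0, 1, 10_000)
total = 0.9 * speed + 5 * noise
print(stochastic_tornado({"speed": speed, "noise": noise}, total))
```

Because `speed` explains most of the variation in `total`, its bar comes out longest and is sorted to the top, which is exactly the visual cue a tornado diagram provides.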

9.8 Exercises

  1. 9.1 This chapter is an example of the Decision Management process presented in Chapter 5.
    1. a. Identify the techniques that were used in the chapter for each of the 10 steps in the Decision Management process.
    2. b. Were any of the 10 steps not included in this chapter?
  2. 9.2 Integrated trade-off analysis model.
    1. a. Use Figure 9.6 to describe an integrated trade-off analysis model.
    2. b. Why does this chapter advocate an integrated trade-off analysis approach?
  3. 9.3 The squad enhancement model illustrated in this chapter used three types of uncertainty data: probability elicitation, distribution (triangular), and simulation output.
    1. a. Are there other types of uncertainty data that could be used in the integrated approach?
    2. b. Briefly describe the criteria a systems engineer should use to determine the most appropriate type of uncertainty data to use in the integration approach.
  4. 9.4 Briefly explain how to construct and how to interpret the results of the following three outputs of an integrated trade-off analysis model.
    1. a. Stochastic Pareto charts
    2. b. cdf charts (S-curves)
    3. c. Stochastic tornado charts
  5. 9.5 Briefly describe the advantages and disadvantages of using the SIPmath approach versus a proprietary Excel add-in to perform Monte Carlo simulation.
  6. 9.6 The following exercises walk you step by step through an example that builds a Monte Carlo simulation using the SIPmath Modeler Tools add-in for a simple value and cost model that uses the integrated approach discussed in this chapter. Note that when using any other Excel templates provided with the book for new work, be sure to delete any existing named ranges using the Named Range Manager feature.
    1. a. In Excel, create three value function charts for the following three value measures using the raw data and value measure scores given. For the natural-scale measures, use a line chart, and for the constructed-scale measure, use a column bar chart.
      Maximum Vehicle Speed (mph)    Vehicle Safety Star Rating     Miles per Gallon (mpg)
      (Natural Scale Measure)        (Constructed Scale Measure)    (Natural Scale Measure)
      Raw Data (X)   Value Scores    Raw Data (X)   Value Scores    Raw Data (X)   Value Scores
      45             0               1              0               8              0
      50             30              2              20              20             20
      60             60              3              50              30             65
      100            80              4              90              40             90
      130.001        100             5              100             50.001         100
    2. b. You are given three vehicle alternatives, called Baseline, Frontier, and Starlight, that you want to assess using the value functions defined in part (a). Baseline has a maximum vehicle speed of 60 mph, a vehicle safety star rating of 3, and 18 mpg. Frontier has a maximum vehicle speed of 90 mph, a vehicle safety star rating of 3, and 42 mpg. Starlight has a maximum vehicle speed of 100 mph, a vehicle safety star rating of 4, and 22 mpg. In Excel, create the value function formulas for the natural-scale value measures using the INDEX and MATCH Excel functions and for the constructed-scale value measure using a VLOOKUP Excel function. Note that we cannot use a macro-enabled function while using the SIPmath Modeler Tools. Calculate the value measure scores for each alternative. In addition, calculate the Hypothetical Best alternative using the MAX or MIN Excel function, depending on whether more is better or less is better. Finally, calculate the Ideal alternative.

      Use the following procedure to create the natural value measure function. For the Maximum Vehicle Speed natural measure, assume that the raw data column has a named range of RawSpeed, the value scores column has a named range of ValueSpeed, and the maximum vehicle speed for the Baseline alternative has a named range of BaseSpeed. To calculate the value for the natural measure, use the following Excel function: =INDEX(ValueSpeed,MATCH(BaseSpeed,RawSpeed))+(BaseSpeed-INDEX(RawSpeed,MATCH(BaseSpeed,RawSpeed)))*(INDEX(ValueSpeed,MATCH(BaseSpeed,RawSpeed)+1)-INDEX(ValueSpeed,MATCH(BaseSpeed,RawSpeed)))/(INDEX(RawSpeed,MATCH(BaseSpeed,RawSpeed)+1)-INDEX(RawSpeed,MATCH(BaseSpeed,RawSpeed))). Note that we do not have to name the data ranges for this formula to work; we assume that they are named as defined earlier in order to show the Excel function with readable range references. Also note that the bottom-most value in the table has 0.001 added to it. We must add a small amount to the last value in order for the INDEX and MATCH functions to work properly.
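      The piecewise-linear interpolation that this INDEX/MATCH formula performs can be sketched in Python. The `value_score` helper below is hypothetical (not part of the book's materials); it uses the Maximum Vehicle Speed breakpoints from part (a), and because the top breakpoint is handled explicitly, the 0.001 guard is unnecessary here:

```python
import bisect

def value_score(x, raw, scores):
    """Piecewise-linear value function equivalent to the INDEX/MATCH
    formula; raw and scores must be sorted in ascending raw order."""
    i = bisect.bisect_right(raw, x) - 1   # last breakpoint <= x (the MATCH step)
    if i >= len(raw) - 1:                 # at or beyond the top breakpoint
        return float(scores[-1])
    frac = (x - raw[i]) / (raw[i + 1] - raw[i])
    return scores[i] + frac * (scores[i + 1] - scores[i])

# Maximum Vehicle Speed breakpoints from part (a)
raw_speed = [45, 50, 60, 100, 130]
val_speed = [0, 30, 60, 80, 100]
print(value_score(60, raw_speed, val_speed))   # Baseline -> 60.0
print(value_score(90, raw_speed, val_speed))   # Frontier -> 75.0
```

      The Frontier result illustrates the interpolation: 90 mph falls between the 60 and 100 mph breakpoints, so its value is 60 + (30/40)(80 - 60) = 75.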

    3. c. Calculate the total value score for each alternative using the matrix weights for the value measures shown in the following table and create a value component chart using a stacked bar chart.
      Value Measure Matrix Weight
      Maximum vehicle speed (mph) 100
      Vehicle safety star rating 65
      Miles per gallon (mpg) 40
    4. d. Assign each alternative the life cycle cost shown in the following table and create a deterministic Pareto chart using a scatter plot with cost on the x-axis and total value on the y-axis.
      Alternative Life Cycle Cost ($1000s)
      Baseline 70
      Frontier 150
      Starlight 110
    5. e. For each value measure, perform a swing weight sensitivity analysis. Use an Excel data table to vary the value measure's matrix weight at 0, 20, 40, 60, 80, and 100, and calculate the resultant total value score for each alternative. Graph the results of the data table using a line graph; each alternative should have its own data line series showing the total value as the matrix weight varies from 0 to 100.
    6. f. Perform a Monte Carlo simulation using the SIPmath Modeler Tools add-in; ensure that the add-in is present within Excel. Start with the Excel file titled “Problem 6.xlsx.” Notice that there is a new worksheet named “SIP Library.” This worksheet contains the data used to perform the Monte Carlo simulation and a text box with a detailed description of the worksheet structure. We can perform a simulation for a set number of trials simply by using the Excel data table feature and the INDEX function; see the SIPmath Modeler Tools tutorials and user's guide on the Tools page of ProbabilityManagement.org for a detailed explanation of how to do this.

      To complete this exercise, follow the instructions in Steps 1–3 from Section 9.5.4. First, initialize the model using the instructions in Step 1; once complete, you should see two additional worksheets called “PMTable” and “SIPMath Chart Data.” Next, define the inputs for the Frontier and Starlight alternative vehicle speed raw data scores using the instructions in Step 2; these input cells are highlighted in green in the “Value Model” worksheet. Remember that the data is arranged in columns. You should see green sparklines in these cells once this step is complete. (The reader is referred to the online version of this book for color indication.)

      Next, each cell that contains the uncertain alternative raw data score in the “Value Model” worksheet should have a reference (set equal) to the cell under the “Output” titles in the “SIP Library” worksheet in columns D through H; these are the cells that use the discrete and triangular probability distributions. Next, follow the instructions in Step 3 to define outputs for each of the cells using the discrete and triangular probability distributions as well as the total value for the Frontier and Starlight alternatives; these output cells are highlighted in blue in the “Value Model” worksheet. You should see blue sparklines in these cells once this step is complete. In addition, you will see new named ranges and a data table within the “PMTable” worksheet and data used for chart construction in the “SIPmath Chart Data” worksheet.

    7. g. Create a stochastic Pareto chart using the template found in the Excel file titled “Problem 6.xlsx” within the worksheet titled “Stochastic Cost vs. Value Chart.” Follow the instructions within the worksheet. Once complete, copy the chart, paste it into the “Value Model” worksheet, and format it appropriately.
    8. h. Create a cumulative distribution chart for the Frontier and Starlight alternative total value scores. Select the two cells that contain the total value sparklines for the Frontier and Starlight alternatives. In the SIPmath Modeler Tools ribbon, select “Graphs.” For the Cumulative Chart Starting Location, select an area in the “SIPmath Chart Data” worksheet. Once complete, two line charts will be shown. Copy one data series by clicking its line in one chart and paste it into the other chart. You should now have one line chart with two data series. Delete the chart with only one data series. At the top of the table in the “SIPmath Chart Data” worksheet, adjust the chart data to the desired settings; ensure that each alternative has the same axis values, especially the minimum values. Format the chart as needed, then copy and paste it into the “Value Model” worksheet.
    9. i. Create a stochastic tornado diagram for the Frontier and Starlight total value scores using the template found in the Excel file titled “Problem 9.xlsm.” Creating the tornado diagrams in this template requires extensive use of the Excel INDIRECT function. Because the INDIRECT function significantly slows the performance of the Excel file when used extensively, the template provides a macro to apply the INDIRECT function where needed and paste special values to obtain the percentiles used to create the tornado diagrams. Follow the instructions within the worksheet titled “Tornados.” Set the lower percentile to 0.3 and the higher percentile to 0.7. Copy the stochastic tornado diagrams into the “Value Model” worksheet.
    10. j. After completing parts a–i, you should have a working model with the results of the Monte Carlo simulation. Answer the following questions once you have verified your model.
      1. 1. What insights can you identify from the value component chart? Provide some examples of how to address the value gap.
      2. 2. What can you conclude from looking at the deterministic Pareto chart?
      3. 3. How sensitive is the total value score to each of the value measure swing weights?
      4. 4. After performing the Monte Carlo simulation, what impact does the value measure uncertainty have on the overall assessment of each alternative?
      5. 5. Use the stochastic tornado diagrams to determine how best to mitigate the risk in value of the Frontier and Starlight alternatives.
      6. 6. Change the matrix weight of the vehicle safety star rating measure from 65 to 0. How do the answers to questions (1) through (5) change?
  7. 9.7 Perform a Monte Carlo simulation of the model from problem 6 using a proprietary Excel add-in. Compare the results of both Monte Carlo analyses using the following three outputs: stochastic Pareto charts, cdf charts (S-curves), and stochastic tornado diagrams.
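  The overall workflow in the exercises above — sample uncertain raw scores, map them through the value functions, and aggregate with normalized swing weights — can be sketched outside Excel as follows. This is an illustrative sketch only: the input distributions, the Frontier-like numbers, and the variable names are assumptions, not the data from the book's SIP Library.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Swing weights from part (c) of problem 6, normalized to sum to 1
weights = np.array([100.0, 65.0, 40.0])
weights /= weights.sum()

# Maximum Vehicle Speed value function breakpoints from part (a)
raw_speed = [45, 50, 60, 100, 130]
val_speed = [0, 30, 60, 80, 100]

# Illustrative uncertain inputs for a Frontier-like alternative
speed_mph = rng.triangular(80, 90, 100, N)   # assumed spread around 90 mph
safety_val = np.full(N, 50.0)                # 3-star rating maps to a value of 50
mpg_val = rng.triangular(80, 90, 95, N)      # assumed value spread for 42 mpg

# Propagate the uncertainty through the additive value model
total_value = (weights[0] * np.interp(speed_mph, raw_speed, val_speed)
               + weights[1] * safety_val
               + weights[2] * mpg_val)

# The 10th/50th/90th percentiles summarize the total-value S-curve (cdf)
print(np.percentile(total_value, [10, 50, 90]).round(1))
```

  Repeating this propagation for each alternative yields the trial data behind the stochastic Pareto chart and the cdf comparisons, which is what the SIPmath data table automates inside Excel.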

References

  1. Buede, D.M. (2000) The Engineering Design of Systems: Models and Methods, Wiley-Interscience.
  2. Keeney, R.L. (1992) Value-Focused Thinking: A Path to Creative Decision Making, Harvard University Press, Cambridge, MA.
  3. Mankins, J.C. (1995). Technology readiness levels. White Paper, April, 6.
  4. Parnell, G.S., Bresnick, T.A., Tani, S.N., and Johnson, E.R. (2013) Handbook of Decision Analysis, John Wiley & Sons.
  5. Parnell, G.S., Driscoll, P.J., and Henderson, D.L. (2011) Decision Making in Systems Engineering and Management, 2nd edn, John Wiley & Sons, Hoboken, NJ.
  6. Parnell, G., Jackson, J., Lehmkul, L., and Engelbrecht, J. (1999) R&D concept decision analysis: using alternate futures for sensitivity analysis. Journal of Multi-Criteria Decision Analysis, 8, 119–127.
  7. National Defense Industrial Association (2011) Systems Engineering Division. Final Report of the Model Based Engineering (MBE) Subcommittee. Arlington, VA. From http://www.ndia.org/Divisions/Divisions/SystemsEngineering/Documents/Committees/M_S%20Committee/Reports/MBE_Final_Report_Document_(2011-04-22)_Marked_Final_Draft.pdf (accessed 23 September 2016).
  8. Savage, S.L., Scholtes, S., and Zweidler, D. (2006) Probability Management. OR/MS Today, 33 (1). From http://viewer.zmags.com/publication/90ffcc6b#/90ffcc6b/29 (accessed 23 September 2016).
  9. Savage, S.L. (2009) The Flaw of Averages, John Wiley & Sons, Hoboken, NJ.
  10. Savage, S.L. (2012) Distribution Processing and the Arithmetic of Uncertainty. Analytics Magazine, November/December 2012.
  11. SEBoK (2015) BKCASE Editorial Board. Guide to the Systems Engineering Body of Knowledge (SEBoK), version 1.4, R.D. Adcock (EIC). Hoboken, NJ: The Trustees of the Stevens Institute of Technology. From http://sebokwiki.org/w/index.php?title=Decision_Management&oldid=50860 (accessed 16 June 2015). BKCASE is managed and maintained by the Stevens Institute of Technology Systems Engineering Research Center, the International Council on Systems Engineering, and the Institute of Electrical and Electronics Engineers Computer Society.