11

Threat Analysis in Distributed Environments

Hengameh Irandoust, Abder Benaskeur, Jean Roy and Froduald Kabanza

CONTENTS

11.1  Introduction

11.2  Some Definitions

11.3  Threat Analysis: Primary Concepts

11.3.1  Action, Event, and Reference Point

11.3.2  Intentionality

11.3.3  Impacts and Consequences

11.4  Threat Analysis as an Interference Assessment Problem

11.4.1  Intent–Capability–Opportunity Triad

11.4.1.1  Intent Indicators

11.4.1.2  Capability Indicators

11.4.1.3  Opportunity Indicators

11.4.1.4  Dual Perspective

11.4.2  Threat Analysis in the Data Fusion Model

11.5  Goal and Plan Recognition

11.6  Threat Analysis as a Plan Recognition Problem

11.6.1  Plan Recognition

11.6.2  Plan Recognition Approaches

11.6.2.1  Symbolic Approaches

11.6.2.2  Nontemporal Probabilistic Approaches

11.6.2.3  Probabilistic Approaches with a Temporal Dimension

11.6.2.4  Mental State Modeling

11.6.3  Issues in Threat Analysis

11.7  Threat Analysis in Military Operations

11.7.1  Task Complexity

11.7.2  Contextual Factors

11.7.2.1  Uncertainty

11.7.2.2  Time

11.7.2.3  Nature of the Threat

11.7.2.4  Operational Environment

11.8  Threat Analysis in Distributed Environments

11.8.1  Centralized and Decentralized Control

11.8.2  Advantages of Distribution

11.8.3  Operational Challenges

11.8.4  Analytical Challenges

11.8.5  Collaboration Challenges

11.8.6  Threat Analysis and Network-Centric Operations

11.9  Discussion

References

11.1  INTRODUCTION

A threat to a given subject (a “subject” being a person, a vehicle, a building, a psychological state, a nation, the economy, the peace, etc.) is an individual, entity, or event that can potentially hurt, damage, kill, harm, or disrupt this subject itself, or some other subjects (assets) for which this particular subject has concern. Recent years have seen a wide range of threats requiring surveillance, mitigation, and reaction in domains as diverse as military operations, cyberspace, and public security. Although each context has its own particularities, in all cases one attempts to determine the nature of the threat and its potential to cause some form of damage.

Threat analysis consists in establishing the intent, capabilities, and opportunities of individuals and entities that can potentially put a subject or a subject’s assets in danger. Based on a priori knowledge and dynamically inferred or acquired information, threat analysis takes place in situations where there is indication of the occurrence of an event that can possibly harm a given subject or asset of value (or set thereof).

Threat analysis involves the integration of numerous variables and calls upon several reasoning processes such as data fusion, intent, capability, and opportunity estimation, goal and/or plan recognition, active observation, etc. Often performed under time-constrained and stressful conditions, the task has a cognitive complexity that can seriously challenge an individual, hence the automation of certain aspects of threat analysis in the operational domains.

The process of threat analysis can be performed by a single agent (human or software), but it can also be carried out by a team of agents, distributed over a geographic area, observing a situation from different perspectives, and attempting to merge their interpretations. This situation, while enabling information superiority, introduces a new set of challenges related to interoperability and inter-agent information sharing and collaboration. Similarly, the threat may itself comprise multiple agents acting in coordination, which can significantly increase the difficulty of recognizing their common intent or plan. Thus, the challenges of threat analysis multiply significantly as one moves from a one-on-one to a many-on-many configuration.

In the following sections, the problem of threat analysis is discussed from a theoretical perspective, while illustrating the observations by examples from the military domain.

The primary concepts of threat analysis, such as actions, goals, intentionality, consequences, and reference point, are first discussed. Threat analysis is then addressed as an interference assessment problem where agents have to assess situations considering the intent, capabilities, and opportunities of their adversaries. By extending the scope of threat analysis to goal and then to plan recognition, it is shown that threat analysis can be viewed as an abduction problem where the observing agent is engaged in a cycle of evidence gathering and formulation of best-explaining hypotheses. This conceptual characterization enables the reader to measure the inherent difficulty of threat analysis regardless of the context of operations. The modeling frameworks and algorithmic techniques relative to the different approaches, and to plan recognition in particular, are extensively discussed, highlighting the challenges of automating the threat analysis task. Next, threat analysis is presented in the context of military operations. The tasks to be performed and their complexity are discussed in relation to time, uncertainty, the nature of the threat, and other contextual factors.

After the problem has been grounded in a military operational setting, the complexity of threat analysis in distributed environments is described, introducing the challenges of multi-threat environments and collaborative threat evaluation. The latter are analyzed from both situation analysis and collaboration perspectives. Finally, the operational challenges of threat analysis in network-centric operations are evaluated. Thus, throughout the document, the threat analysis problem is described in multiple contexts and at an increasing level of complexity.

11.2  SOME DEFINITIONS

A threat can be an individual, a physical entity, or an event that can potentially harm some asset of value, which is of concern to one or several agents.

It is generally accepted by the community working on the threat analysis problem (Paradis et al. 2005, Roy 2012, Steinberg 2005) that three concepts are central to the notion of threat. To constitute a threat, an entity must possess the intent to cause harm (or be intended to cause harm), as well as the capability and opportunity to achieve this intent.

•  Intent is defined as the goal of the threat. Intent assessment determines (using all available pieces of evidence) whether the threatening entity intends to cause harm.

•  Capability is defined as the ability of the threatening entity to achieve its goal and/or plan (or part thereof) as determined by the intent.

•  Opportunity is defined as the existence in the environment of the required preconditions for the threat’s goal/plan to succeed.

It is our contention that a threat can be defined along five dimensions, which capture these key concepts. These are

1.  Negativity: the notion of a threat evokes and involves only negative connotations such as danger, harm, evil, injury, damage, hazard, destruction, loss, fear, dread, etc.

2.  Intentionality: a threat can only be considered as such if it is intended so by a given goal-oriented and rational agent. Otherwise, there is danger and not threat.

3.  Potential: to be a threat, an agent or entity must have the capability and opportunity to inflict the negative effect it intends to.

4.  Imminence: a threat is always perceived, by the agent expecting or observing it, as being in progress toward achieving its goal. Once the harmful event has occurred, it is no longer a threat.

5.  Relativity to a point of reference: a threat is always considered as such relatively to its target(s), and the level of harm it can inflict can only be measured relatively to that point of reference and not in absolute terms. Threats are modeled in terms of potential and actualized relationships between threatening entities and threatened entities, or targets (Steinberg 2005).

Concerning dimensions 1 and 5, it must be added that causing harm includes causing distraction or negatively interfering with the goal or objectives of an agent. Yet, negative interference must always be measured relative to the value a given agent attaches to its goal. In a situation of threat, what is threatened is a crucial goal of some agent, whether that goal is to change or to preserve a certain state of affairs. Indeed, one cannot talk of threat for negative events that do not destroy, harm, or damage assets that are of utmost importance to one or more agents. Therefore, the expression negative impact must be interpreted relative to a crucial goal.

Threat analysis has been defined in Roy (2012) as

The analysis of the past, present and expected actions of external agents, covering the overall behaviour process of these agents from desires to effects/consequences, to identify menacing situations and quantitatively establish the degree of negativeness of their impact on the state and/or behaviour process of some agent of concern, and/or on some valuable human/material assets to be protected, taking into account the defensive actions that could be performed to reduce, avoid or eliminate the identified menace.

In operational environments, threat analysis is defined as the problem of determining the level of threat and the level of priority associated with it in a given situation. The level of threat indicates to what extent an entity is threatening. The level of priority indicates how much attention an observer should devote to that entity.

One should note that there are also the debatable notions of “inherent threat value” and “actual threat value” (also called actual risk value in Roy et al. [2002], Roy [2012]). The former is determined without consideration of a countermeasure/defensive action, whereas the latter is established with consideration of defensive actions. In the latter case, one could also talk of “residual” threat value to refer to the threat (or risk) that remains even after a defensive action.
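
To make these notions concrete, the following small sketch (in Python) shows how an inherent threat value might be discounted by the expected effectiveness of a defensive action to yield an actual, or residual, value. The multiplicative scoring rule and all numerical values are hypothetical illustrations and are not part of the cited models.

def inherent_threat_value(intent, capability, opportunity):
    """Hypothetical scoring: each factor is a degree of belief in [0, 1].

    A threat needs all three elements, so a conjunctive (multiplicative)
    combination is used here; real systems may use other combination rules.
    """
    return intent * capability * opportunity

def actual_threat_value(inherent, defence_effectiveness):
    """Residual threat once a defensive action is taken into account.

    defence_effectiveness is the assumed probability that the chosen
    countermeasure defeats the threat.
    """
    return inherent * (1.0 - defence_effectiveness)

# Illustrative values only.
inherent = inherent_threat_value(intent=0.9, capability=0.8, opportunity=0.7)
residual = actual_threat_value(inherent, defence_effectiveness=0.6)
print(f"inherent = {inherent:.2f}, residual = {residual:.2f}")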

11.3  THREAT ANALYSIS: PRIMARY CONCEPTS

This section revisits some of the basic notions that underlie the threat analysis problem.

11.3.1  ACTION, EVENT, AND REFERENCE POINT

The process of threat analysis may involve the observation of an action or event, which in turn involves one or several state changes in the environment. The consequences of such an action/event impact the entities and individuals concerned by the event in different ways. Thus, a given action may constitute a threat for one agent, be indifferent to another, and be an opportunity for still another. Consequences of actions are therefore positive or negative depending on the reference point (agent/entity) being considered. Moreover, the agents and entities concerned may be impacted by the change at different points in time. One agent may be threatened by an action instantaneously, as it occurs, while another may be affected by it only after some period of time. The effect of a negative action on different reference points thus varies in time and space. Often, the nature and magnitude of some state changes and/or consequences depend on the geometry (e.g., the proximity) between the “effector” (e.g., an agent performing some action) and the “affected” (e.g., a particular asset). As an example, consider the degree of severity of the explosion of a bomb as a function of the target proximity. Finally, actions can be viewed at different levels of granularity. An action can be perceived as being part of a more global action or event, which could be considered as a plan, or it can be a punctual and bounded occurrence.

11.3.2  INTENTIONALITY

Actions or events can be intentional or unintentional (e.g., potential natural disasters, accidents, or human errors). While unintentional actions can pose a danger to an agent or entity, only intentional actions can be considered as threats. Thus, threat analysis is concerned with characterizing, recognizing, and predicting situations in which a willful agent intends to do harm to some subject. However, actions intended for one reference point can also impact other agents and entities. Collateral damage and fratricide are examples of such unintended and unfortunate side effects.

The “belief–desire–intention” (BDI) model (Bratman 1987), used by a part of the intelligent agent community, is a useful paradigm for a practical approach to the problem of intentionality or intent assessment. Roughly speaking, beliefs represent an agent’s knowledge. Desires express what the agent views as an ideal state of the environment; they provide the agent with motivations to act. Intention lends deliberation to the agent’s desires. Thus, intentions are viewed as something the agent has dedicated itself to trying to fulfill. They are those desires to which the agent has committed itself.

Bratman (1987) argues that unlike mere desires, intentions play the following three functional roles. Intentions normally pose problems for the agent; the agent needs to determine a way to achieve them. Intentions also provide a “screen of admissibility” for adopting other intentions. Whereas desires can be inconsistent, agents do not normally adopt intentions that they believe conflict with their present and future-directed intentions. Agents “track” the success of their attempts to achieve their intentions. Not only do agents care whether their attempts succeed but they are disposed to replan to achieve the intended effects if earlier attempts fail.

Castelfranchi (1998) defines goal-oriented or intentional agents or systems along the same lines. A goal is a mental representation of a world state or process that is a candidate for (1) controlling and guiding action by means of repeated tests of the action’s expected or actual results against the representation itself, (2) determining the action search and selection, and (3) qualifying its success or failure.

Intentionality is one of the core concepts used to analyze the notion of cooperation (Bratman 1987). An agent cannot be considered as cooperative if it does not intend to be so, even if its actions incidentally further the goals of another agent. Likewise, a threatening agent cannot be considered as such without intention, even if its actions compromise the goals of another agent.

11.3.3  IMPACTS AND CONSEQUENCES

All actions performed by an agent perturb the environment, i.e., they produce some alterations of the state of the environment (including the state of other agents). Actions and the state changes resulting from their execution play a key role in any discussion on impact assessment and threat analysis.

The impact of an action is always relative to the perspective from which it is viewed. However, not all targets have the same value for the adversary, and the latter’s intent, capability, and opportunity depend on the type of target being considered. Actions can be planned and executed to produce an overall broad effect, e.g., to demoralize the enemy, or they can be designed to produce a very specific effect, e.g., a high-precision lethal attack. As the “vulnerability” of the target decreases, the required capabilities to affect it increase. As the “importance” of the target increases, so does the adversarial intent to affect it. Opportunities may also depend on the nature of the target. In an adversarial context, the subjects of threat will primarily protect their high-value assets. For example, in a naval task force, the oil tanker would be such an asset. All the other platforms would consider this particular platform to be the asset to protect when acting as an operational unit. However, it becomes very difficult to determine such assets or vulnerabilities in more complex systems. As a matter of fact, the impact of a threat instantiated as an attack can be physical (destruction, injury, death, etc.), psychological (instability, fear, distress, etc.), social (chaos), or economic (crash, cost, etc.), with one level affecting the other through complex interdependencies, leading to unpredictable consequences.

11.4  THREAT ANALYSIS AS AN INTERFERENCE ASSESSMENT PROBLEM

Let us consider two agents R and B, by reference to Red (enemy) and Blue (own or friendly) forces in the military domain. The term “agent” is used to refer to active, autonomous, goal-oriented entities.

In a world where agents co-exist, their actions, driven by inner or contingent goals, can accidentally or purposefully interfere with the actions of other agents. Therefore, in any situation, an agent needs to monitor its environment, and to assess and manage interferences with the surrounding agents and entities.

The reasoning an Agent B performs in a situation of possible negative impact is dependent on its knowledge of the type of situation and the time available for reasoning. In a situation of immediate danger, where Agent B observes Agent R’s capability and opportunity to harm, Agent B’s priority will be to avoid that situation.

Whether it is an out-of-control car heading toward us or a missile launched at our own ship, the first reaction is to respond to that situation so as to avoid it. The problem of the intent or the goal of the source of danger becomes irrelevant during that time frame.

In a less time-pressured environment and generally more complex situation, the negative interference of Agent R’s action has to be evaluated in the light of a higher-level goal, so that Agent B can assess the scope of that action and possibly anticipate its consequences.

Similarly, in cases of positive interference, depending on the situation, an agent could simply enjoy the fortunate circumstances or, in order to benefit on a larger scale or in the long term, attempt to establish the goal of the other agent(s) and assess the possibility of cooperation (mutual benefit) or exploitation (unilateral benefit).

Within this larger context, threat analysis, concerned with purposeful actions endangering crucial goals, involves reasoning on Agent R’s capability/opportunity, intent/goal, and/or plan.

11.4.1  INTENT–CAPABILITY–OPPORTUNITY TRIAD

Essentially, to establish Agent R’s level of threat (if in fact a threat exists), it is sufficient to determine its intent, capability, and opportunity to deliver damage to Agent B. Intent is an element of the agent’s will to act. Capability is the availability of resources (e.g., physical and informational means) sufficient to undertake an action of interest. Opportunity is the presence of an operating environment in which potential targets of an action are present and are susceptible to being acted upon.

A threat can be viewed as an integral whole constituted of intent, capability, and opportunity, the disruption of any of the constituents entailing the disruption of the whole (Little and Rogova 2006). From this perspective, viable threats exist when all three essential threat elements (intent, capability, opportunity) are present and form a tri-partite whole via relations of foundational dependence. Potential threats exist when at least one essential part (intent, capability, or opportunity) exists, but its corresponding relations are not established. In this sense, potential threats are not in a state of being; rather, they are in a state of becoming, where portions of the whole are constantly unfolding and have yet to be actualized at a given place or time.

In a military operational setting, collated information from all available sources is interpreted as part of the overall analysis of threat information in an attempt to discern patterns that may provide evidence as to the hostile entity’s intent, capability, and opportunity. The threat will only have the opportunity to deliver its damage provided that it can both detect and track its target and can reach it (Paradis et al. 2005). Here, evidence consists of indicators of the presence of intent, of capability, and/or of opportunity of Agent R to harm Agent B.

Let us take the example of an air threat. Note that some of the threat indicators can be directly observed (e.g., bearing, range, speed, etc.), while others need simple calculations (e.g., Closest Point of Approach—CPA, flight profile) or advanced calculations (e.g., third party targeting), and still others could require knowledge-based inference. Indicators are derived from track characteristics, tactical data, background geopolitical situation, geography, intelligence reports, and other data. The indicators observed may be relative to any of the three key ingredients.
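
As an illustration of an indicator obtained through a simple calculation, the following sketch computes the closest point of approach of a track relative to own ship from the track’s relative position and velocity, assuming straight-line, constant-speed motion in a flat two-dimensional frame; an operational combat system would of course use a more elaborate kinematic model.

import math

def closest_point_of_approach(rel_pos, rel_vel):
    """Time to CPA and range at CPA for a track, relative to own ship.

    rel_pos: (x, y) of the track relative to own ship, e.g. in metres.
    rel_vel: (vx, vy) of the track relative to own ship, e.g. in m/s.
    Assumes both platforms keep course and speed (constant velocity).
    """
    rx, ry = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                      # no relative motion: range is constant
        return 0.0, math.hypot(rx, ry)
    t_cpa = -(rx * vx + ry * vy) / v2  # time minimising the separation
    t_cpa = max(t_cpa, 0.0)            # CPA already passed -> use current range
    dx, dy = rx + vx * t_cpa, ry + vy * t_cpa
    return t_cpa, math.hypot(dx, dy)

# Hypothetical inbound track 10 km to the east: it passes within a few hundred metres.
t, d = closest_point_of_approach((10_000.0, 2_000.0), (-300.0, -50.0))
print(f"time to CPA = {t:.0f} s, range at CPA = {d:.0f} m")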

11.4.1.1  Intent Indicators

Within the intent–capability–opportunity triad, intent and its relationship to actions is the most complex element to assess. Intent assessment can involve both data-driven methods (i.e., explaining the purpose of observed activity) and goal-driven methods (i.e., seeking means to assumed ends) (Laird et al. 1991).

Intent can be derived based on the observation of current behavior, but also on the basis of information provided by other sources (e.g., intelligence). It can also be directed by a priori knowledge gained about an agent, or by one’s own experience of its past actions.

To assess intent, one generally verifies if a certain number of criteria are satisfied, i.e., if a number of indicators are available. To do this, one has to create predictive models of behaviors that a purposeful agent might exhibit and determine the distinctive observable indicators of those behaviors (Steinberg 2007). In the military domain, while some indicators such as the track position, speed, identity, or responses (or the absence thereof) to Identification Friend or Foe (IFF) interrogations are readily available from the tactical picture, a priori databases, or from communications with other units in the force, others such as complex behaviors, e.g., threat maneuvers, tactics, group composition, and deception, can be very hard to analyze.

11.4.1.2  Capability Indicators

Capability determines the possibility for a given agent to carry out its intent. This concept refers to inherent or structural capability and can be measured independently of any particular situation, such as the lethality of a missile. Opportunity, on the other hand, refers to situational contingencies.

Capability indicators are generally available from a priori data (e.g., intelligence, databases, etc.). Observations made during operations serve to confirm the a priori information on the threat capability (e.g., characteristics of platforms, sensors, and weapons). It must be noted that one of the challenges with capability evidence gathering and exploitation arises when dealing with asymmetric threats (see Section 11.7.2.3.3).

11.4.1.3  Opportunity Indicators

Opportunity, which we also refer to as the “situational capability,” is the presence of favorable factors for actions to occur (Roy et al. 2002), and thus depends on the dynamics of the situation. Assuming that Agent R has the intent and the (structural) capability to inflict harm to Agent B, several conditions may be required in the environment in order for Agent R to make the delivery of this harm possible.

Like intent indicators, some opportunity indicators are readily available from the tactical picture or a priori data, some are easily calculated, and others are much more difficult to determine. Indeed, some instances of opportunity assessment require a more elaborate predictive analysis of the agent’s behavior, e.g., analyzing a trajectory taking into account the engagement geometry, dynamic models of the entities in the volume of interest, and potential obstructions.

11.4.1.4  Dual Perspective

Both the capability and the opportunity of Agent R are directly related to the vulnerability of Agent B. This vulnerability can also be either structural or situational. Structural vulnerability is a function of the very nature of Agent B and directly determines, and is impacted by, the capability of Agent R. The situational vulnerability of Agent B offers opportunity to Agent R. It can be expressed in terms of its observability, i.e., the extent to which it can be seen/sensed by Agent R, and its reachability, which is the likelihood that it will be reached and affected by the action of Agent R. If these two conditions do not hold simultaneously, then Agent R will not be considered as having the opportunity to deliver harm.

Opportunities for action can be characterized by an evaluation of the constraints imposed by the accessibility or vulnerability of targets to such actions (Steinberg et al. 1998). Agent R can acquire opportunities actively (e.g., through purposeful activities such as gaining knowledge of the plans of Agent B, gaining an advantageous spatial position, performing deception actions, etc.) or passively (e.g., through the presence of environmental factors such as weather, cover, the presence of noncombatants, the terrain, etc.).

Similarly, threat analysis can be carried out through passive observation, but also proactively. For example, one can generate information through stimulative intelligence (Steinberg 2006), which is the systematic stimulation of red agents or their environment to elicit information (Steinberg 2007). Such stimulation can be physical (e.g., imparting energy to stimulate a kinetic, thermal, or reflective response), informational (e.g., providing false or misleading information), or psychological (e.g., stimulating perceptions, emotions, or intentions).

It must be added that the risk assessment of Agent B, when threatened by Agent R, is not only a function of the intent, capability, and opportunity of the latter to harm it, but also of the feasibility of Agent B’s own options for defending itself. Thus, the risk would be low if these two balance out.

11.4.2  THREAT ANALYSIS IN THE DATA FUSION MODEL

According to the data fusion model maintained by the Joint Directors of Laboratories’ Data Fusion Group (JDL DFG), threat analysis comes under “impact assessment,” which has to do with the estimation and prediction of effects on situations of planned or estimated/predicted actions by the participants (Steinberg et al. 1998). Per the revised JDL data fusion model, the level-3 data fusion process, originally called “Threat Assessment” (White 1988), has been broadened to that of impact assessment. Impact assessment, as formulated in the JDL DFG model, is the foundation of threat analysis (Roy et al. 2002).

Threat analysis in this framework involves assessing threat situations to determine whether adversarial events are either occurring or expected (Steinberg et al. 1998). Threat situations and threat events are inferred on the basis of the attributes and relationships of the agents involved. Estimates of physical, informational, and perceptual states of such agents are fused to infer both actual and potential relationships among agents. By evaluating and selecting hypotheses concerning agents’ capability, intent, and opportunity to carry out an attack, the threat analysis system will provide indications, warnings, and characterizations of possible, imminent, or occurring attacks.

11.5  GOAL AND PLAN RECOGNITION

The process of threat analysis can be carried out by an Agent B trying to infer the goal of an Agent R from a sequence of observations. This can be viewed as an ongoing dynamic process of evidence gathering and hypotheses formulation, as illustrated in Figure 11.1.

Agent B observes the environment and the actions of Agent R (both agents’ actions impact the environment), trying to infer its goals. It perceives evidence that confirms or contradicts what it hypothesizes as being Agent R’s goals. Hypotheses are put forth, strengthened, or discarded, based on Agent B’s expectations regarding Agent R’s current goals and the new evidence that is observed/sensed. To generate hypotheses about Agent R’s goals (match expectations with observations), Agent B uses its model of the situation and its knowledge about the adversary. This model is fed by a priori knowledge (background knowledge on the potential behaviors and capabilities of Agent R, experience of past cases, high-level information, etc.). Agent B takes action on the basis of its hypotheses about the current situation, as generated by the use of the model and the observations. Solving the problem may involve not only reasoning about past behaviors, as indicated by the observations to date, but also hypothesizing over likely future behaviors. Agent B’s actions affect, in turn, the environment and the future actions of Agent R.

Image

FIGURE 11.1 Threat analysis: the blue perspective.
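
The cycle of Figure 11.1 can be sketched as a simple update loop, as below. The scoring scheme and the toy support model are placeholders (a fielded system would rely on one of the probabilistic models of Section 11.6.2); the sketch only conveys the structure of the cycle: observe, score the competing goal hypotheses against the new evidence, and retain the best-supported ones.

def update_hypotheses(hypotheses, evidence, model):
    """One pass of the evidence-gathering / hypothesis-formulation cycle.

    hypotheses: dict mapping a candidate goal of Agent R to its current score.
    evidence:   list of newly observed indicators.
    model:      callable giving how strongly an indicator supports a goal,
                e.g. model(goal, indicator) in [-1, 1]  (hypothetical).
    """
    for goal in hypotheses:
        for indicator in evidence:
            hypotheses[goal] += model(goal, indicator)   # strengthen or weaken
    # Discard hypotheses that have fallen below a (hypothetical) threshold.
    return {g: s for g, s in hypotheses.items() if s > 0.0}

# Toy model: a hostile goal is supported by an inbound heading, contradicted by IFF.
def toy_model(goal, indicator):
    support = {("attack", "inbound_heading"): 0.5,
               ("attack", "friendly_iff"): -1.0,
               ("transit", "inbound_heading"): 0.1}
    return support.get((goal, indicator), 0.0)

beliefs = {"attack": 0.2, "transit": 0.2, "reconnaissance": 0.2}
beliefs = update_hypotheses(beliefs, ["inbound_heading"], toy_model)
print(beliefs)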

Goal recognition, on the basis of action observation, is a very complex problem. First, it must be noted that agents generally pursue consistent goals at different levels of abstraction. In some contexts, such as military operations, high-level goals are easier to figure out than low-level goals that must be recognized for a particular situation. For example, while the strategic goals of a nonfriendly country may be widely known—hence the identification of its assets as “hostile” prior to any intent assessment—the tactical goals (and the plans) or punctual objectives of that agent may be very difficult to discern in the field.

In a threat analysis context, Agent B, using its model of the situation and based on its observations, attempts to determine the relationship between the actions and goals of Agent R. Goal and/or plan recognition in this context, where Agent B attempts to discern a sequence of goal-directed actions, can be problematic in several regards. In effect, several misconceptions regarding the actual goal of the adversary Agent R are possible, which may be due either to Agent B’s flawed perception of the situation or the reasoning it performs given its model of the situation (including its model of Agent R).

Various perception problems can arise. The action of interest may not be fully observed because of the imperfection that is inherent in the perception and identification of actions, whether by humans or nonhuman sensors. Agent B may fail to see some actions or may see arbitrary subsets of the actual actions (partial observability). It may not be able to distinguish actions of interest from clutter (activities of other agents or entities in the environment). It may also be reasoning on an action, a portion of which has not yet been observed.

Agent B can also make errors related to the use of the model (Figure 11.1):

1.  Missing goal: A goal pursued by Agent R may be completely unknown to Agent B (B has no representation of that goal in its model). Another situation is when the goal is represented in Agent B’s model but has been discarded because B assumes that such a goal cannot be pursued. This occurs, for example, when B makes an assumption of rationality. We generally consider that other individuals follow the same line of reasoning as we do and that agents behave based on decisions that are in accordance with their reason, i.e., their proper exercise of the mind. However, in the case of terrorism and asymmetric threats, one is often confronted with behavior that is based on decisions that can be qualified as irrational (Roy et al. 2002).

2.  Wrong inference on the structure of actions and goals: The establishment of an action–goal relationship can be very complex. Consider the following cases: (1) an action of Agent R can contribute to several goals; (2) the goal of the action is rightly identified, but that action is only the initial phase of a higher-order action, and thus contributing to another goal; (3) a goal is dismissed because the action’s conditions are not respected (duration, precondition, etc.); (4) Agent R is performing interleaved actions, i.e., a set of actions observed sequentially by Agent B are performed by Agent R in the execution of different plans while pursuing different goals. For example, consider observing a person moving in a house and performing bits of different plans one after the other (e.g., clean up, write down two or three items on the grocery list, do some cooking, write down another item on the list, go back to cooking, etc.).

3.  Model manipulation: Sometimes, agents whose plans and goals we attempt to identify may help us in our recognition process by making their goals as explicit as possible. However, in an adversarial context, deception is the rule. Agent R can attempt to dissemble, misdirect, or otherwise take actions to deliberately confuse Agent B. It does so by using its own model of Agent B’s beliefs and reasoning process.

The cases discussed in item 2 come under the problem of plan recognition. In adversarial contexts, goal recognition generally implies some degree of plan recognition, as both parties achieve their goals through the accomplishment of a course of actions. If Agent B determines that the action of interest is not a single isolated action but is rather part of a plan, then it needs to organize the observed actions into a goal-oriented sequence. At the same time, if several opponents are involved, i.e., Agent R is a member of a team, then the role of each team member in the higher-level action must be determined.

Problems of perception can also be particularly problematic for plan recognition, which is an incremental inference process where the validity of a hypothetical plan can only be confirmed by the observation of significant elements or actions, or at least portions of them.

One of the major difficulties in goal and plan recognition, excluding those already mentioned, is that every situation is dynamic and constantly changing, and even more so in a battlespace. This has important consequences on Agent R’s actions and on Agent B’s interpretation of those actions and selection of defensive actions. Thus, Agent R may abandon its plan, change its initial plan (e.g., change resources, course of actions, etc.) to adapt it to the new circumstances (which may be the outcome of Agent B’s actions), decide to act opportunistically, etc.

As previously discussed, Agent B has to assess the impact of Agent R’s actions and goals on its own goals, plans, and on the environment (including neutral actors), whether Agent R’s plan is recognized or not. More specifically, Agent B has to determine if Agent R’s goal can be achieved given Agent R’s capability and opportunity, and its own capability to defend itself.

11.6  THREAT ANALYSIS AS A PLAN RECOGNITION PROBLEM

Establishment of hostile intent may not be enough in the evaluation of a threat event, as the situation awareness needed for threat analysis requires that Agent B be able to organize the observed actions into a course of actions, and to some extent predict the evolution of the situation. This means that Agent B must engage in some kind of plan recognition. Note that recognizing the plan of a threat implies that, to a certain extent, its intent, capability, and opportunity have been recognized, but this implication is not true the other way around.

Unlike threat analysis, the plan recognition field is concerned with plans carried out by agents in general and not only non-friendly ones. Adversarial plan recognition can be brought close to threat analysis, although an adversary is not necessarily a threat. Also, while the plan recognition community is interested in the recognition of a plan as a process constituted of a sequence of goal-oriented actions (which can in certain cases be suspect or hostile), the threat analysis community is primarily concerned with the determination of what constitutes a threat and how to identify it.

11.6.1  PLAN RECOGNITION

The problem of plan recognition can be viewed as a case of abductive inference or abductive explanation. This kind of explanation is concerned with the construction of theories or hypotheses to explain observable phenomena, thus requiring an abductive reasoning process. As Southwick (1991) observes, and this is the challenge of plan and/or goal recognition in general, “in order to arrive at a hypothesis, a person must first find some pre-existing model or schema, and try to interpret all data in terms of that model.” As mentioned before, this abductive leap from a small amount of data to a working hypothesis is risky because of the incompleteness or uncertainty of the data and/or the use of a model that may be defective or wrong.

Plan recognition is used by everyone in everyday life to be able to manage a conversation, to avoid bumping into people in the corridors, or to guess what people around us are up to. It is used in cooperative, neutral, and adversarial settings. Two types of plan recognition can be distinguished: one type in which Agent W (neutral or cooperative) helps Agent B in its plan recognition (this is intended recognition), and another type where Agent R attempts to thwart recognition of its plan by Agent B (this is adversarial plan recognition). From the observer’s viewpoint, the distinction is made between “keyhole” and “intended” plan recognition (Cohen et al. 1981). Keyhole means that the plan recognizer B is passively watching an Agent W execute its plans (W may not be aware of this observation) (e.g., story understanding). Intended means that the observed Agent W intends that the observing Agent B be able to infer its plan (e.g., tacit teamwork).

Plan recognition has long been established as one of the most fundamental and also challenging problems in human cognition. Through his psychological experiments, Schmidt provided evidence that humans do infer hypotheses about the plans and goals of other agents and use these hypotheses in subsequent reasoning (Schmidt 1976). Later, he positioned plan recognition as a central problem in the design of intelligent systems (Schmidt et al. 1978). Computational approaches to plan recognition have followed in various areas, such as story understanding (Bruce 1981) and natural language understanding (Allen 1983). The general problem can be described as the ability to infer, given fragmented description of the actions performed by one or more agents in a situation, a richer description relating the actions of the agents to their goals and future actions.

Automation of plan recognition means that the system contains a knowledge base, often called a “plan library,” of actions and recipes for accomplishing them (i.e., models of situations). These recipes include actions’ preconditions, subgoals, goals, and effects. To infer the agent’s goal from the observed actions, the plan inference system constructs a sequence of goals and actions that connects the observed actions to one of the possible domain goals. This is accomplished by chaining from actions to goals achieved by the actions, from these goals to other actions for which the goals are preconditions or subgoals, from these actions to their goals, etc. (Carberry 2001). Knowledge engineering and computational efficiency remain the main challenges of plan recognition automation.
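
A minimal sketch of such chaining is given below, assuming a hypothetical plan library in which each recipe simply lists the actions or subgoals that achieve a goal. The structure and the names are illustrative only; they are not prescribed by the cited works.

# Hypothetical plan library: goal -> list of actions/subgoals that achieve it.
PLAN_LIBRARY = {
    "attack_ship":     ["approach_target", "engage_target"],
    "reconnaissance":  ["approach_target", "observe_target", "egress"],
    "approach_target": ["turn_inbound", "increase_speed"],
    "engage_target":   ["acquire_lock", "launch_weapon"],
}

def goals_explaining(action, library):
    """Chain upward from an observed action to every top-level goal it may serve."""
    parents = [g for g, steps in library.items() if action in steps]
    explanations = set()
    for parent in parents:
        higher = goals_explaining(parent, library)
        explanations.update(higher if higher else {parent})
    return explanations

# An observed inbound turn is consistent with both attack and reconnaissance.
print(goals_explaining("turn_inbound", PLAN_LIBRARY))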

Plan recognition works differ both in how the problem is framed and in how the problem, once framed, is solved. The differences between frameworks concern

1.  Plan representation: Plans are commonly represented as straight-line classical plans or Hierarchical Task Network (HTN) plans (see the example of Figure 11.2), which can capture complex, phased behaviors. The latter permit some partial ordering of actions; yet there is little extant work on temporal goals in plan recognition, because there is little sense of how they would fit into plan libraries.

2.  The observer/observed relationship: A distinction was made earlier between keyhole, intended, and adversarial plan recognition. The particularity of the latter is that it can involve deception. In an adversarial setting, behaviors may segment into deceptive/nondeceptive, which significantly increases the complexity of plan recognition.

3.  Observability: Most plan recognition works assume full observation. This means that Agent B sees the full set of actions of Agent R or W. By contrast, partial observability amounts to performing incremental plan recognition from some prefix of what Agent B expects to be a full plan.

There are ways of simplifying the plan recognition problem by performing only goal recognition (i.e., determine the “what” and not the “how” of what the agent is doing) or agent classification (do not figure out precise objectives, but only a category of goals or agents, such as hostile, friendly, etc.).

Image

FIGURE 11.2 Example of HTN plan representation.
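
The hierarchical structure conveyed by Figure 11.2 can be captured with a small data type such as the one sketched below. The task names and the ordering flag are purely illustrative; they are not the notation of the figure or of any particular HTN planner.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """A node of an HTN plan: a task decomposed into ordered or unordered subtasks."""
    name: str
    subtasks: List["Task"] = field(default_factory=list)
    ordered: bool = True          # False allows the partial ordering mentioned above

    def leaves(self):
        """Primitive actions obtained by walking down the decomposition."""
        if not self.subtasks:
            return [self.name]
        return [a for t in self.subtasks for a in t.leaves()]

# Hypothetical phased behaviour: an attack decomposed into ingress, strike, egress.
attack = Task("attack_ship", [
    Task("ingress", [Task("turn_inbound"), Task("descend_low")]),
    Task("strike",  [Task("acquire_lock"), Task("launch_weapon")]),
    Task("egress",  [Task("turn_outbound")]),
])
print(attack.leaves())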

Activity modeling is another problem simplification approach, where instead of recognizing a particular plan, one recognizes an activity, i.e., a simple temporally extended behavior (“Agent R is playing tennis” is simpler to recognize than “Agent R is preparing for an overhead smash”).

11.6.2  PLAN RECOGNITION APPROACHES

Plan recognition approaches can be categorized into symbolic and probabilistic approaches. The latter treat uncertainty numerically, the former do not. Among probabilistic approaches, one can distinguish between temporal and nontemporal models.

11.6.2.1  Symbolic Approaches

One of the dilemmas of plan recognition is that of Occam’s razor, or minimization, i.e., the principle according to which, of all plans explaining given observations, the minimal one is the best explanation. Circumscription techniques, which keep a minimal true set, prevail here as they minimize the hypothesized plans. In Generalized Plan Recognition, Kautz and Allen (1986) define the problem as that of identifying a minimal set of top-level actions sufficient to explain the set of observed actions, representing it as plan graphs with top-level actions as root nodes expanded into unordered sets of child actions. Although efficient, this approach assumes that agents attempt one top-level goal at a time. Moreover, these techniques fail when likelihood matters. For example, in a medical diagnostic problem, HIV disease can be an explanation for virtually any symptom (the best minimal explanation), but this hypothesis is very unlikely compared to a combination of likely hypotheses such as a head cold and a sinus infection.

Another symbolic approach to plan recognition is parsing, which uses a grammar (showing the decomposition of actions) and a parser (an algorithm that “reads” a given plan using the grammar). Based on Kautz and Allen’s work, and taking advantage of the great amount of work in this area, Vilain (1991) investigates parsing as a way of exploring the computational complexity of plan recognition. The problem with this formalism is that the size of the grammar and the performance of the parser blow up in the presence of partially ordered grammars, where actions are represented as having a partial temporal order. Pynadath and Wellman (1995) first used probabilistic context-free grammars, which suffer from the same problem. To overcome that, they proposed a probabilistic context-sensitive grammar. While handling state dependencies, this approach does not address the partial-ordering issues or the case of interleaved plans.

11.6.2.2  Nontemporal Probabilistic Approaches

Probability theory has become dominant in plan recognition, as it offers a normative way of performing abductive reasoning. Probabilistic approaches include probabilistic decision trees, influence diagrams, and mainly Bayesian networks, which are directed graph models of probability distributions that explicitly represent conditional dependencies.

Image

FIGURE 11.3 Bayesian network.

In a Bayesian network, nodes represent random variables and arcs between nodes represent causal dependencies captured by conditional probability distributions. When used for plan recognition, the nodes are propositions, the root nodes represent hypotheses about the plan of Agent R, and the probability assigned to a node represents the likelihood of a proposition given some observed evidence. A link from a variable a to a variable b could be interpreted as a causing b. This way, in the plan library (which specifies the potential plans an agent may execute), subgoals would be connected to goals, preconditions to actions, and actions to effects.

Figure 11.3 illustrates a simple Bayesian network for goal recognition. The variables shown in black circles represent the goals that an entity in the environment might have (transiting, reconnaissance, or attacking). The variables in gray circles are subgoals. For example, to attack a target, an entity must approach it, detect it, and engage it. The goal decomposition conveys some kind of a hierarchical plan that describes how to achieve a given task (goal) by decomposing it into subtasks (subgoals). The variables in white circles represent observable facts about the entity being observed. There is a conditional probability distribution for each variable given its parents. Using Bayesian inference, one can calculate the probability that the entity is committed to some goals given the observations that have been made.
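
A minimal numerical sketch of this kind of inference is given below. To remain self-contained, it applies Bayes’ rule by hand to a single goal variable with two observable children, rather than using a Bayesian network library, and all priors and conditional probabilities are invented for illustration rather than taken from Figure 11.3.

# Hypothetical priors and conditional probabilities (not taken from Figure 11.3).
PRIOR = {"transit": 0.6, "reconnaissance": 0.3, "attack": 0.1}

# P(observation = True | goal) for two observable variables.
P_OBS = {
    "heading_inbound": {"transit": 0.2, "reconnaissance": 0.5, "attack": 0.9},
    "low_altitude":    {"transit": 0.1, "reconnaissance": 0.4, "attack": 0.8},
}

def posterior(observations):
    """P(goal | observations) by direct application of Bayes' rule.

    observations: dict of observable name -> bool, assumed conditionally
    independent given the goal (the naive structure of this sketch).
    """
    unnorm = {}
    for goal, prior in PRIOR.items():
        likelihood = 1.0
        for obs, value in observations.items():
            p = P_OBS[obs][goal]
            likelihood *= p if value else (1.0 - p)
        unnorm[goal] = prior * likelihood
    z = sum(unnorm.values())
    return {goal: p / z for goal, p in unnorm.items()}

# An inbound, low-flying entity makes the attack hypothesis the most probable one.
print(posterior({"heading_inbound": True, "low_altitude": True}))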

Bayesian inference supports the preference for minimal explanations in the case of equally likely hypotheses (minimum cost proofs in symbolic logical approaches are equivalent to maximum a posteriori estimation), but also correctly handles explanations of the same complexity but with different likelihoods. Bayesian networks provide computational efficiency (avoid joint probability tables) and assessment efficiency (reduce the number of causal links that have to be modeled). However, like most diagrammatic schemes, Bayesian networks have only propositional expressive power (quantification and generalization are not possible). Another pitfall is that they do not explicitly model time, which is needed when it comes to reasoning about behaviors. However, it is possible to approximate the flow of time with causality.

Charniak and Goldman (1991) were among the first to use Bayesian inference for plan recognition. Their Bayesian network represented a hierarchical plan expressed as a decomposition of goals into subgoals and actions. It used marker passing, a form of spreading activation in a network of nodes and links, to identify potential explanations for observed actions and to identify nodes for insertion into a Bayesian belief network.

Other works include Elsaesser and Stech (2007) where a Bayesian network is used to perform sensitivity analysis on the hypotheses generated by a planner. In Santos and Zhao (2007), a network represents the threat’s beliefs on goals and high-level actions for both itself and its opponent, and an action network represents the relationship between the threat’s goals and possible actions to realize them. Finally, Johansson and Falkman (2008) use a Bayesian network to calculate the probability of an asset being targeted by a threat.

Causality in Bayesian networks as exploited in these approaches is not enough to make inferences about complex behaviors. An explicit temporal model must be incorporated in order to make inferences on sequences of observations. Temporal dependencies between actions in plans must be modeled and related to goals. This is even truer for coordinated agents accomplishing arbitrary, temporally extended, complex goals.

11.6.2.3  Probabilistic Approaches with a Temporal Dimension

Temporal probabilistic models allow inferences over behaviors based on temporal sequences of observations. Different types of probabilistic queries can be made for threat analysis. A probabilistic explanation query would compute the posterior probability distribution of a given behavior (as a sequence of states) based on a sequence of observations. A probabilistic filtering query would compute the posterior probability distribution over the current goal or plan given the observations to date (e.g., the Kalman filter). This would require augmenting the state space with goals or plans that agents are pursuing. A probabilistic prediction query would compute the posterior probability over future goals given the observations to date.

Dynamic Bayesian networks (DBNs) are Bayesian networks where each “slice” represents a system state at a particular instant in time. Causal influences run from nodes in one time slice to the next (e.g., the state of Agent B at time t + 1 is a probabilistic function of its state at time t and whether Agent R attacked it at time t). On the one hand, DBNs have the virtue of explicitly modeling the probabilistic dependencies among the variables. On the other hand, inference in DBNs incurs a higher computational complexity.

In the example of the Bayesian network in Figure 11.3, if an entity is observed with a heading toward a ship, then it follows that it has a high probability of approaching its target, regardless of its previous headings. In other words, the inference does not take into account the history or the past behavior of the entity. After all, the track could very well be in the process of turning, and this particular heading may only be coincidental and temporary. It is possible to remedy this by augmenting the Bayesian network with mega-variables capturing the history of events. However, that would be very tedious and error prone. A better approach consists in using a DBN that naturally captures the flow of time. Figure 11.4 shows a DBN obtained from the Bayesian network in Figure 11.3 by adding the temporal extension. With this DBN, the calculation of the probability of the subgoal Approach takes into account the previous heading, the previous distance, and the previous probability of the Approach subgoal. Using this approach, even if the probability of the Approach subgoal was small at time k − 1, if the entity was heading toward the ship at both time k − 1 and time k, the probability of Approach will nonetheless be higher at time k than at time k − 1.

Image

FIGURE 11.4 Dynamic Bayesian network.
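
The effect described above can be sketched as a recursive filtering update over the Approach subgoal: at each time step, the new belief depends on the previous belief (through a transition model) and on the current observation. The transition and observation probabilities below are invented; the sketch only illustrates how successive inbound headings accumulate evidence, in contrast with the static network of Figure 11.3.

# Hypothetical parameters of a two-state ("approaching" or not) temporal model.
P_STAY    = 0.9    # P(approach_k = True | approach_{k-1} = True)
P_START   = 0.2    # P(approach_k = True | approach_{k-1} = False)
P_HEADING = {True: 0.8, False: 0.3}   # P(heading toward ship | approach_k)

def filter_step(belief, heading_toward_ship):
    """One slice of filtering: predict from the previous belief, then update."""
    predicted = belief * P_STAY + (1.0 - belief) * P_START
    if heading_toward_ship:
        num = predicted * P_HEADING[True]
        den = num + (1.0 - predicted) * P_HEADING[False]
    else:
        num = predicted * (1.0 - P_HEADING[True])
        den = num + (1.0 - predicted) * (1.0 - P_HEADING[False])
    return num / den

belief = 0.1   # small initial probability of the Approach subgoal
for k, heading in enumerate([True, True, True], start=1):
    belief = filter_step(belief, heading)
    print(f"k = {k}: P(Approach) = {belief:.2f}")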

An alternative to DBNs is hidden Markov models (HMMs). Actually, the latter are a particular form of DBNs in which the state of a process is described by a single discrete random variable. HMMs offer greater flexibility because one can specify the state transition and observation models using conditional probability tables. An HMM models the dynamics of only one variable and relates observations at time k only to the state of the variable at time k. These approaches offer many of the efficiency advantages of parsing approaches, with the additional advantages of incorporating likelihood information and of supporting machine learning to automatically acquire plan models. However, because of the weak expressiveness of the models (even weaker than that of grammars), state spaces can explode if complex plans are to be represented. Similarly, training can become very difficult.

Figure 11.5 illustrates an HMM for plan recognition. The hidden state is the current plan of an observed entity, where a plan is represented as a hierarchical decomposition of goals (or tasks) into subgoals (or subtasks), which is more or less reminiscent of the causal structure underlying the Bayesian network in Figure 11.3. That is, a state is a graph in which the root node is a goal (task), the leaves are actions (primitive tasks), and the inner nodes are subgoals (subtasks). The observations are assumed to be caused by the actions of the observed entity while it executes the plan.

Image

FIGURE 11.5 Hidden Markov model.

With HMMs, the dynamics related to the plan and plan execution can be modeled using plan libraries, as opposed to DBNs where they are explicitly modeled as variables. This keeps the inference mechanism related to the evolution of plans over time (conveyed by the plan libraries and their simulation) separated from the inference mechanism related to the generation and evaluation of competing hypotheses about the current plan (conveyed by the inferences within the HMMs).
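
A compact sketch of the hypothesis-evaluation side of such an approach is given below: the standard HMM forward recursion, written in plain Python. The two hidden “current plan” states, the transition matrix, and the observation probabilities are hypothetical and listed by hand; in the approaches discussed in this section, these quantities would instead be derived from plan libraries.

# Hypothetical hidden states: which plan the observed entity is currently executing.
STATES = ["attack_plan", "patrol_plan"]
INIT   = {"attack_plan": 0.2, "patrol_plan": 0.8}
TRANS  = {"attack_plan": {"attack_plan": 0.9, "patrol_plan": 0.1},
          "patrol_plan": {"attack_plan": 0.1, "patrol_plan": 0.9}}
# P(observed action | current plan), for a small set of observable actions.
EMIT   = {"attack_plan": {"turn_inbound": 0.5, "acquire_lock": 0.4, "loiter": 0.1},
          "patrol_plan": {"turn_inbound": 0.2, "acquire_lock": 0.05, "loiter": 0.75}}

def forward(observations):
    """P(current plan | observations so far), via the HMM forward algorithm."""
    belief = {s: INIT[s] * EMIT[s][observations[0]] for s in STATES}
    for obs in observations[1:]:
        belief = {s: EMIT[s][obs] * sum(belief[p] * TRANS[p][s] for p in STATES)
                  for s in STATES}
    z = sum(belief.values())
    return {s: b / z for s, b in belief.items()}

# A turn toward the ship followed by a sensor lock points to the attack plan.
print(forward(["turn_inbound", "acquire_lock"]))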

HMMs have been commonly used in “activity recognition,” specifically for recognizing behaviors of moving individuals for diverse purposes, such as eldercare (Liao et al. 2007), detection of terrorist activity (Avrahami-Zilberbrand and Kaminka 2007), and teamwork in sports and in the military (Sukthankar and Sycara 2006). An illustration of an HMM approach is the Probabilistic Hostile Agent Task Tracker (PHATT) introduced by Goldman et al. (1999) and later refined through successive improvements (Geib and Goldman 2003, 2005, Geib et al. 2008). A state of the HMM underlying PHATT is a set of concurrent plans the agent may be pursuing and the current points in the execution of these plans. From these current points, the next potential actions are derived (called pending sets) thereby constraining the model of observation (observations are mapped to effects of the pending actions to infer the probability distribution for the observed action). The states of the HMM are generated on the fly by simulating the execution of the plans in the current state based on the current observed action. By hypothesizing goals and plans for the agent, and then stepping forward through the observation trace, a possible sequence of pending sets is generated. When the end of the set of observations is reached, each observed action will have been assigned to a hypothesized plan that achieves one of the agent’s hypothesized goals and a sequence of pending sets that is consistent with the observed actions. This collection of plan structures and pending sets is a single complete explanation for the observations.

A similar approach is taken by Avrahami-Zilberbrand and Kaminka (2005), who also maintain a set of hypotheses, but instead of using a model of plan execution and pending sets, they check the consistency of observed actions against previous hypotheses. Although it solves some of the problems addressed by PHATT, the approach does not allow the recognition of tasks that depend on pending sets, including negative evidence (actions not observed) (Geib and Goldman 2009). Kaminka et al.’s (2002) keyhole recognition for teams of agents considers the question of how to handle missing observations of state changes. However, it differs from PHATT significantly in using a different model of plan execution and by assuming that each agent is only pursuing a single plan at a time. On the other hand, it devotes a great deal of effort to using knowledge of the team and its social structures and conventions to infer the overall team behavior.

Finally, some “hybrid” works have used the theoretical framework of Bayesian Networks and HMMs but exploited research on parsing while mitigating the problems posed by partial orders. Such works include ELEXIR (Geib 2009) and YAPPR (Geib and Goldman 2009).

11.6.2.4  Mental State Modeling

Another way of approaching the problem of plan recognition is through adversary modeling. In domains where the tasks are radically unconstrained and it is too hard to build a database of possible plans, Agent B can simply invert the planning problem by putting itself in Agent R’s shoes. For instance, Agent B can speculate on what Agent R may consider as critical to its mission. For Whitehair (1996), intent assessment is a model of the (adversarial) agent’s internal utility/probability/cost assessment, by which the utility of particular states, the probability of attaining such states given various actions, and the cost of such actions are estimated.
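
One way to read this utility/probability/cost model is as an expected-utility ranking of the adversary’s candidate actions, as in the hypothetical sketch below; the action names, numbers, and scoring rule are illustrative and are not drawn from Whitehair’s work.

# Hypothetical assessment of the adversary's options, from Agent B's point of view.
# Each candidate action of Agent R: utility of the resulting state to R,
# estimated probability of attaining that state, and cost of the action to R.
CANDIDATE_ACTIONS = {
    "strike_high_value_unit": {"utility": 10.0, "p_success": 0.3,  "cost": 6.0},
    "probe_air_defences":     {"utility": 3.0,  "p_success": 0.8,  "cost": 1.0},
    "withdraw":               {"utility": 1.0,  "p_success": 0.95, "cost": 0.5},
}

def rank_adversary_actions(actions):
    """Rank Agent R's options by expected utility minus cost (a simple proxy)."""
    scored = {name: a["utility"] * a["p_success"] - a["cost"]
              for name, a in actions.items()}
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

# Agent B treats the top-ranked options as the adversary's most plausible intents.
for action, score in rank_adversary_actions(CANDIDATE_ACTIONS):
    print(f"{action}: {score:.2f}")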

In some areas, knowledge of Agent R’s plans observed in the past may poorly predict its later plans. The military domain is one of these areas. The opposing forces’ actions are nevertheless constrained by their doctrine and rules of engagement (at least in the case of conventional forces), their capabilities (weapons and resources), the environment in which they are operating, etc. All of these factors constrain what the opposing forces can do in practice.

In Glinton et al. (2005), field model prediction based on a priori knowledge is accomplished by the interpretation of opposing forces’ disposition, movements, and actions within the context of their known doctrine and knowledge of the environment. Along the same reasoning line, TacAir-Soar, probably the most widely referenced expert system for tactical military operations (Jones et al. 1998), uses its knowledge of aircraft, weapons, and tactics to create a speculation space in which it pretends to be the opponent by simulating what it would do in the current situation. Such a methodology is generally known as mental state modeling, given that it literally consists in modeling the mental state of the opponent. TacAir-Soar is a symbolic rule-based system based on the Soar architecture for cognition (Laird et al. 1991). Its functionalities cover not just threat analysis, but also other command and control processes, including planning, and action execution (Jones 2010).

Game-theoretic methods must also be mentioned in this category, although they typically do not involve very complex iterative opponent modeling. Chen et al. (2007) discuss a mathematical framework for determining rational behavior for agents when they interact in multi-agent environments. The framework offers a potential for situation prediction that takes real uncertainties in enemy plans and deception possibilities into consideration. It can give an improved appreciation of the real uncertainty in the prediction of future development. However, prediction of the behavior of the other agents is based on an assumption of rationality.

11.6.3  ISSUES IN THREAT ANALYSIS

From a plan recognition perspective, threat analysis, or adversarial plan recognition, poses several challenges. Aside from the more general issues relative to the representation and interpretation of events and states, as discussed in Section 11.5, threat analysis can further complicate plan recognition frameworks because of the following:

1.  Model manipulation: Existing frameworks generally assume that the observed agent makes no use of deception.

2.  Plan revision: Plan revision on the part of the observed agent is a serious challenge to plan recognition. It can result from a change in the operational environment or from a change in the agent’s inner motives. A particular case of plan revision is plan abandonment, where an initial plan is dropped and a new one is developed. This can further multiply the hypotheses, and it is difficult to determine when, and on what basis, previously active hypotheses must be terminated (a simple heuristic is sketched after this list).

3.  Multiple interleaved plans: The case where the observed agent attends to several tasks concurrently is another challenging area, as it can cause an explosion in hypothesis generation.

4.  State representation: Systems typically observe actions rather than states of the world. Yet states, as much as actions, are indicative of a threatening situation (e.g., a platform being in own forces’ volume of interest).

5.  Models of opponent actions: It is very difficult to gather enough data to produce concise and robust representations of the opponent’s plans.

6.  Completeness of plan libraries: No set of opponent plans will account for all possible scenarios.
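As a very rough illustration of hypothesis termination under possible plan abandonment (item 2 above), one simple, assumed heuristic is to drop any hypothesis that has received no supporting observation for some time. This is not the plan-abandonment model of Geib and Goldman (2003); the threshold and data layout are arbitrary choices for the example.

```python
def prune_stale(hypotheses, now, max_silence=5):
    """Drop hypotheses with no supporting observation in the last `max_silence` steps.

    Each hypothesis is a dict with a 'plan' label and 'last_support', the time of
    the most recent observation that matched the hypothesis.
    """
    return [h for h in hypotheses if now - h["last_support"] <= max_silence]

hyps = [
    {"plan": "attack_run", "last_support": 12},
    {"plan": "transit",    "last_support": 4},
]
print(prune_stale(hyps, now=14))   # 'transit' is terminated as likely abandoned
```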

Plan recognition, however, remains a very promising paradigm for the problem of threat analysis as it subsumes many of the elements necessary for the determination of the existence of threat and its evolution in time.

11.7  THREAT ANALYSIS IN MILITARY OPERATIONS

In this section, threat analysis is discussed from the perspective of the Command and Control (C2) process in an operational environment. This process can be decomposed into a set of generally recognized, accepted functions that must be executed within reasonable timelines to ensure mission success: picture compilation, threat analysis, engageability assessment, and combat power management (also referred to as weapons assignment).

The process of all actions and activities aimed at maintaining tracks on all surface, air, and subsurface entities within a certain volume of interest is referred to as picture compilation. It includes several subprocesses, the most important being object localization (or tracking), object recognition, and identification. Threat analysis establishes the likelihood that certain entities within that volume of interest will cause harm to a defending force or its interests. The output of threat analysis, along with that of the engageability assessment process, which determines the defending force’s options against the threat, is used by the combat power management function to generate and optimize a response plan (Irandoust et al. 2010).

Threat analysis in an operational context such as the military setting is conducted based on a priori knowledge (e.g., intelligence, operational constraints and restraints, evaluation criteria, etc.), dynamically acquired and inferred information (e.g., kinematics and identification of entities in a given volume of interest, as well as various indicators), and data received from complementary sources in relation to the mission objectives. Threat indicators are derived from the entity characteristics, tactical data, background geopolitical situation, geography, intelligence reports, and other data.
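As a purely illustrative example of how such indicators might be combined, the sketch below derives a single score per contact from assumed intent, capability, and opportunity indicators in [0, 1] and a fixed set of weights; neither the weights nor the linear form reflects any fielded doctrine.

```python
weights = {"intent": 0.40, "capability": 0.35, "opportunity": 0.25}  # assumed weights

def threat_score(indicators):
    """Weighted combination of indicator values in [0, 1]; missing indicators count as 0."""
    return sum(weights[k] * indicators.get(k, 0.0) for k in weights)

contacts = {
    "track_017": {"intent": 0.8, "capability": 0.9, "opportunity": 0.6},
    "track_042": {"intent": 0.2, "capability": 0.9, "opportunity": 0.3},
}
ranked = sorted(contacts, key=lambda t: threat_score(contacts[t]), reverse=True)
print([(t, round(threat_score(contacts[t]), 2)) for t in ranked])
```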

11.7.1  TASK COMPLEXITY

Threat analysis is a highly demanding cognitive task for human analysts mainly because of the (typically) huge amount of data to be analyzed, the level of uncertainty characterizing these data, and the short time available for the task (Irandoust 2010).

The staff in charge of threat analysis must often process an important amount of data, of which only a small fraction is relevant to the current situation. The data come in multiple forms and from multiple sources. Analysts have to make difficult inferences from this large amount of noisy, uncertain, and incomplete data.

In a series of studies conducted by Liebhaber and his colleagues (Liebhaber and Feher 2002), it is shown that due to the multi-tasking, tempo, integration demands, and short-term memory requirements, threat analysis is cognitively challenging, even under normal conditions. It requires the mental integration and fusion of data from many sources. This integration/fusion requires a high level of expertise, including knowledge of the types of threats, the own force’s mission, own and adversary doctrines, and assessment heuristics built from experience. The cognitive overload in a time-constrained environment puts the operators under a great amount of stress.

11.7.2  CONTEXTUAL FACTORS

Threat analysis, like any other task, cannot be decoupled from the context in which it occurs. The context of operations greatly impacts the effective conduct of threat analysis through a set of fundamental factors that characterize any (tactical) military operation: the nature of the threat, the operational environment, uncertainty, and time. Moreover, these factors are inter-related and impact each other in many ways. For instance, the operational environment highly influences the nature of the threat, while both the former and the latter impact uncertainty and time.

11.7.2.1  Uncertainty

Uncertainty in the representation of the situation is mainly due to sensor limitations, the limited reliability of intelligence information, and the limited accuracy of inferences (by humans or systems) used to derive knowledge from this data. The individuals performing threat analysis have to deal with the unpredictability of (adversary) human behavior and the imperfection of the information sources on which they rely to observe the environment (including the adversary).

11.7.2.2  Time

Time is another key factor in threat analysis for three main reasons. Firstly, the information gathered and compiled during the picture compilation process, as well as the knowledge derived by the threat analysis process, remain valid for only a finite period of time. Secondly, time is a resource, both for own forces and the adversary, which is consumed as information is being gathered and processed. Thirdly, in an adversarial context, the high tempo of operations often limits the time available to understand the impact of the events on the situation at hand and to react to them. The high tempo imposes a requirement on responsiveness, i.e., critical agents (potentially red or harmful) must be assessed as early as possible so as to provide more reaction time to human decision makers. The responsiveness requirement involves reducing the decision process timeline while maintaining or increasing response quality.
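The finite validity of compiled information can be pictured with a simple, assumed discounting scheme in which the confidence attached to a track decays with the age of its last update; the exponential form and the half-life below are illustrative choices, not a prescribed model.

```python
import math

def discounted_confidence(confidence, age_s, half_life_s=30.0):
    """Halve the confidence attached to a piece of information every `half_life_s` seconds."""
    return confidence * math.exp(-math.log(2) * age_s / half_life_s)

print(discounted_confidence(0.9, age_s=0.0))    # 0.9: freshly updated
print(discounted_confidence(0.9, age_s=60.0))   # ~0.225 after two half-lives
```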

Furthermore, time is also consumed by coordination requirements, including the requirement to liaise with a higher-echelon staff that may or may not possess the same appreciation of the situation, which is being driven by the dynamic actions of own force and the opposing force.

11.7.2.3  Nature of the Threat

Threats can be categorized along several dimensions such as predictability of the behavior, susceptibility to coercion, and symmetry. They can also be distinguished using the single/multiple dichotomy. The problem of multiple coordinated threats is addressed in Section 11.8.4.

11.7.2.3.1  Predictability of the Behavior

One possible classification is based on the predictability of the behavior. Deterministic threats are those whose behavior, once detected, can be determined without uncertainty as to their intent, capability, future course of action, or trajectory (e.g., projectiles). Adaptive threats have the capability to adapt their behavior, making their evolution difficult to predict. In simple cases, this consists in the threat altering its trajectory, such as a cruise missile that adapts to the landscape features or follows waypoints. In more complex cases, threats can adopt various elaborate tactics. This is particularly obvious with manned or man-controlled threats, such as aircraft or seacraft, but is not exclusive to them. Unmanned vehicles (aerial, surface, or submarine) equipped with advanced technology can also exhibit sophisticated adaptive behaviors, involving a dynamic generation of goals and plans in reaction to changes in the environment. The capability of a threat to adapt its behavior is an important factor in the assessment of the threat’s opportunity, and even more so of its intent, and it increases the difficulty of both assessments.

11.7.2.3.2  Susceptibility to Coercion

Coercible threats, as opposed to unyielding threats, are threats that are equipped to potentially respond to deterrence. These are, in principle, manned or man-controlled threats that can respond to warnings, requests, and other deterrence actions, i.e., they have the capability to communicate, to reason, and to act. The capability of a threat to respond to deterrence is a factor in threat analysis, as it provides options for assessing the intent of this threat through the observation of its reactions to own force actions.

11.7.2.3.3  Symmetry

Symmetric versus asymmetric threats is another categorization that is very relevant to today’s reality of conflicts. By asymmetric threats, we mean threats that adopt unconventional warfare strategies and tactics (e.g., conducting attacks using recognized civilian vehicles, such as boats, light aircraft, or cars). A wide disparity in military power between the parties leads the opponents to adopt strategies and tactics of unconventional warfare, the weaker combatants attempting to offset their deficiencies in quantity or quality (Stepanova 2008).

The potential presence of asymmetric threats imposes a nonuniform environment that cannot be pictured as a confrontation between friendly (blue) agents and enemy/undesirable (red) agents. Threat analysis, particularly intent and capability assessments, becomes even more challenging, as it is extremely difficult to anticipate the moves of an opponent who is no longer a crisp, well-defined entity and is determined to use unconventional means. Another challenge is that the sparse and ambiguous indicators of potential or actualized threat activity are buried in massive amounts of background data.

11.7.2.4  Operational Environment

A good example of the effects of changes in the environment on C2 operations and threat analysis in particular is the recent shift of emphasis toward congested environments such as urban and littoral areas. Contrary to the traditional maneuver space, urban and littoral areas are characterized by significant congestion due to the existence of nonmilitary activity. This activity complicates the process of picture compilation, and thereby necessitates increased efforts on the part of the analysts to generate and maintain a complete and clean operating picture.

In addition to the high number of background objects, modern warfare spaces impose nonuniform environments where blue and red, as well as neutral (white), agents are interspersed and overlapping, presenting a highly complex challenge with respect to discerning one type of agent from another.

The shift from open battlespaces to congested areas also increases the exposure of the forces to an adversary provided with the terrain advantage. Such environments are also very conducive to attacks from asymmetric threats. For example, modern navies face asymmetric threats such as suicide attacks from explosive-laden small boats, small and medium caliber weapons on small boats (individually or in swarms of many boats), low and slow flyers (civilian aircraft), and a wide range of underwater mines or improvised explosive devices (IEDs). While the ships are alongside, the threat may even be initiated by a dockside terrorist or a small boat. Increased traffic within the littoral environment can make discerning these threats from other traffic exceptionally complicated. These types of threats can also be more difficult to detect with sensors due to their reduced signatures.

11.8  THREAT ANALYSIS IN DISTRIBUTED ENVIRONMENTS

In distributed environments, entities, both red and blue agents, are physically dispersed over a wide geographic area. Own and friendly units operate conjointly to achieve mission objectives as a task force or task group. This configuration involves distributed teams on air, surface, and subsurface units cooperatively interacting to perform C2 activities. It also means that a global task must be decomposed into subcomponents, and that communication channels and coordination mechanisms must be established so that these subcomponents can work together effectively, synergistically, and harmoniously. In such a configuration, information is shared and the threat analysis task is conducted collaboratively, in a distributed manner. The capability of the system as a whole becomes much greater than the sum of its subcomponents.

The geographical dispersal in a force operation offsets the vulnerability of individual units and improves the overall survivability of the force; however, distribution introduces additional C2 challenges. In the following, the problems of coordination inherent to distributed forces, as well as the costs and advantages of distribution, are discussed. In a threat analysis context in particular, the complexity increases from both the blue and the red perspectives with the multiplicity of agents.

11.8.1  CENTRALIZED AND DECENTRALIZED CONTROL

In force operations, C2 may be centralized or decentralized. This refers to the level of involvement, control, and responsibility exercised by the higher echelons and the subordinates during the conduct of operations. In decentralized C2, it is conceivable that the output of the C2 functions (picture compilation, threat analysis, engageability assessment, and combat power management) emerges from distributed cooperative interactions among the units rather than being directly consolidated by a central decision maker. This means that those units must develop shared situation awareness and coordinate their actions.

The concept of a decentralized C2 approach involves transferring the coordination function from the small hub of key decision makers to a larger group. In this situation, distributed units independently decide upon required actions based upon a shared tactical picture and common doctrine which sets the boundaries on approved behavior and in turn provides a coordination mechanism. A key to the decentralized approach is achieving shared situation awareness through the creation of a conflict-free force-level picture on all units and the development of a coherent understanding across the force by sharing information.

As part of the C2 process in a distributed environment, threat analysis can be carried out in a centralized or a decentralized manner. Centralized threat analysis implies that threats to the task force are identified, assessed, and prioritized by a central authority that uses the threat lists of individual units to derive a force-level threat evaluation. This includes the consolidation of the intent, capability, and opportunity assessments of each threat to the force, and the prioritization of all threats in terms of their relative threat ranking in order to generate a consistent force-level threat list.

Consistency in decentralized threat analysis is achieved by consolidating all the unit-level evaluations through a series of collaboration, information-sharing, and communication mechanisms. Information is passed between units until they converge to a conflict-free evaluation for the entire force. This is in stark contrast to the centralized approach, whereby the force-level threat list is constructed by the central decision makers based on information from the individual units.
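A minimal sketch of the centralized consolidation step might look as follows: the central authority merges the unit-level scores reported for each contact and produces a single force-level ranking. The merging rule (keeping the most pessimistic unit-level score) and all values are assumptions made for illustration, not a documented procedure.

```python
from collections import defaultdict

unit_lists = {   # unit -> {contact: unit-level threat score}
    "ship_a": {"track_017": 0.78, "track_042": 0.40},
    "ship_b": {"track_017": 0.65, "track_099": 0.55},
}

def force_level_ranking(unit_lists):
    """Merge per-unit scores and rank contacts for the whole force."""
    merged = defaultdict(list)
    for scores in unit_lists.values():
        for track, score in scores.items():
            merged[track].append(score)
    # Taking the maximum keeps the most pessimistic unit-level assessment.
    consolidated = {t: max(v) for t, v in merged.items()}
    return sorted(consolidated.items(), key=lambda kv: kv[1], reverse=True)

print(force_level_ranking(unit_lists))
```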

11.8.2  ADVANTAGES OF DISTRIBUTION

Distributed threat analysis inherently offers the following advantages of distributed systems:

•  Functional separation: Distributed threat analysis spatially distributes entities that perform different tasks based on their capability and purpose. This functional specialization simplifies the design of the system, as the latter is split into entities, each of which implements part of the global functionality and communicates with the other entities.

•  Information superiority: The main advantage of a distributed system is its ability to allow the sharing of information and resources. Information and knowledge provided by other sources and their fusion into a common picture enhances the quality of the assessment and supports informed decision making.

•  Enhanced real-time response: Increased responsiveness is one of the major requirements of threat analysis. This can be achieved through distribution by deploying observers and processors close to the threat. In a networked environment, this has the potential of improving the flow of real-time information directly to decision makers, providing means of assessing rapidly changing situations and making informed decisions.

•  Robustness and resilience: Distributed threat analysis has a partial-failure property since even if some blue agents fail, others can still achieve the task (at least partly). Such failure would only degrade, not disable, the whole evaluation outcome. If the blue multi-agent system has self-organization capabilities, it can also dynamically re-organize the way in which the individual agents are deployed. This feature makes the system highly tolerant to the failure and bias of individual agents.

11.8.3  OPERATIONAL CHALLENGES

The aforementioned advantages require that the components of the system performing threat analysis (including software and hardware agents) be able to exchange information clearly and in a timely manner. Failure to meet the following requirements can impede effective communication in a distributed context:

•  Interoperability: This is the ability of two or more agents, systems or components to exchange information and to use the information that has been exchanged. Distributed threat analysis can encompass different autonomous, heterogeneous, distributed computational entities that must be able to communicate and cooperate among themselves despite differences in language, context, format, or content.

•  Connectivity: Establishment of communications can be troublesome by itself. Provision of remote connectivity between the nodes in distributed threat analysis is a major technical challenge that cannot be overstated. Maintaining a communication channel is not guaranteed and, when it is, its quality can be degraded by multiple environmental factors. Communications can also be hampered when units restrict their use of certain frequencies in order to remain covert and thereby minimize detection, localization, and recognition by the opposing forces through electromagnetic emissions (Athans 1987). Kopp (2009) listed security of transmission, robustness of transmission, transmission capacity, message and signal routing, and signal format and communications protocol compatibility as the main challenges of communication media in the military domain, although most of them also apply to nonmilitary domains.

•  Security: Threat analysis represents a specific domain of interest that highly correlates with information system security. Although the use of multiple distributed sources of information can improve situational awareness, it can make the system more vulnerable to unauthorized access, use, disclosure, disruption, modification, or destruction.

Additional communication problems may arise in combined operations (Irandoust and Benaskeur [in press]), where the force units belong to different allied nations. Communication processes, technologies, codes, and procedures may be very different from one contingent to another. Moreover, the participating units may be reluctant to share sensitive information.

11.8.4  ANALYTICAL CHALLENGES

Threat analysis poses several challenges with respect to the analysis of the situation by blue agents. These may be related to the multiplicity of threats to analyze, or to the change of perspective required by collaborative threat analysis (multiplicity of own force units). The following are some examples:

•  Multiplication of reference points: When operating as a force, one should not only consider the own unit/platform as a potential target of the threat, but also the other units/platforms that are part of the force. Impact assessment must therefore be performed with regard to several reference points. This situation analysis issue entails a response planning problem, which is at the heart of the self-defense versus force-defense dilemma. It is quite conceivable that the highest priority threat from the unit’s perspective does not equate to the highest priority threat for the force. As such, conflicts may arise with respect to applying defensive measures in response to the threat. This issue is illustrated by the sketch following this list.

•  Recognition of coordinated plans: In a distributed environment, a blue or defending force may have to deal with single or coordinated threats. Obviously, a group of threats acting in coordination is harder to comprehend in terms of its tactical capability. Moreover, it is not sufficient in this case to determine the intent, capability, and possibly the plan of the adversary. One must also comprehend the spatial configuration of the different units (i.e., which unit is operating in which zone) and the temporal order of the actions carried out by each unit. One has to integrate the actions of different agents into a global plan.

•  Team recognition: The identification of the structure (members, possibly subteams) and roles in the adversarial team is a difficult issue. A functional analysis of the different entities needs to be conducted to establish their respective roles in a coordinated action.

•  Spatial reasoning: One must reason about the operation of blue and red forces within a larger spatial environment.

•  Collaborative plan recognition: In this configuration, different pieces of information are created and maintained by different agents. This information could be stored, routed through the network to be fused, analyzed, and used by other agents, which may or may not be aware of the existence of the agent(s) generating the information. Analysts must make sense out of this large amount of raw data that has been taken out of its context of observation.
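The first of these challenges, the multiplication of reference points, can be illustrated with a toy priority function in which the same contact ranks very differently depending on which defended unit is taken as the reference point; the geometry, the units, and the priority formula are invented for the example.

```python
def distance(track, point):
    return ((track["x"] - point["x"]) ** 2 + (track["y"] - point["y"]) ** 2) ** 0.5

def priority(track, reference_point):
    """Toy priority: closer and faster-closing contacts rank higher."""
    return track["closing_speed"] / max(distance(track, reference_point), 1.0)

track     = {"x": 10.0, "y": 0.0, "closing_speed": 250.0}
own_ship  = {"x": 0.0,  "y": 0.0}
flag_ship = {"x": 60.0, "y": 0.0}

print(priority(track, own_ship))    # high priority from the own ship's perspective
print(priority(track, flag_ship))   # much lower priority from the flagship's perspective
# A force-level evaluation must reconcile such conflicting unit-level views.
```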

11.8.5  COLLABORATION CHALLENGES

Remote collaboration is another challenging area for force-level threat analysis. In force operations, data and message sharing across several units is completed via networks. In turn, this information exchange is used to establish a common understanding of the task at hand. Yet, the inherent richness that accompanies face-to-face collaboration is not supported. As such, it is harder to conduct contentious discussions effectively. A coalition context will further introduce miscommunications that can affect force-level threat analysis to different degrees (Irandoust and Benaskeur [in press]).

Furthermore, remote collaboration entails an additional coordination overhead. Within a dispersed force, there is a need for both inter-unit and intra-unit coordination. The task also becomes more complicated since there are numerous system interactions that may depend on the current disposition of the forces. Concurrency, whereby multiple units may be simultaneously performing similar and complementary activities, can result in conflicting conclusions. Moreover, delays in communication caused by limited bandwidth, interference, and breakdowns can hamper force-level threat analysis.

Overall, the dependence on electronic communications, geographical distances, multiplication of parameters, and the impossibility of having direct face-to-face interactions on a regular basis are all obstacles to cohesion and effective collaboration in force threat analysis.

11.8.6  THREAT ANALYSIS AND NETWORK-CENTRIC OPERATIONS

Teamwork and collaborative decision making are critical elements of the military’s vision of network-centric operations (Alberts et al. 1999). The main principle underlying the concept of a networked force is to give individuals and/or groups the ability to leverage information both locally and globally to reach effective decisions quickly. Access to different perspectives and the widespread and timely collection and distribution of information around the battlefield will, it is anticipated, allow the more accurate and timely application of military force necessary to react to the ongoing situation. Advances in network technologies are augmenting the connectivity of military units, while automated sensors and intelligence feeds provide increased access to previously unavailable information. However, the electronic linkage of multiple units does not necessarily bring about automatic improvement in situation understanding, including threat analysis, collaboration, and the synchronization of defensive actions. While technology can offer C2 organizations a great information-processing capability, the need to consider and reconcile the variety and complexity of interpretations of information outputs generated by humans and computer systems remains. It is indeed incorrect to assume that fusing information into a common operating picture will automatically result in a uniform interpretation of the information by the various users.

This is why the promoters of the network-centric approach put great emphasis on the social dimension of distributed operations. According to the conceptual framework of network-centric operations (Garstka and Alberts 2004), raw information must be transformed into actionable knowledge through collaborative sensemaking among the stakeholders. However, common understanding of a given situation requires that all participants use a common frame of reference, i.e., use the same models, physical or mental, for interpreting the situation elements and “creating mutually intelligible representations” (Shum and Selvin 2000), which is the essence of collaborative sensemaking. Yet,

there are not only gaps in the languages, frames of reference, and belief systems that people in the different communities of practice have, but gaps between their respective sensemaking efforts—their concepts in the representational situation are different. In many cases, different communities have mutually unintelligible sense-making efforts, leading to mutually unintelligible representational effort (Shum and Selvin 2000).

Furthermore, it has been observed that a likely cause of failure for overall mission success is that the abilities of humans to access, filter, and understand information, to share it between groups, and to concur on their assessment of the situation are clearly limited, especially under stress and time-pressure (Scott et al. 2006).

Finally, Kolenda (2003) argues that shared situational awareness does not inevitably lead to “shared appreciation on how to act on the information” as different people, based on their experience, education, culture, and personalities will assess threat/risk and how to best “maximize the effectiveness of themselves and their organizations” differently. Simply providing people with access to the same information does not necessarily create a common understanding. The issue of how “common intent” can actually be promoted among network players, often from diverse backgrounds and cultures (both national and organizational) represents a major challenge for future operations.

11.9  DISCUSSION

In the preceding, the problem of threat analysis was addressed from different angles and at different levels of complexity. The use of primary concepts and defining features such as negativity, intentionality, potential, imminence, and relativity to a reference point allowed us to provide a framework in which the concept of threat is elucidated and distinguished from other goal conflict situations.

Threat analysis is a very challenging cognitive task that can involve different layers of reasoning when time allows. Interference management, goal recognition, and plan recognition were discussed extensively, showing the complexity of the inferences that have to be made by an observing agent performing threat analysis. This provided a theoretical basis for investigating the automation of threat analysis.

By illustrating the problem in a military context, it was shown that threat analysis can be further complicated through contextual factors that characterize the warfare environment. These problems, described from a single unit perspective, remain valid at the force level, where new challenges are introduced. Collaborative threat analysis, while providing information superiority, was shown to impact situation analysis by multiplying the operational parameters and creating coordination overhead. Finally, distributed multi-threat scenarios were shown to significantly complicate the determination of intent, capability, opportunity, and the higher-level plan of adversary elements.

REFERENCES

Alberts, D.S., J.J. Garstka, and F.P. Stein. 1999. Network Centric Warfare: Developing and Leveraging Information Superiority. CCRP Publication Series, Department of Defense C4ISR Cooperative Research Program (CCRP), Washington, DC, www.dodccrp.org

Allen, J.F. 1983. Recognizing intentions from natural language utterances. In Computational Models of Discourse, M. Brady and R. Berwick eds. MIT Press, Cambridge, MA.

Athans, M. 1987. Command and control (C2) theory: A challenge to control science, IEEE Transactions on Automatic Control, AC-32(4), 286–293.

Avrahami-Zilberbrand, D. and G.A. Kaminka. 2005. Fast and complete symbolic plan recognition. Proceedings of IJCAI 2005, Edinburgh, U.K.

Avrahami-Zilberbrand, D. and G.A. Kaminka. 2007. Incorporating observer biases in keyhole plan recognition (efficiently!). Proceedings of AAAI 2007, Vancouver, British Columbia, Canada, pp. 944–949.

Benaskeur, A.R., A.M. Khamis, and H. Irandoust. 2010. Cooperation in distributed surveillance. Proceedings of the First International Conference on Autonomous and Intelligent Systems (AIS 2010), Povoa de Varzim, Portugal, pp. 1–6, IEEE.

Bratman, M.E. 1987. Intention, Plans, and Practical Reason. Harvard University Press, Cambridge, MA.

Bruce, B. 1981. Plan and social action. In Theoretical Issues in Reading Comprehension, R. Spiro, B.C. Bruce, and W.F. Brewer (eds.). Lawrence Erlbaum, Hillsdale, NJ.

Carberry, S. 2001. Techniques for plan recognition. User Modelling and User-Adapted Interaction, 11(1–2), 31–48.

Castelfranchi, C. 1998. Modelling social action for AI agents. Artificial Intelligence, 103, 157–182.

Charniak, E. and R. Goldman. 1991. A probabilistic model of plan recognition. Proceedings of AAAI’91, Anaheim, CA.

Chen, G., D. Shen, C. Kwan, J. Cruz, M. Kruger, and E. Blasch. 2007. Game theoretic approach to threat prediction and situation awareness. Journal of Advances in Information Fusion (JAIF), 2(1), 35–48.

Cohen, P.R., C.R. Perrault, and J.F. Allen. 1981. Beyond question answering. In Strategies for Natural Language Processing, W. Lehnert and M. Ringle (eds.), Bolt, Beranek and Newman, Inc., Cambridge, MA, pp. 245–274.

Elsaesser, C. and F.J. Stech. 2007. Detecting deception. In Adversarial Reasoning: Computational Approaches to Reading the Opponent’s Mind, A. Kott and W.M. McEneaney (eds.). Chapman & Hall/CRC, Boca Raton, FL, pp. 111–124.

Garstka, J.J. and D.S. Alberts. 2004. Network centric operations conceptual framework—Version 2.0. Report prepared for the Office of the Secretary of Defense, Office of Force Transformation. Evidence Based Research, Vienna, VA.

Geib, C. 2009. Delaying commitment in plan recognition using combinatory categorial grammars. Proceedings of IJCAI 2009, Pasadena, CA, pp. 1702–1707.

Geib, C. and R. Goldman. 2003. Recognizing plan/goal abandonment. Proceedings of IJCAI 2003, Acapulco, Mexico, pp. 1515–1517.

Geib, C. and R. Goldman. 2005. Partial observability and probabilistic plan/goal recognition. Proceedings of IJCAI 2005, Workshop on Modeling Others from Observations (MOO), Edinburgh U.K.

Geib, C. and R. Goldman. 2009. A probabilistic plan recognition algorithm based on plan tree grammars. Artificial Intelligence, 173(11), 1101–1132.

Geib, C., J. Maraist, and R. Goldman. 2008. A new probabilistic plan recognition algorithm based on string rewriting. Proceedings of ICAPS 2008, Sydney, New South Wales, Australia, pp. 81–89.

Glinton, R., S. Owens, J. Giampapa, K. Sycara, M. Lewis, and C. Grindle. 2005. Intent inference using a potential field model of environmental influences. Proceedings of Fusion 2005, Philadelphia, PA.

Goldman, R., C. Geib, and C. Miller. 1999. A new model of plan recognition. Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, Stockholm, Sweden.

Irandoust, H. and A. Benaskeur. (in press). Political, Social and Command & Control Challenges in Coalitions—A Handbook. Canadian Defence Academy Press, Kingston, Ontario, Canada.

Irandoust, H., A. Benaskeur, F. Kabanza, and P. Bellefeuille. 2010. A mixed-initiative advisory system for threat evaluation. Proceedings of ICCRTS XV, June 2010, Santa Monica, CA.

Johansson, R. and G. Falkman. 2008. A Bayesian network approach to threat evaluation with application to an air defense scenario. Proceedings of Fusion 2008, Cologne, Germany, pp. 1–7.

Jones, R. 2010. TacAir-Soar: Intelligent, autonomous agents for the tactical air control domain. www.soartech.com (accessed February 15, 2010).

Jones, M.R., O.M. Jones, J.E. Laird et al. 1998. Automated intelligent pilots for combat flight simulation. AI Magazine, 20, 27–41.

Kaminka, G., D. Pynadath, and M. Tambe. 2002. Monitoring teams by overhearing: A multi-agent plan-recognition approach. Journal of Artificial Intelligence Research, 17(1), 83–135.

Kautz, H.A. and J.F. Allen. 1986. Generalized plan recognition. Proceedings of AAAI 1986, Philadelphia, PA, pp. 32–37.

Kolenda, C.D. 2003. Transforming how we fight—A conceptual approach. Naval War College Review, LVI(2), 100–121.

Kopp, C. 2008. NCW101: An introduction to network centric warfare, AirPower Australia, Melbourne, Vic Australia.

Laird, J.E., A. Newell, and P.S. Rosenbloom. 1991. Soar: An architecture for general intelligence. Artificial Intelligence, 47, 289–325.

Liao, L., D. Patterson, D. Fox, and H. Kautz. 2007. Learning and inferring transportation routines. Artificial Intelligence, 171(5–6), 311–331.

Liebhaber, M. and B. Feher. 2002. Air threat assessment: Research, model, and display guidelines. Proceedings of ICCRTS, Quebec City, Quebec, Canada.

Little, E.G. and G.L. Rogova. 2006. An ontological analysis of threat and vulnerability. Proceedings of Fusion 2006, Florence, Italy, pp. 1–8.

Paradis, S., A. Benaskeur, M. Oxenham, and P. Cutler. 2005. Threat evaluation and weapons allocation in network-centric warfare. Proceedings of Fusion 2005, Philadelphia, PA.

Pynadath, D.V. and M.P. Wellman. 1995. Accounting for context in plan recognition, with application to traffic monitoring. Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, Montreal, Quebec, Canada, pp. 472–481.

Roy, J. 2009. A view on threat analysis concepts. Technical report DRDC Valcartier SL-2009-384, July 6, Defence R&D Canada – Valcartier, Quebec, Canada.

Roy, J., S. Paradis, and M. Allouche. 2002. Threat evaluation for impact assessment in situation analysis systems. SPIE Proceedings: Vol. 4729, Signal Processing, Sensor Fusion, and Target Recognition XI, Orlando, FL.

Santos, E. and Q. Zhao. 2007. Adversarial models for opponent intent inferencing. In Adversarial Reasoning: Computational Approaches to Reading the Opponent’s Mind, A. Kott and W. McEneaney (eds.). Chapman & Hall/CRC, Boca Raton, FL, pp. 1–22.

Schmidt, C.F. 1976. Understanding human action: Recognizing the plans and motives of other persons. In Cognition and Social Behavior, J. Carroll and J. Payne (eds.). Erlbaum Press, Hillsdale, NJ.

Schmidt, C.F., N.S. Sridharan, and J.L. Goodson. 1978. The plan recognition problem: An intersection of psychology and artificial intelligence. Artificial Intelligence, 11, 45–83.

Scott, S.D., M.L. Cummings, D.A. Graeber, W.T. Nelson, and R.S. Bolia. 2006. Collaboration technology in military team operations: Lessons learned from the corporate domain. Proceedings of CCRTS, June 2006, San Diego, CA.

Shum, A.B. and A.M. Selvin. 2000. Structuring discourse for collective interpretation. In Distributed Collective Practices: Conference on Collective Cognition and Memory Practices, September 2000, Paris, France.

Southwick, R.W. 1991. Explaining reasoning: An overview of explanation in knowledge-based systems. The Knowledge Engineering Review, 6(1), 1–19.

Steinberg, A.N. 2005. An approach to threat assessment. Proceedings of Fusion 2005, Philadelphia, PA, 95–108.

Steinberg, A.N. 2006. Stimulative intelligence. Proceedings of the MSS National Symposium on Sensor and Data Fusion, McLean, VA.

Steinberg, A.N. 2007. Predictive modeling of interacting agents. Proceedings of Fusion 2007, Quebec City, Quebec, Canada.

Steinberg, A.N., C.L. Bowman, and F.E. White. 1998. Revision to the JDL data fusion model. Joint NATO/IRIS Conference, October 1998, Quebec City, Quebec, Canada.

Stepanova, E. 2008. Terrorism in asymmetrical conflict: Ideological and structural aspects. (Technical Report SPRI Research Reports 23) Stockholm International Peace Research Institute (SIPRI), Solna, Sweden.

Sukthankar, G. and K. Sycara. 2006. Robust recognition of physical team behaviors using spatio-temporal models. Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, Hakodate, Japan, pp. 638–645.

Vilain, M. 1991. Deduction as parsing. Proceedings of AAAI, Anaheim, CA, pp. 464–470.

White, F.E. 1988. A model for data fusion. Proceedings of the First National Symposium on Sensor Fusion, Vol. 2. GACIAC, IIT Research Institute, Chicago, IL, pp. 143–158.

Whitehair, R.C. February 1996. A framework for the analysis of sophisticated control. PhD dissertation, University of Massachusetts, Boston, MA, CMPSCI Technical Report 95.
