Subject Index

A
Abduction
  cost-based approaches, 34
  grounding-based approaches, 34
  probabilistic approaches, 34
Abductive model construction, 70
Abductive Stochastic Logic Programs (ASLPs), 80
Abstract Hidden Markov Model (AHMM), 14
Activity recognition, 123, 151
  and intent recognition, 349
  pervasive sensor-based, 152
Ad hoc agents/teammates
  best-suited model, determining, 264
  Capture-the-Flag domain, 254
  incremental value model, 261
  limited role-mapping model, 260
  measure of marginal utility, 252
  multiagent plan recognition, 254
  unlimited role-mapping model, 259
Advanced Scout system, 316
Adversarial plan recognition, 87
  air-combat environment, 112
  anomalous behavior, detecting, 99
  AVNET consortium data, 108
  catching a dangerous driver, 112
  efficient symbolic plan recognition, 95
  efficient utility-based plan recognition, 96
  suspicious behavior, detecting, 110
Air-combat environment, 112
  in smart environments, 124
Anomalous behavior, detecting, 99
  AVNET consortium data, 108
Anomalous plan recognition, 88
  keyhole adversarial plan recognition for, 87
Anytime Cognition (ANTICO) architecture, 276
Appearance-based object recognition, 346
Artificial corpus generation,
  domain modeling,
  goal generation,
  planner modification,
  start state generation,
Augmented forward probabilities, 16
Automatic subgroup detection, 323
AVNET consortium data
  percentage of false positives in, 108
  standing for long time on, 109

B
Basic probability assignment (bpa), 18
Bayesian Abductive Logic Programs (BALPs), 59
Bayesian logic programs (BLPs), 60
  probabilistic modeling, inference, and learning, 64
Bayesian mixture modeling with Dirichlet processes, 154
Bayesian nonparametric (BNP) methods, 150
Bayesian theory of mind (BToM), 179, 181
Belief–desire inference, 181, 186
Binary decision diagrams (BDDs), 80

C
Candidate activities, 238
Candidate occurrences, 232
Capture-the-Flag domain, 254, 292
Cascading Hidden Markov Model (CHMM), 13
  computing the forward probability in, 15
  schema recognition algorithm, 16
Causal-link constraints, 239
  precision and recall on, 104
Chinese restaurant process, 156–157
Commonsense psychology, 188
Commonsense theory of mind reasoning, 178
Conditional completeness, 241
Conditional soundness, 241
Conjunctive normal form (CNF) formula, 229
Constraints
Context Management Frame infrastructure, 152
Contextual modeling and intent recognition, 349
  dependency parsing and graph representation, 351
  graph construction and complexity, 352
  induced subgraphs and lexical “noise,” 352
  intention-based control, 356
  lexical directed graphs, 350
  local and global intentions, 350
  using language for context, 351
Cutting Plane Inference (CPI), 40, 299

D
DARE (Domain model-based multiAgent REcognition), 228
Data cubing algorithm, 127
Decision-theoretic planning
  Bayesian belief update, 208
  interactive POMDP (I-POMDP) framework, 206
Dempster-Shafer Theory (DST), 18
Digital games
Dirichlet process mixture (DPM) model, 154
Dynamic Bayes Network (DBN), 185
Dynamic Bayesian networks (DBNs), 90
Dynamic Hierarchical Group Model (DHGM), 227
Dynamic play adaptation, 325–326

E
EA Sports’ Madden NFL® football game, 319
Electrocardiogram (ECG), 152

F
Finitely nested I-POMDP, 207
First-order linear programming (FOPL), 40
Folk–psychological theories, 177
Foreground–background segmentation, 346

G
Galvanic skin response (GSR), 152
Game-based learning environments, 290
Gaussian mixture models (GMM), 149
Geib and Goldman PHATT algorithm, 292
Generalized Partial Global Planning (GPGP) protocol, 253
Goal chain,
Goal parameter value generation,
Goal recognition
  adding parameter recognition, 17
  hierarchical plan of,
  and Markov logic networks (MLNs), 300–301
  representation of player behavior, 300
Goal schema generation,

H
Healthcare monitoring
  pervasive sensors for, 152
Hidden Cause (HC) model, 65, 67, 292
Hidden Semi-Markov Models (HSMMs), 91
Hierarchical Dirichlet processes (HDP), 129, 150, 157
Hierarchical goal recognition, 12
  goal schema recognition, 13
Hierarchical Hidden Markov Models (HHMMs), 14, 96
Hierarchical parameter recognition, 21
Hierarchical task network (HTN), 73
Human activity discovery, stream sequence mining for, 123
  activity recognition, 123
  mining activity patterns, 133
  tilted-time window model, 131
Human dynamics and social interaction, 152
Human plan corpora
  general challenges for,
  goal-labeled data,
  plan-labeled data,
  unlabeled data,
Human plan recognition, modeling
  comparison to human judgments, 190
  using Bayesian theory of mind, 177, 181
Humanoid robot experiments, 361
Human–robot interaction (HRI), 343
  activity recognition and intent recognition, 349
  application to intent recognition, 353
  dependency parsing and graph representation, 351
  experiments on physical robots, 356
  graph construction and complexity, 352
  hidden Markov models (HMMs)-based intent recognition, 348
  induced subgraphs and lexical “noise,” 352
  in robotics and computer vision, 345
  intention-based control, 356
  lexical directed graphs, 350
  local and global intentions, 350
  processing camera data, 346
  using language for context, 351
Hybrid adversarial plan-recognition system, 94

I
Incremental value model, 261
Inference-based discourse processing, 33
Input-Output Hidden Markov Models (IOHMM), 293
Instantiated goal recognition, 12
Integer linear programming (ILP) techniques, 33
  based weighted abduction, 36
  cutting plane inference, 40
Intent recognition
  in robotics and computer vision, 345
  intention-based control, 356
Intention-based control, 356, 360
Interaction modeling, 346
Interactive partially observable Markov decision process (I-POMDP) framework, 205–206, 316
  Bayesian belief update, 208
  computational modeling, 214
  learning and decision models, 215
  level 3 recursive reasoning, 211
  solution using value iteration, 209
  weighted fictitious play, 216
Inverse optimal control, 189, 278
IRL. See Inverse optimal control

J

K
Knowledge base model construction (KBMC) procedure, 57, 70
Knowledge-lean approach, 307
Kullback-Leibler divergence, 323

L
Last subgoal prediction (lsp) bpa, 23
Latent activities, learning
  activity recognition systems, 151
  Bayesian mixture modeling with Dirichlet processes, 154
  healthcare monitoring, pervasive sensors for, 152
  hierarchical Dirichlet process (HDP), 157
  human dynamics and social interaction, 152
  pervasive sensor-based activity recognition, 152
Latent Dirichlet allocation (LDA), 149–150
Lexical directed graphs, 350
  dependency parsing and graph representation, 351
  graph construction and complexity, 352
  induced subgraphs and lexical “noise,” 352
  using language for context, 351
Lexical-digraph-based system, 361
Limited role-mapping model, 260

M
Madden NFL® football game, 319
Markov Chain Monte Carlo methods, 154
Markov decision process (MDP), 276
  partially observable MDP, 276
  representing user plan as, 278
Markov logic networks (MLNs)
  abductive model construction, 70
  Pairwise Constraint Model, 65
  plan recognition using manually encoded MLNs, 71
  player behavior, representation of, 300
  probabilistic modeling, inference, and learning, 72
Markov random field. See Markov network (MN)
MARS (MultiAgent plan Recognition System), 228
Maximum a posteriori (MAP) assignment, 60
Mental problem detection, 152
Mining activity patterns, 133
Model-selection methods, 155
Monte Carlo
Multiagent Interactions Knowledgeably Explained (MIKE) system, 316
Multiagent learning algorithms, 313
Multiagent plan recognition (MAPR), 57, 227
  candidate activities, 238
  candidate occurrences, 232
Multiagent STRIPS-based planning, 229
Multiagent team plan recognition, 254
Multilevel Hidden Markov Models,
Multi-User Dungeon (MUD) game,
Mutual information (MI), 324

N
Narrative-centered tutorial planners, 307
Natural language, understanding, 33
Natural tilted-time window, 131
Next state estimator, 331

O
Observation constraints, 239
Offline UCT for learning football plays, 326
Online UCT for multiagent action selection, 330
  successor state estimation, 335
Opponent modeling
  automatic subgroup detection, 323
  dynamic play adaptation, 325–326
  offline UCT for learning football plays, 326
  online UCT for multiagent action selection, 330
  play recognition using support vector machines, 319, 321
  successor state estimation, 335
  compatibility constraints, 47

P
Pacman Capture-the-Flag environment, 263
Pairwise Constraint (PC) model, 65
Parameter recognition, 17
Partially observable Markov decision processes (POMDP), 179, 183, 185, 195, 205
  modeling deep, strategic reasoning by humans using, 210
Passive infrared sensor (PIR), 123
Pervasive sensor
  based activity recognition, 152
  for healthcare monitoring, 152
Physical robots, experiments on, 356
  intention-based control, 360
  lexical-digraph-based system, 361
  similar-looking activities, 359
  surveillance setting, 356
Pioneer 2DX mobile robot, 356
Pioneer robot experiments, 361
Plan, activity, and intent recognition (PAIR), 180–181, 189–190
Plan corpora
  general challenges for,
  goal-labeled data,
  human sources of,
  plan-labeled data,
  unlabeled data,
Plan decomposition path, 95
Plan recognition. See also Multiagent plan recognition (MAPR)
  abductive model construction, 70
  artificial corpus generation,
  Bayesian logic programs (BLPs), 60
  cognitively aligned plan execution, 283
  data for,
  human sources of plan corpora,
  logical abduction, 59, 61
  Markov Logic Networks, 60
  Pairwise Constraint model, 65
  predicted user plan, evaluation of, 283
  proactive assistant agent, 276, 282
  probabilistic modeling, inference, and learning, 64, 72
  representing user plan as MDP, 278
  using manually encoded MLNs, 71
  using statistical–relational models, 57
Play recognition using support vector machines, 319, 321
Player behavior, representation of, 300
Player-adaptive games, 289–290
Position overlap size, 101
Proactive assistant agent, plan recognition for, 276, 282
Probabilistic context-free grammars (PCFGs),
Probabilistic Horn abduction (PHA), 58
Probabilistic plan recognition, for proactive assistant agents, 275
Probabilistic state-dependent grammars (PSDGs),
Problem-solving recognition, 30
Propositional attitudes, 179
Pruning

Q
Quantal-response model, 210, 216

R
Radial basis function (RBF) kernel, 319
Radio frequency identification (RFID), 149
Rao-Blackwellization (RB),
Reality Mining dataset, 153
Real-time strategy (RTS) games, 292
Recognizing Textual Entailment (RTE) task, 47
Reinforcement learning (RL) algorithm, 319
RESC plan-recognition algorithm, 112
“Risk-sensitive” plan repair policies, 339–340
RoboCup simulation league games, 315
RoboCup soccer domain, 318
Role-based ad hoc teamwork. See Ad hoc agents/teammates
Rush Analyzer and Test Environment (RATE) system, 314, 338

S
Search-space generation, 37
  processing camera data, 346
SharedPlans protocol, 253
Shell for TEAMwork (STEAM) protocol, 253
Simplified-English Wikipedia, 352
Smart environments
Smoothing distribution, 187
Stanford Research Institute Problem Solver (STRIPS), 228
  multiagent STRIPS-based planning, 229
Stanford-labeled dependency parser, 351
Statistical relational learning techniques, 292
Statistical–relational learning (SRL), 58
Subgoals,
Support vector machines (SVMs)
  play recognition using, 319
Surveillance setting, 356
Suspicious behavior, detecting, 110
  air-combat environment, 112
  catching a dangerous driver, 112
  leaving unattended articles, 110
Symbolic Behavior Recognition (SBR), 87
  anomalous behavior recognition for, 99
Symbolic plan-recognition system, 94–95

T
  automatic subgroup detection, 323
  dynamic play adaptation, 325–326
Theory-based Bayesian (TBB) framework, 181
3D scene, estimation of, 346
Tilted-time window model, 131
Top-level parameter recognition, 17

U
Uncertain transitions
Unlimited role-mapping model, 259
Utility-based plan-recognition (UPR) system, 87, 96
  decomposition transition, 96
  observation probabilities, 96
  sequential transition, 96

V
Value iteration
  for stochastic policy, 279

W
Weighted abduction
  based on integer linear programming, 33
  for discourse processing, 43
  for recognizing textual entailment, 48
Weighted fictitious play, 216
Weka J.48 classifier, 336
Wu’s weighting formula, 21

Y