behavior is a series of events that may or may not be externally visible to the player but nonetheless influences the agent’s intention. It makes obvious sense to make as much of this as possible visible to the player, so agents should provide visual cues to what is transpiring in their minds, ranging from subtle facial movements to more exaggerated actions like head looks or body shifts.
19.4 Modeling the Environment
How we view an object in the world serves as an important basis for how that object is acted upon, because our perception is only one of many possible perspectives. In the ECMMS, we not only wanted to model an asset, but we also wanted to provide the data in a form such that different agents have the ability to understand an object relative to themselves. When we look at a chair, we typically understand what that chair could do for us, but if I’m a dog and I look at that chair, then I have a whole other set of affordances available to me! It’s just a matter of perspective, and that perspective guides our eventual behavioral responses.
While authoring assets for the ECMMS, the only strict requirement is that collidable objects have a collision layer associated with them. A collision layer is a device that controls whether overlapping objects generate collision callbacks for the agent. This collision layer is assigned to the object as a whole, or it can be done on a per-surface basis. Assigning multiple collision layers to an object makes sense if an agent can interact with a specific surface on that object differently than it can with the object as a whole. Referring to Figure 19.5, we see a rocky outcropping that has the top surface tagged as jumpable, while the rest of the rock is tagged as standard collidable. What the jumpable tag signifies is that when this surface is overlapping a collision sensor, it affords the possible interaction of jumping to the agent.
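To make this concrete, the following is a minimal sketch of how a jumpable surface might register a candidate interaction when it overlaps a collision sensor. All type and function names here are hypothetical, not the actual ECMMS API.

#include <vector>

// Hypothetical tags; the shipping layer names may differ.
enum class CollisionLayer { Collidable, Jumpable };
enum class Interaction { Jump };

struct Surface
{
    CollisionLayer layer;
};

struct Agent
{
    std::vector<Interaction> candidateInteractions;
};

// Called when a surface overlaps one of the agent's collision sensors.
// A surface in the jumpable layer affords a possible jump to the agent.
void OnSensorOverlap(Agent& agent, const Surface& surface)
{
    if (surface.layer == CollisionLayer::Jumpable)
    {
        agent.candidateInteractions.push_back(Interaction::Jump);
    }
}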
Figure 19.5. A rock showing the various tags that can be applied to an asset. The top surface is in the jumpable collision layer, while the rest of the rock is in the standard collidable layer.

By applying tags to specific surfaces, we are essentially assigning affordances for the potential types of interaction that a surface allows for. To handle the
different types of interpretations that a surface may have for differing types of agents, we use an affordance mapper. The affordance mapper determines the type of interaction that the object allows a given agent. This allows the modelers to label the surfaces of an object, and the animators then populate the various interactions for the different agent types. For example, a surface may be jumpable for a small animal but only serve as a stepping surface for a very large animal.
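A sketch of what such a mapper might look like follows; the enumerations, class name, and registration flow are assumptions for illustration, not the ECMMS’s actual interface.

#include <map>
#include <utility>

// Hypothetical agent categories, surface tags, and interactions.
enum class AgentType { SmallAnimal, LargeAnimal };
enum class SurfaceTag { Collidable, Jumpable };
enum class InteractionType { None, JumpOnto, StepOnto };

// Resolves what a tagged surface means to a particular kind of agent.
// Modelers tag surfaces once; animators then populate the
// per-agent-type interactions.
class AffordanceMapper
{
public:
    void Register(SurfaceTag tag, AgentType type, InteractionType interaction)
    {
        m_map[{tag, type}] = interaction;
    }

    InteractionType Lookup(SurfaceTag tag, AgentType type) const
    {
        auto it = m_map.find({tag, type});
        return (it != m_map.end()) ? it->second : InteractionType::None;
    }

private:
    std::map<std::pair<SurfaceTag, AgentType>, InteractionType> m_map;
};

// Example from the text: the same surface is jumpable for a small animal
// but only a stepping surface for a very large one.
// mapper.Register(SurfaceTag::Jumpable, AgentType::SmallAnimal, InteractionType::JumpOnto);
// mapper.Register(SurfaceTag::Jumpable, AgentType::LargeAnimal, InteractionType::StepOnto);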
19.5 The ECMMS Architecture
Figure 19.6 shows the various components of the framework that forms the ECMMS. The primary system that facilitates access to the core navigation components (e.g., spatial representation and pathfinding) is the navigation manager. When an agent needs to move through the environment, it calls through the animal planner into the navigation manager. The navigation manager then generates a coarse route [Ramsey 2009a]. The pathfinder uses a modified A* algorithm [Hart et al. 1968]. As noted by Ramsey [2009a], the layout of the navigable spatial representation for the environment may be nonuniform, and the agent’s locomotion model may also need to factor in the agent’s facing and velocity in order to generate a realistic motion path. The agent utilizes a planner to handle coarse routing through the environment while factoring in the behavioral constraints of nearby agents. The behavioral controller handles the interaction of the agent with any predefined contexts, as well as implements the behavioral response algorithm (see Section 19.9). The behavioral controller also interfaces with the ECMMS manager. The ECMMS manager deals with the creation of the query space, handling of collision callbacks, generation of spatial semantics (see Section 19.7), and animation validation (see Section 19.8).
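The call chain described above might be organized roughly as follows. Every name here is illustrative, and the stub types stand in for the real engine’s math and routing structures.

#include <vector>

// Stub types for illustration; real ECMMS types will differ.
struct Vector3 { float x, y, z; };
using Route = std::vector<Vector3>;

struct AgentState
{
    Vector3 position;
    Vector3 facing;    // the locomotion model may constrain turning
    Vector3 velocity;  // and speed when shaping the path
};

// Fronts the spatial representation and the modified A* pathfinder,
// returning a coarse route [Ramsey 2009a].
class NavigationManager
{
public:
    Route GenerateCoarseRoute(const AgentState& agent, const Vector3& goal)
    {
        Route route;
        // ... run modified A* over the (possibly nonuniform) navigable
        // representation, factoring in agent facing and velocity ...
        route.push_back(goal);
        return route;
    }
};

// The agent calls through its planner into the navigation manager.
class AnimalPlanner
{
public:
    explicit AnimalPlanner(NavigationManager* nav) : m_nav(nav) {}

    Route PlanRoute(const AgentState& agent, const Vector3& goal)
    {
        return m_nav->GenerateCoarseRoute(agent, goal);
    }

private:
    NavigationManager* m_nav;
};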
19.6 Modeling an ECMMS-Enabled Agent
Creating an agent for the ECMMS requires three different representations: a
coarse collision representation, a ragdoll representation, and the collision sensor
layout. The coarse collision representation for an agent can be anything from a
simple primitive to a convex hull that is a rough approximation of the agent’s
physique. The coarse representation is used during the physics simulation step to
generate the contact points for the agent’s position. This is just the agent’s rough
position in the world, as we can still perform inverse kinematics to fix up an
agent’s foot positions. The ragdoll representation is a more accurate depiction of
an agent’s physical makeup. Typically, a ragdoll is created by associating rigid bodies with the bones of an agent’s skeleton. Then, when the agent animates, the rigid bodies are keyframed to their respective bone positions. This in itself allows for nice granular interactions with dynamic game objects. The collision sensors placed around an agent can and should differ based upon aspects such as the agent’s size, turning radius, and speed. The layout of the query space needs to be done in conjunction with knowledge of the corresponding animation information. If an agent is intended to jump long distances, then the query space generally needs to be built such that the collision sensors receive callbacks from overlapping geometry in time to determine not only the validity of the action but also the intended behavioral result.

Figure 19.6. The ECMMS layout: the navigation manager (navigation planner, pathfinder, navigation utilities, and the navigable representation), the ECMMS manager, the behavior context manager, and the agent (agent controller, behavioral controller, agent brain, and agent planner).
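A plausible data layout for these three representations is sketched below; the field and type names are assumptions made for illustration, not the actual ECMMS data structures.

#include <vector>

struct Vector3 { float x, y, z; };

// Simple primitive or convex hull approximating the agent's physique;
// used during the physics step to generate contact points.
struct CoarseCollision
{
    std::vector<Vector3> hullVertices;
};

// Rigid body keyframed to a skeleton bone while the agent animates,
// allowing granular interactions with dynamic game objects.
struct RagdollBody
{
    int   boneIndex;
    float mass;
};

// Overlap volume placed around the agent; its placement and size are
// chosen from the agent's size, turning radius, speed, and animations.
struct CollisionSensor
{
    Vector3 localOffset;   // position relative to the agent
    Vector3 halfExtents;
};

struct ECMMSAgent
{
    CoarseCollision              coarse;   // rough world position
    std::vector<RagdollBody>     ragdoll;  // accurate physical makeup
    std::vector<CollisionSensor> sensors;  // the agent's query space
};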
19.7 Generating a Behavior Model with the ECMMS
A full behavioral model is beyond the scope of this chapter, but in this section,
we cover the underlying components and processes that the ECMMS provides so
that you can build your own behavioral system. An agent’s observed behavior provides significant insight into its emotional state, attitudes, and attention, and as a result, a considerable amount of perceived behavior originates from how an agent moves through the world relative not only to the objects but also to the available space within that environment. How an agent makes use of space has been covered elsewhere [Ramsey 2009a, Ramsey 2009b]; here we focus on how the ECMMS provides the underpinnings for a behavioral model that embraces egocentric spatial awareness.
As we’ve mentioned before, the ECMMS is a system that allows an agent to gather information about its environment as it moves through it. What an agent needs is the ability to classify this information and generate what we call spatial semantics; spatial semantics allow the higher-order systems to make both short-term and long-term decisions based upon the spatial orientation of an agent relative to the geometry in the scene. Spatial semantics mark an important departure from the typical approach to agent classification in games, which relies upon methods of perhaps too fine a granularity to drive the immediate action of an agent. To that end, we want to build the basis for informing behavioral decisions from one aspect of the situation at hand: the relative spatial orientation of an agent with respect to the elements in its environment.
Figure 19.7 shows an example of an agent that is next to a wall, as well as what its query space looks like. In general, we came up with a series of fundamental categories that allowed us to generate a meaning from the raw collision sensor information. The syntax we allowed consisted of SOLeft, SORight, SOBehind, SOInFront, SOAbove, SOBelow, SONear, and SOFar. If something was impeding movement in a particular direction, we said that direction was blocked.

Figure 19.7. (a) A bird next to a wall. (b) The bird’s query space.
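One plausible encoding of this syntax is a set of bit flags, so that a single sensor result can carry several relations at once; the identifier names come from the text, but the bit layout is an assumption.

#include <cstdint>

// Spatial-orientation syntax encoded as bit flags, so one sensor result
// can combine relations (e.g., SOLeft | SONear with the blocked bit set).
enum SpatialOrientation : std::uint32_t
{
    SOLeft    = 1 << 0,
    SORight   = 1 << 1,
    SOBehind  = 1 << 2,
    SOInFront = 1 << 3,
    SOAbove   = 1 << 4,
    SOBelow   = 1 << 5,
    SONear    = 1 << 6,
    SOFar     = 1 << 7,
    SOBlocked = 1 << 8   // movement in that direction is impeded
};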
We were also able to quantify the coverage of the collision sensors relative to how much of a given object was within the query space of an agent. Quantification fell into three categories: QSmall, QMedium, or QLarge. A couple of ancillary benefits are that the quantification process allows a more advanced artificial intelligence system not only to generate quantities but also to generate proportions relative to past results, as well as adjectives for the quantifications, such as QNarrow or QLong.
Since the ECMMS focuses on the agent, the quantification process needs to be relative to the agent’s size. To handle the quantification mapping, a simple function scales the quantified object relative to the agent’s size. This allows us to objectively quantify a result and then allow the agent to requantify the object based upon its own perception, which is driven mainly by its size.
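A minimal sketch of such a scaling function, assuming illustrative threshold values that would in practice be tuned per game:

// Quantification relative to the agent's size; thresholds are
// illustrative and would be tuned per game.
enum class Quantification { QSmall, QMedium, QLarge };

Quantification Quantify(float objectExtent, float agentSize)
{
    // Scale the object's extent by the agent's size so that the same
    // object can be QLarge to a small agent and QSmall to a huge one.
    const float relative = objectExtent / agentSize;

    if (relative < 0.5f) return Quantification::QSmall;
    if (relative < 2.0f) return Quantification::QMedium;
    return Quantification::QLarge;
}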
Now that we have the ability to quantify objects that overlap with the collision sensors and the ability to generate a syntax for the specific situation at hand, we need the ability to generate a resultant semantic. Figure 19.8 shows an example of a bird between two walls. The interpretation that would be made by the agent is that there is a blocking object behind it as well as to the left and right of it. The resultant spatial semantic for the bird would be one of being cornered, so the avenue for available actions would comprise either some forward animation or flight. One of the uses of a spatial semantic is for processing what action, specifically what animation, to play on the agent. For example, by looking at the spatial semantic generated from a situation, we can favor animations that exhibit tight turns in places that are geometrically restrictive for the agent.
Figure 19.8. (a) A bird in a corner. (b) The bird’s generated syntax: Forward = Unblocked, Backward = Blocked, Left = Blocked, Right = Blocked. (c) The semantic of the spatial situation that the bird is in: move forward.
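The cornered case from Figure 19.8 could be derived from the blocked flags along these lines; the semantic and structure names are illustrative only.

// Deriving a spatial semantic from per-direction blocked flags, as in
// Figure 19.8: blocked behind, left, and right yields a cornered
// semantic, leaving forward motion (or flight) as the available avenue.
enum class SpatialSemantic { Open, Cornered };

struct BlockedDirections
{
    bool forward  = false;
    bool backward = false;
    bool left     = false;
    bool right    = false;
};

SpatialSemantic ClassifySituation(const BlockedDirections& b)
{
    if (!b.forward && b.backward && b.left && b.right)
        return SpatialSemantic::Cornered;
    return SpatialSemantic::Open;
}

For the bird in Figure 19.8, ClassifySituation would return Cornered, and the animation selection layer could then restrict its candidates to forward locomotion or takeoff clips.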