What is a game if not a great challenge to the player, who needs to use their character’s abilities to tackle different scenarios? Each game imposes different kinds of obstacles on the player, and the main one in our game is the enemies. Creating challenging and believable enemies can be complex: they need to behave like real characters and be smart enough that they are not trivial to kill, but fallible enough that they are not impossible to beat. We are going to use basic but sufficient AI techniques to make an AI capable of sensing its surroundings and, based on that information, deciding what to do using Finite State Machines (FSMs), along with other techniques. Those decisions will then be executed using intelligent pathfinding.
In this chapter, we will examine the following AI concepts:
By the end of the chapter, you will have a fully functional enemy capable of detecting the player and attacking them, so let’s start by seeing first how to make the sensor systems.
An AI works by first taking in information about its surroundings. Then, that data is analyzed in order to choose an action, and finally, the chosen action is executed. As you can see, we cannot do anything without information, so let’s start with that part.
There are several sources of information our AI can use, such as data about itself (life and bullets) or maybe some game state (winning condition or remaining enemies), which can easily be found with the code we’ve seen so far. One important source of information, however, is also the AI senses. According to the needs of our game, we might need different senses such as sight and hearing, but in our case, sight will be enough, so let’s learn how to code that.
In this section, we will examine the following sensor concepts:
Let’s start by seeing how to create a sensor with the three-filters approach.
The common way to code senses is through a three-filters approach that discards enemies out of sight. The first filter is a distance check, which discards enemies too far away to be seen; the second is an angle check, which keeps only the enemies inside our viewing cone; and the third is a raycast check, which discards enemies occluded by obstacles such as walls.
Before starting, a word of advice: we will be using vector mathematics here, and covering those topics in-depth is outside the scope of this book. If you don’t understand something, feel free to just search online for the code in the screenshots.
Let’s code sensors in the following way:

1. Create an empty GameObject called AI as a child of the Enemy Prefab. You need to first open the Prefab to modify its children (double-click the Prefab). Remember to set the transform of this GameObject to Position 0,1.75,0, Rotation 0,0,0, and Scale 1,1,1 so it will be aligned with the enemy’s eyes. This placement matters for the sight sensors we are about to create; consider that your enemy Prefab might have a different height for its eyes. While we could certainly put all the AI scripts directly in the Enemy Prefab root GameObject, we did this for separation and organization:

Figure 9.1: AI scripts container
2. Create a script called Sight and add it to the AI child object.
3. Create two fields of the float type called distance and angle, and another two of the LayerMask type called obstaclesLayers and objectsLayers. distance will be used as the vision distance, angle will determine the amplitude of the view cone, obstaclesLayers will be used by our obstacle check to determine which objects are considered obstacles, and objectsLayers will be used to determine what types of objects we want the Sight component to detect.

We just want the sight to see enemies; we are not interested in objects such as walls or power-ups. LayerMask is a property type that allows us to select one or more layers to use inside code, so we will be filtering objects by layer. In a moment, you will see how we use it:
Figure 9.2: Fields to parametrize our sight check
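Since the book presents the fields in a screenshot, here is a sketch of how the start of the Sight script might look, using the field names described in the steps above:

```csharp
using UnityEngine;

public class Sight : MonoBehaviour
{
    public float distance;            // vision distance
    public float angle;               // amplitude of the view cone, in degrees
    public LayerMask obstaclesLayers; // layers considered occluders (e.g., Default)
    public LayerMask objectsLayers;   // layers of the objects to detect (e.g., Player)
}
```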
4. In Update, call Physics.OverlapSphere, as in Figure 9.3. This function creates an imaginary sphere at the place specified by the first parameter (in our case, our position) and with the radius specified in the second parameter (the distance property) to detect objects with the layers specified in the third parameter (objectsLayers). It returns an array with all the colliders found inside the sphere; these functions use physics to carry out the check, so the objects must have at least one collider.

This is the method we will use to find all enemies inside our view distance, and we will filter them further in the next steps. Note that the position we pass as the first parameter is not actually the position of the enemy but the position of the AI child object, given that our script is located there. This highlights the importance of the position of the AI object.
Another way of accomplishing the first check is to simply measure the distance from the objects we want to see to the player, or, if looking for other kinds of objects, to a Manager component containing a list of them. However, the method we chose is more versatile and can be used for any kind of object.

Also, you might want to check the Physics.OverlapSphereNonAlloc version of this function, which does the same but is more performant because it doesn’t allocate a new array to return the results.
5. Iterate over the array of colliders returned by Physics.OverlapSphere with a for loop:

Figure 9.3: Getting all GameObjects at a certain distance
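As the screenshot isn’t reproduced here, a sketch of what this first filter might look like in code (using the fields described earlier) is the following:

```csharp
void Update()
{
    // Filter 1: find every collider in the objectsLayers layers
    // within our vision distance.
    Collider[] colliders = Physics.OverlapSphere(transform.position, distance, objectsLayers);

    for (int i = 0; i < colliders.Length; i++)
    {
        Collider collider = colliders[i];
        // The next filters (angle and obstacle checks) go here.
    }
}
```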
6. Start calculating the direction toward the object, which can be done by normalizing the difference between the object’s position and ours, as in Figure 9.4. You might notice we used bounds.center instead of transform.position; this way, we check the direction to the center of the object instead of its pivot. Remember that the player’s pivot is on the ground, and the ray check might collide against the ground before reaching the player:

Figure 9.4: Calculating direction from our position toward the collider
7. Use the Vector3.Angle function to calculate the angle between two directions. In our case, we calculate the angle between the direction toward the enemy and our forward vector:

Figure 9.5: Calculating the angle between two directions
If you want, you can instead use Vector3.Dot, which executes a dot product, a mathematical operation that calculates the length of one vector projected onto another (search online for more info). Vector3.Angle actually uses the dot product internally, but converts its result into an angle, which requires trigonometry and can be time expensive to calculate. Our Vector3.Angle approach is simpler and faster to code, and given that we won’t have many enemies and hence don’t require many sensors, optimizing the sensor using dot products is not necessary now, but consider it for games at a larger scale.
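To illustrate the trade-off, both checks can be sketched as follows; the cosine comparison avoids the inverse trigonometry that Vector3.Angle performs internally. This assumes the collider and angle variables from the previous steps:

```csharp
Vector3 directionToTarget = (collider.bounds.center - transform.position).normalized;

// Readable version: compare angles directly, in degrees.
bool inConeAngle = Vector3.Angle(transform.forward, directionToTarget) < angle;

// Cheaper version: compare the dot product against the cosine of the angle.
// Mathf.Cos expects radians, hence the Deg2Rad conversion.
bool inConeDot = Vector3.Dot(transform.forward, directionToTarget) > Mathf.Cos(angle * Mathf.Deg2Rad);
```

Both expressions are equivalent for angles between 0 and 180 degrees, because the cosine decreases as the angle grows.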
8. Check whether the calculated angle is smaller than the one specified in the angle field. Note that if we set an angle of 90, it will actually cover 180 degrees, because if the Vector3.Angle function returns, as an example, 30, the object could be 30 degrees to the left or to the right. If our angle says 90, objects up to 90 degrees to the left and to the right will pass the check, so it will detect objects in a 180-degree arc.
9. Use the Physics.Linecast function to create an imaginary line between the first and the second parameters (our position and the collider position) to detect objects with the layers specified in the third parameter (the obstacle layers). It returns a boolean indicating whether that line hit something or not.
The idea is to use the line to detect whether there are any obstacles between ourselves and the detected collider; if there is no obstacle, this means that we have a direct line of sight toward the object. Observe how we use the ! (not) operator in Figure 9.6 to check that Physics.Linecast didn’t detect any objects. Again, note that this function depends on the obstacle objects having colliders, which in our case they do (walls, floor, and so on):
Figure 9.6: Using a Linecast to check obstacles between the sensor and the target object
10. If the object passes all three checks, save the detected collider in a field of the Collider type called detectedObject, to keep that information for later usage by the rest of the AI scripts.

Consider using break to stop the for loop that is iterating the colliders, to prevent wasting resources by checking the remaining objects, and setting detectedObject to null before the for loop to clear the result from the previous frame. That way, if we don’t detect anything in this frame, the field keeps the null value, so we notice that there is nothing in the sensor:

Figure 9.7: Full sensor script
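Putting the three filters together, a sketch of what the full Sight script in Figure 9.7 might look like is the following (field names match the ones described in the steps above):

```csharp
using UnityEngine;

public class Sight : MonoBehaviour
{
    public float distance;
    public float angle;
    public LayerMask obstaclesLayers;
    public LayerMask objectsLayers;

    public Collider detectedObject;

    void Update()
    {
        // Clear last frame's result so the field stays null when nothing is seen.
        detectedObject = null;

        // Filter 1: distance.
        Collider[] colliders = Physics.OverlapSphere(transform.position, distance, objectsLayers);
        for (int i = 0; i < colliders.Length; i++)
        {
            Collider collider = colliders[i];

            // Filter 2: view cone angle, measured toward the collider's center.
            Vector3 direction = (collider.bounds.center - transform.position).normalized;
            if (Vector3.Angle(transform.forward, direction) > angle)
                continue;

            // Filter 3: no obstacle between us and the target.
            if (!Physics.Linecast(transform.position, collider.bounds.center, obstaclesLayers))
            {
                detectedObject = collider;
                break; // no need to check the remaining colliders
            }
        }
    }
}
```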
In our case, we are using the sensor just to look for the player, the only object the sensor is in charge of looking for, but if you want to make the sensor more advanced, you can just keep a list of detected objects, placing inside it every object that passes the three tests instead of just the first one. In our case, it’s not necessary given we have only one player in the game.
11. Set objectsLayers to Player, so our sensor will focus its search on objects with that layer, and obstaclesLayers to Default, the layer we used for walls and floors. Remember that the Sight script is in the AI GameObject, which is a child of the Enemy Prefab:

Figure 9.8: Sensor settings

12. Select the AI child object and then play the game to see how the property is set in the Inspector when the enemy sees the player. Also, try putting an obstacle between the two and check that the property says None (null). If you don’t get the expected result, double-check your script, its configuration, and whether the player has the Player layer and the obstacles have the Default layer. You might also need to raise the AI object a little bit to prevent the ray from starting below the ground and hitting it.

Given the size of the script, let’s dedicate an entire section to the Visual Scripting version, given that it also introduces some new Visual Scripting concepts needed here.
Regarding the Visual Scripting version, let’s check it part by part, starting with the Overlap Sphere:
Figure 9.9: Overlap Sphere in Visual Scripting
So far, we just called Overlap Sphere after setting the sensedObject variable to null. Something to consider is that the sensedObject variable in the Variables component in the Inspector doesn’t have a type (a Null type is no type in Visual Scripting). This wouldn’t be possible in C#, where all variables must have a type, and while we could set the sensedObject variable to the proper type (Collider), we will keep the type unset and let it be set later via the script. Even if we set the type now, Visual Scripting tends to forget it if no value is set, and we cannot set a value until we detect something.

Don’t worry about that for the moment; when we set the variable through our script, it will acquire the proper type. Actually, all variables in Visual Scripting can switch types at runtime according to what we set them to, given how the Variables component works. I don’t recommend doing that, though: try to stick with the intended variable type.

We just said that all variables in C# must have a type, but that’s not entirely true. There are ways to create dynamically typed variables, but that’s not a practice I’d recommend unless no other option is available.

Another thing to observe is how we set the sensedObject variable to null at the beginning using the Null node, which effectively represents the null value.
Now, let’s explore the Foreach part:
Figure 9.10: Iterating collections in Visual Scripting
We can see that one of the output pins of Overlap Sphere is a little list, which essentially represents the collider array returned by Overlap Sphere. We connect that pin to the For Each Loop node, which, as you might imagine, iterates over the elements of the provided collection (array, list, dictionary, and so on). The Body pin represents the nodes to execute in each iteration of the loop, and the Item output pin represents the item currently being iterated, in our case, one of the colliders detected by Overlap Sphere. Finally, we save that item in a Flow variable called potentialDetection, Flow variables being the equivalent of local variables in C# functions.
The idea here is that, given the size of the graph and the number of times we will be needing to query the currently iterated item, we don’t want the line connecting the output Item pin to the other nodes to cross the entire graph. Instead, we save that item in the Flow variable to reference it later, essentially naming that value to be referenced later in the graph, which you will see in the next parts of it.
Now let’s explore the Angle check:
Figure 9.11: Angle check in Visual Scripting
Here, you can see a direct translation of what we did in C# to check the angle, so it should be pretty self-explanatory. The only detail is that, given the proximity of the Item output pin to the Get Position node where we query its position, we connected the nodes directly; we will use the potentialDetection Flow variable later.
Now, let’s explore the Linecast part:
Figure 9.12: Linecast check in Visual Scripting
Again, this is essentially the same as what we did in C#. The only thing to highlight is that we used the Flow variable potentialDetection to again get the position of the current item being iterated, instead of connecting the Get Position node all the way back to the Foreach Item output pin.
Now, let’s explore the final part:
Figure 9.13: Setting the sensedObject
Again, this is pretty much self-explanatory: if the Linecast returns false, we set the potentialDetection variable (the currently iterated item) as the sensedObject variable (the one that will be accessed by other scripts later to query which object our AI can see right now). Something to consider here is the usage of the Break Loop node, which is the equivalent of the C# break keyword; essentially, we are stopping the Foreach loop we are currently in.
Now, even if we have our sensor working, sometimes checking whether it’s working or configured properly requires some visual aids we can create using gizmos.
As we create our AI, we will start to detect certain errors in edge cases, usually related to misconfiguration. You may think that the player falls within the sight range of the enemy, but maybe you cannot see that the line of sight is occluded by an object, especially as the enemies move constantly. A good way to debug those scenarios is through editor-only visual aids known as Gizmos, which allow you to visualize otherwise invisible data, such as the sight distance or the Linecasts executed to detect obstacles.

Let’s start seeing how to create Gizmos by drawing a sphere representing the sight distance. Do the following:
1. In the Sight script, create an event function called OnDrawGizmos. This event is only executed in the editor (not in builds) and is the place where Unity asks us to draw Gizmos.
2. Use the Gizmos.DrawWireSphere function, passing our position as the first parameter and the distance as the second parameter, to draw a sphere at our position with the radius of our sight distance. You can check how the size of the Gizmo changes as you change the distance field:

Figure 9.14: Sphere Gizmo

3. You can change the color of the Gizmo by setting Gizmos.color prior to calling the drawing functions:

Figure 9.15: Gizmos drawing code
Now you are drawing Gizmos constantly, and if you have lots of enemies, they can pollute the Scene view with too many Gizmos. In that case, try the OnDrawGizmosSelected event function instead, which draws Gizmos only when the object is selected.
4. To draw the lines of our vision cone, use Gizmos.DrawRay, which receives the origin of the line to draw and the direction of the line, which can be multiplied by a certain value to specify the length of the line, as in the following screenshot:

Figure 9.16: Drawing rotated lines

Here, we use Quaternion.Euler to generate a quaternion based on the angles we want to rotate. A quaternion is a mathematical construct for representing rotations; please search for this term for more info. If you multiply this quaternion by a direction, you get the rotated direction. We take our forward vector and rotate it according to the angle field to generate our cone vision lines. Also, we multiply this direction by the sight distance to draw the line as far as our sight can see; you will see how the line matches the edge of the sphere this way:
Figure 9.17: Vision angle lines
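Since the gizmo code appears only as screenshots, here is a sketch of what the full OnDrawGizmos method might look like after these steps; the color choice is just an example, not something the book prescribes:

```csharp
void OnDrawGizmos()
{
    // Editor-only: visualize the sight radius so we can tune the distance field.
    Gizmos.color = Color.yellow;
    Gizmos.DrawWireSphere(transform.position, distance);

    // Rotate our forward vector by +angle and -angle around the Y axis
    // to get the two edges of the vision cone.
    Vector3 rightEdge = Quaternion.Euler(0, angle, 0) * transform.forward;
    Vector3 leftEdge = Quaternion.Euler(0, -angle, 0) * transform.forward;

    // Multiply by distance so the lines reach the edge of the sphere.
    Gizmos.DrawRay(transform.position, rightEdge * distance);
    Gizmos.DrawRay(transform.position, leftEdge * distance);
}
```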
We can also draw the Linecasts that check the obstacles, but as those depend on the current situation of the game, such as which objects pass the first two checks and their positions, we will use Debug.DrawLine instead, which can be executed in the Update method. This version of DrawLine is designed to be used at runtime only, while the Gizmos we saw also execute in the editor. Let’s try it in the following way:
1. First, let’s handle the case where the Linecast didn’t detect any obstacles, in which we need to draw a line between our sensor and the object. We can call Debug.DrawLine in the if statement that calls Linecast, as in the following screenshot:

Figure 9.18: Drawing a line in Update
Figure 9.19: Line toward the detected Object
2. We also want to draw a line up to the obstacle the Linecast hit, so we can use an overload of the function that provides an out parameter giving us more information about what the line collided with, such as the position of the hit, the normal, and the collided object, as in the following screenshot:

Figure 9.20: Getting information about Linecast

Note that Linecast doesn’t always collide with the nearest obstacle but with the first object it detects along the line, which can vary in order. If you need to detect the nearest obstacle, look for the Physics.Raycast version of the function.
3. We can use the hit information to draw a line up to the collision point in the else clause of the if statement, when the line collides with something:

Figure 9.21: Drawing a line if we have an obstacle
Figure 9.22: Line when an obstacle occludes vision
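A sketch of how the third filter inside Update might incorporate these debug lines, using the RaycastHit overload of Linecast, could look like this:

```csharp
RaycastHit hit;
if (!Physics.Linecast(transform.position, collider.bounds.center, out hit, obstaclesLayers))
{
    // Clear line of sight: draw a line from the sensor to the object.
    Debug.DrawLine(transform.position, collider.bounds.center, Color.green);
    detectedObject = collider;
    break;
}
else
{
    // Occluded: draw the line only up to the obstacle we hit.
    Debug.DrawLine(transform.position, hit.point, Color.red);
}
```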
Regarding the Visual Scripting version, the first part will look like this:
Figure 9.23: Drawing Gizmos with Visual Scripting
Then, the angle lines would look like this:
Figure 9.24: Drawing Angle lines of sight in Visual Scripting
Note that here we show just one of the two lines; the other is essentially the same, but multiplies the angle by -1. Finally, the red lines toward the detected object and obstacles will look like this:
Figure 9.25: Drawing lines towards obstacles or detected objects in Visual Scripting
Note that, to accomplish this last one, we needed to replace the previous Linecast node with the version that returns Raycast Hit info at the end.
With all of that, in this section, we created the sensors system that will give sight to our AI and plenty of info about what to do next. Now that we have our sensors completed, let’s use the information provided by them to make decisions with FSMs.
We explored the concept of Finite State Machines (FSMs) in the past when we used them in the Animator
component. We learned that an FSM is a collection of states, each one representing an action that an object can be executing at a time, and a set of transitions that dictates how the states are switched. This concept is not only used in animation but in a myriad of programming scenarios, and one of the common ones is AI. We can just replace the animations with AI code in the states and we have an AI FSM.
In this section, we will examine the following AI FSM concepts:
Let’s start by creating our FSM skeleton.
To create our own FSM, we need to recap some basic concepts. Remember that an FSM can have a state for each possible action it can execute and that only one can be executed at a time.
In terms of AI, for example, we can be patrolling, attacking, fleeing, and so on. Also, remember that there are transitions between states that determine conditions to be met to change from one state to another, and in terms of AI, this can be the user being near the enemy to start attacking or life being low to start fleeing. In the next figure, you can find a simple reminder example of the two possible states of a door:
Figure 9.26: FSM example
There are several ways to implement FSMs; you could even use the Animator component if you want to, or download an FSM system from the Asset Store. In our case, we are going to take the simplest approach possible, a single script with a set of if statements, which can be basic but is still a good start for understanding the concept. Let’s implement it by doing the following:

1. Create a script called EnemyFSM in the AI child object of the enemy.
2. Create an enum called EnemyState with the GoToBase, AttackBase, ChasePlayer, and AttackPlayer values. These are the states our AI will have.
3. Create a field of the EnemyState type called currentState, which will hold the current state of our enemy:

Figure 9.27: EnemyFSM state definition
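As the book shows this code in a screenshot, here is a sketch of what the state definition might look like:

```csharp
using UnityEngine;

public class EnemyFSM : MonoBehaviour
{
    // One value per action the enemy can be performing.
    public enum EnemyState { GoToBase, AttackBase, ChasePlayer, AttackPlayer }

    public EnemyState currentState; // the state currently being executed
}
```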
4. Create one function per state, and call the corresponding one in Update depending on the current state:

Figure 9.28: If-based FSM
Yes, you can totally use a switch here, but I just prefer the regular if syntax for this example.
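A sketch of the if-based FSM from Figure 9.28 might look like this, with placeholder messages in each state for now:

```csharp
void Update()
{
    // Dispatch to the function of the currently active state.
    if (currentState == EnemyState.GoToBase)
        GoToBase();
    else if (currentState == EnemyState.AttackBase)
        AttackBase();
    else if (currentState == EnemyState.ChasePlayer)
        ChasePlayer();
    else if (currentState == EnemyState.AttackPlayer)
        AttackPlayer();
}

void GoToBase() { print("GoToBase"); }
void AttackBase() { print("AttackBase"); }
void ChasePlayer() { print("ChasePlayer"); }
void AttackPlayer() { print("AttackPlayer"); }
```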
5. Play the game and change the currentState field in the Inspector to verify that it changes which state is active, seeing the messages being printed in the console:

Figure 9.29: State testing
As you can see, it is a pretty simple but totally functional approach. In the future, you could face having to code enemies with many more states, and this approach will start to scale badly. In such a case, you could use any FSM plugin of the Asset Store you prefer to have more powerful and scalable tools, or even consider advanced techniques like Behavior Trees, but that’s outside the scope of this book. Now let’s continue with this FSM, creating its transitions.
If you remember the transitions created in the Animator Controller, those were basically a collection of conditions checked while the state the transition belongs to is active. In our FSM approach, this translates simply into if statements that detect conditions inside the states. Let’s create the transitions between our proposed states as follows:
1. Create a field of the Sight type called sightSensor in our FSM script, and drag the AI GameObject to that field to connect it to the Sight component there. As the FSM component is in the same object as Sight, we could also use GetComponent instead, but in advanced AIs you might have different sensors that detect different objects, so I prefer to prepare the script for that scenario. You should pick the approach you like the most.
2. Inside the GoToBase function, check whether the detected object of the Sight component is not null, meaning that something is inside our line of vision. If our AI is going toward the base but detects an object on the way, it must switch to the ChasePlayer state to pursue the player, so we change the state, as in the following screenshot:

Figure 9.30: Creating transitions
3. We also need to switch to AttackBase when we are near enough to the object that must be damaged to decrease the base life. We can create a field of the Transform type called baseTransform and drag the player’s base life object we created previously onto it so we can check the distance. Remember to add a float field called baseAttackDistance to make that distance configurable:

Figure 9.31: GoToBase transitions
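The GoToBase transitions of Figure 9.31 might be sketched as follows, assuming the sightSensor, baseTransform, and baseAttackDistance fields described above:

```csharp
void GoToBase()
{
    print("GoToBase");

    // Transition: if we see something, start chasing it.
    if (sightSensor.detectedObject != null)
    {
        currentState = EnemyState.ChasePlayer;
    }

    // Transition: if we are close enough to the base, attack it.
    float distanceToBase = Vector3.Distance(transform.position, baseTransform.position);
    if (distanceToBase <= baseAttackDistance)
    {
        currentState = EnemyState.AttackBase;
    }
}
```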
4. In the case of ChasePlayer, we need to check whether the player is out of sight, to switch back to the GoToBase state, or whether we are near enough to the player to start attacking it. We will need another distance field called playerAttackDistance, which determines the distance at which to attack the player; we might want different attack distances for those two targets. Consider an early return in the transition to prevent null reference exceptions when we try to access the position of the sensor’s detected object and there isn’t any:

Figure 9.32: ChasePlayer transitions
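A sketch of the ChasePlayer transitions, including the early return mentioned above, might look like this:

```csharp
void ChasePlayer()
{
    print("ChasePlayer");

    // Early return: if we lost sight of the player, go back to the base.
    if (sightSensor.detectedObject == null)
    {
        currentState = EnemyState.GoToBase;
        return; // prevents a null reference exception below
    }

    float distanceToPlayer = Vector3.Distance(
        transform.position, sightSensor.detectedObject.transform.position);
    if (distanceToPlayer <= playerAttackDistance)
    {
        currentState = EnemyState.AttackPlayer;
    }
}
```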
5. In the case of AttackPlayer, we need to check whether the player is out of sight, to get back to GoToBase, or whether it is far enough away to go back to chasing it. You will notice how we multiplied playerAttackDistance to make the stop-attacking distance a little greater than the start-attacking distance; this prevents switching back and forth rapidly between attacking and chasing when the player sits near that distance. You can make the multiplier configurable instead of hardcoding 1.1:

Figure 9.33: AttackPlayer transitions
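The AttackPlayer transitions, with the slightly larger stop-attacking distance, might be sketched like this:

```csharp
void AttackPlayer()
{
    print("AttackPlayer");

    // If we lost sight of the player, return to the base.
    if (sightSensor.detectedObject == null)
    {
        currentState = EnemyState.GoToBase;
        return;
    }

    float distanceToPlayer = Vector3.Distance(
        transform.position, sightSensor.detectedObject.transform.position);

    // Use a slightly larger distance to stop attacking than to start,
    // so we don't flip between states when the player sits right at the edge.
    if (distanceToPlayer > playerAttackDistance * 1.1f)
    {
        currentState = EnemyState.ChasePlayer;
    }
}
```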
6. In our case, AttackBase won’t have any transitions. Once the enemy is near enough to the base to attack it, it will stay that way, even if the player starts shooting at it; its only objective once there is to destroy the base.
7. Remember that you can use Gizmos to draw the distances:

Figure 9.34: FSM Gizmos

8. Test the script by adding print messages in each state to see them changing in the console. Remember to set the attack distances and the references to the objects. In the screenshot, you can see the settings we use:

Figure 9.35: Enemy FSM settings
A little problem that we will have now is that the spawned enemies won’t have the reference needed to calculate the distance to the player’s base transform. You will notice that if you try to apply the changes on the enemy in the scene to the Prefab (Overrides | Apply All), the Base Transform variable will say None. Remember that Prefabs cannot contain references to objects in the scene, which complicates our work here. One alternative would be to create a BaseManager, a Singleton that holds the reference to the damage position, so our EnemyFSM can access it. Another one could be to make use of functions such as GameObject.Find to find our object.
In this case, we will use the latter. Even though it can be less performant than the Manager version, I want to show you how to use it to expand your Unity toolset. Just set the baseTransform field in Awake to the return of GameObject.Find, using BaseDamagePoint as the first parameter, which will look for an object with that name, as in the following screenshot.
You will see that now our wave-spawned enemies will change states:
Figure 9.36: Searching for an object in the scene by name
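The lookup shown in Figure 9.36 might be sketched as follows; it assumes the object in the scene is named BaseDamagePoint, as described above:

```csharp
void Awake()
{
    // Find the base's damage point by name so spawned Prefab instances
    // get the scene reference that a Prefab cannot store.
    baseTransform = GameObject.Find("BaseDamagePoint").transform;
}
```

Note that GameObject.Find searches the whole scene, which is why the Manager/Singleton approach mentioned earlier scales better.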
Now that our FSM states are coded and execute transitions properly, let’s see how to do the same in Visual Scripting. Feel free to skip the following section if you are only interested in the C# version.
So far, most scripts in Visual Scripting were almost a mirror of the C# version, with some differences in certain nodes. While we could do the same for state machines, instead we are going to use the State Machine system of Visual Scripting. The concept is the same: you have states and can switch between them, but how the states are organized and when the transitions trigger is managed visually, in a similar way to the Animator system. So, let’s see how to use the system by creating our first State Machine Graph and some states. Follow these steps:
1. Create a State Machine asset in a similar way to what we have done so far for regular Visual Scripts. In my case, I called it EnemyFSM:

Figure 9.37: Creating a Visual State Machine

2. Create the states we will need, as shown in the following screenshots:

Figure 9.38: Creating our first Visual State Machine State

Figure 9.39: Visual states
4. Rename the states to match the ones in our FSM (GoToBase, AttackBase, ChasePlayer, and AttackPlayer). If you don’t see the Info panel, click the button with the i in the middle to display it:

Figure 9.40: Renaming a Visual State

Figure 9.41: All needed states

5. Set the starting state. I chose GoToBase, as that’s the one I prefer to be first. If you don’t have that one as the starting one, right-click the node that currently has the green bar in your state machine, select Toggle Start to remove the green bar from it, and then repeat for the node that you want to be the first one (GoToBase in our scenario), adding the green bar to that one.

Something to consider is that you can have more than one start state in Visual Scripting, meaning you can have multiple states running and transitioning at the same time. If possible, I recommend avoiding having more than one state active at a time, to keep things simple.
6. Double-click each state, starting with GoToBase, to enter the edit mode for these states. Connect a String node to the print Message input pin in the OnUpdate event node to print a message saying GoToBase:

Figure 9.42: Our first state machine logic
Figure 9.43: Returning to the State Machine editor mode
With this, we have created the nodes representing the possible states of our AI. In the next section, we will be adding logic for them to something meaningful, but before that, we need to create the transitions between the states and the conditions that need to be met to trigger them by doing the following:
1. Create the baseTransform, baseAttackDistance, and playerAttackDistance variables, as we are going to need them for the transitions. Leave baseTransform without a value, as we will fill it later via code; make baseAttackDistance of the Float type with a value of 2, and playerAttackDistance also a Float with a value of 3. Feel free to change those values if you prefer:

Figure 9.44: Variables needed for our transitions
2. Right-click the GoToBase node, select the Make Transition option, and then click the ChasePlayer node. This will create a transition between the two states:

Figure 9.45: A transition between two states

3. Repeat this process for every transition; your State Machine Graph will need to look like the following screenshot:

Figure 9.46: All the needed transitions
4. Double-click a transition to edit the condition that triggers it (the equivalent of checking a condition with an If node inside the state logic). Remember you have two yellow shapes, one for each transition direction, so check that you are double-clicking the correct one based on the white arrows connecting them.
5. For the GoToBase to ChasePlayer transition, check whether the sensedObject variable is not null. It should look like this:

Figure 9.47: Adding a transition condition
6. Repeat the process for the rest of the transitions, with conditions equivalent to the ones in the C# version:

Figure 9.48: GoToBase to AttackBase transition condition

Figure 9.49: ChasePlayer to GoToBase transition condition

Figure 9.50: ChasePlayer to AttackPlayer transition condition

For the AttackPlayer to ChasePlayer transition, remember to multiply playerAttackDistance by 1.1 (to prevent transition jittering, as we explained in the C# version):

Figure 9.51: AttackPlayer to ChasePlayer transition condition

Figure 9.52: AttackPlayer to GoToBase transition condition
A little detail we need to tackle before moving on is that we still don’t have any value set in the baseTransform variable. The idea is to fill it via code, as we did in the C# version. Something to consider here is that we cannot add an Awake event node to the whole state machine, only to the states.
In this scenario, we can use the OnEnterState event, an event node exclusive to state machines. It executes as soon as the state becomes active, which is useful for state initialization. We can add the logic that initializes the baseTransform variable to the OnEnterState event node of the GoToBase state, given that it is the first state we execute.
This way, GoToBase logic will look as in Figure 9.53. Remember to double-click the state node to edit it:
Figure 9.53: GoToBase initialization logic
Notice how, here, we set the result of the Find node into the variable only on the Null pin of the Null Check node. What Null Check does is check whether our baseTransform variable is set, going through the Not Null pin if it is and through the Null pin if it isn’t. This way, we avoid executing GameObject.Find every time we enter the GoToBase state, running it only the first time, while the variable is still null. Also, note that in this case, the initialization runs not when the object initializes, but the first time GoToBase becomes the current state. If, in any case, that results in unexpected behavior, other options could be to create a new initial state that initializes everything and then transitions to the rest of the states, or to use a classic Visual Script graph that initializes those variables in the On Start event node.
With all this, we learned how to create a decision-making system for our AI through FSMs. It will make decisions based on the info gathered via sensors and other systems. Now that our FSM states are coded and transition properly, let’s make them do something.
Now we need to complete the last step—make the FSM do something interesting. Here, we can do a lot of things such as shoot the base or the player and move the enemy toward its target (the base or the player). We will be handling movement with the Unity Pathfinding system called NavMesh
, a tool that allows our AI to calculate and traverse paths between two points while avoiding obstacles, which needs some preparation to work properly.
In this section, we will examine the following FSM action concepts:
Let’s start by preparing our scene for movement with Pathfinding.
Pathfinding algorithms rely on simplified versions of the scene; analyzing the full geometry of a complex scene is almost impossible to do in real time. There are several ways to represent the Pathfinding information extracted from a scene, such as Graphs and NavMesh geometries. Unity uses the latter: a simplified mesh, similar to a 3D model, that spans all the areas Unity determines are walkable. In the next screenshot, you can find an example of a NavMesh generated in a scene, that is, the light blue geometry:
Figure 9.54: NavMesh of walkable areas in the scene
Generating a NavMesh can take from seconds to minutes depending on the size of the scene. That's why Unity's Pathfinding system calculates the NavMesh once in the editor, so when we distribute our game, the user will use the pre-generated NavMesh. Just like Lightmapping, the NavMesh is baked into a file for later usage, and, like Lightmapping, the main caveat is that the NavMesh cannot change at runtime. If you destroy or move a floor tile, the AI will still walk over that area; the NavMesh doesn't notice the floor isn't there anymore, so you cannot move or modify those objects in any way. Luckily, in our case, the scene won't suffer any modification during runtime, but note that there are components such as NavMeshObstacle that can help us in those scenarios.
To generate a NavMesh for our scene, do the following:

1. Select the objects the enemies will walk over, such as the floor, and check the Static checkbox in the Inspector. If you want the objects to affect only NavMesh generation, you can click the arrow at the left of the static check and select Navigation Static only. Try to limit Navigation Static GameObjects to only the ones that the enemies will actually traverse to increase NavMesh generation speed. Making the terrain navigable, in our case, will increase the generation time a lot, and we will never play in that area.
2. Open the NavMesh panel in Window | AI | Navigation.
3. Click the Bake button to generate the NavMesh:

Figure 9.55: Generating a NavMesh
And that's pretty much everything you need to do. Of course, there are lots of settings you can fiddle around with, such as Max Slope, which indicates the maximum angle of slopes the AI will be able to climb, or Step Height, which determines whether the AI can climb stairs, connecting the floors between the steps in the NavMesh, but as we have a plain and simple scene, the default settings will suffice.

Now, let's make our AI move around the NavMesh.
To make an AI object move with the NavMesh, Unity provides the NavMeshAgent component, which will make our AI stick to the NavMesh, preventing the object from going outside it. It will not only calculate the path to a specified destination automatically but also move the object along the path using Steering behavior algorithms that mimic the way a human would move, slowing down on corners and turning with interpolations instead of instantaneously. Also, this component is capable of evading other NavMeshAgent GameObjects running in the scene, preventing all of the enemies from collapsing into the same position.
Let’s use this powerful component by doing the following:
1. Open the Enemy Prefab and add the NavMeshAgent component to it. Add it to the root object, the one called Enemy, not the AI child; we want the whole object to move. You will see a cylinder around the object representing the area the object will occupy in the NavMesh. Note that this isn't a collider, so it won't be used for physical collisions:

Figure 9.56: The NavMeshAgent component
2. Remove the ForwardMovement component; from now on, we will drive the movement of our enemy with NavMeshAgent.
3. In the Awake event function of the EnemyFSM script, use the GetComponentInParent function to cache the reference to NavMeshAgent. This will work similarly to GetComponent; it will look for the component in our GameObject, but if the component is not there, this version will look for it in all the parents. Remember to add the using UnityEngine.AI line to use the NavMeshAgent class in this script:

Figure 9.57: Caching a parent component reference
As you can imagine, there is also a GetComponentInChildren method, which searches for the component in the GameObject first and then in all its children if necessary.
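A minimal sketch of the caching step above; the field name agent is an assumption for illustration (the screenshot may use a different name):

```csharp
using UnityEngine;
using UnityEngine.AI; // needed for NavMeshAgent

public class EnemyFSM : MonoBehaviour
{
    NavMeshAgent agent; // illustrative field name

    void Awake()
    {
        // Searches this GameObject first, then walks up the parents;
        // useful here because this script lives on the AI child object
        // while the NavMeshAgent is on the Enemy root.
        agent = GetComponentInParent<NavMeshAgent>();
    }
}
```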
4. In the GoToBase state function, call the SetDestination function of the NavMeshAgent reference, passing the position of the base object as the target:

Figure 9.58: Setting a destination for our AI
5. In the AttackBase state, we need the NavMeshAgent to stop, which we can do by setting the isStopped field of the agent to true.
You might want to tweak the base attack distance to make the enemy stop a little bit closer or further away:
Figure 9.59: Stopping agent movement
6. Do the same for the ChasePlayer and AttackPlayer states. In ChasePlayer, we can set the destination of the agent to the player's position, and in AttackPlayer, we can stop the movement. In this scenario, AttackPlayer can go back again to GoToBase or ChasePlayer, so you need to set the isStopped agent field to false in those states or before doing the transition. We will pick the former, as that version will cover other states that also stop the agent without extra code. We will start with the GoToBase state:

Figure 9.60: Reactivating the agent
Figure 9.61: Reactivating the agent and chasing the player
And finally, AttackPlayer:

Figure 9.62: Stopping the movement
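Putting steps 4 to 6 together, the movement side of the states could look roughly like the following in C#. This is a sketch, not the book's exact code: the agent field is assumed to be cached in Awake, and baseTransform and player stand in for however you reference the base and the sensed player:

```csharp
// Movement logic for the FSM states, assuming an "agent" field cached
// in Awake and baseTransform/player Transform references set elsewhere.
void GoToBase()
{
    agent.isStopped = false;               // reactivate in case we were attacking
    agent.SetDestination(baseTransform.position);
}

void ChasePlayer()
{
    agent.isStopped = false;
    agent.SetDestination(player.position); // follow the player around
}

void AttackBase()
{
    agent.isStopped = true;                // stand still while shooting
}

void AttackPlayer()
{
    agent.isStopped = true;
}
```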
7. You can tweak the properties of the NavMeshAgent, such as its Speed, to control how fast the enemy will move. Also, remember to apply the changes to the Prefab for the spawned enemies to be affected.

In the Visual Scripting version, GoToBase will look like the following screenshot. Notice how we call Set Destination both when the variable was null and when it wasn't (the Not Null pin of Null Check). Note that all of this happens in the On Enter State event, so we just need to do it once. We do it every frame in the C# version for simplicity, but that's actually not necessary, so here we take advantage of the On Enter State event. We can emulate that behavior in the C# version if we want, executing these actions at the moment we change the state (inside the if statements that check the transition conditions) instead of in the Update function. Finally, notice how we needed to use the Get Parent node in order to access the NavMeshAgent component in the enemy's root object? This is needed because we are currently in the AI child object instead.

Figure 9.64: Making our agent stop
Figure 9.65: ChasePlayer logic
Figure 9.66: AttackPlayer logic
Now that we have movement in our enemy, let’s finish the final details of our AI.
We have two things missing here: the enemy is not shooting any bullets, and it doesn’t have animations. Let’s start with fixing the shooting by doing the following:
1. Add a bulletPrefab field of the GameObject type to our EnemyFSM script, and a float field called fireRate.
2. Create a function called Shoot and call it inside AttackBase and AttackPlayer:

Figure 9.67: Shooting function calls
3. In the Shoot function, put code similar to that used in the PlayerShooting script to shoot bullets at a specific fire rate, as in Figure 9.68. Remember to set the Enemy layer in your Enemy Prefab, if you didn't before, to prevent the bullet from damaging the enemy itself. You might also want to raise the AI GameObject position a little to shoot bullets from somewhere other than the ground or, better, add a shootPoint transform field and create an empty object in the enemy to use as a spawn position. If you do that, consider leaving the empty object unrotated so the enemy's rotation affects the direction of the bullet properly:

Figure 9.68: Shoot function code
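One possible shape for that Shoot function, assuming the bulletPrefab, fireRate, and shootPoint fields mentioned above; the actual code in the screenshot may differ:

```csharp
// Fires at most once per cooldown period, spawning the bullet at
// shootPoint with the enemy's current rotation. Here, fireRate is
// treated as the number of seconds between shots (an assumption).
float lastShootTime;

void Shoot()
{
    if (Time.time - lastShootTime < fireRate)
        return; // still cooling down since the last shot
    lastShootTime = Time.time;
    Instantiate(bulletPrefab, shootPoint.position, shootPoint.rotation);
}
```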
Here, you can find some duplicated shooting behavior between PlayerShooting and EnemyFSM. You can fix that by creating a Weapon behavior with a function called Shoot that instantiates bullets and takes the fire rate into account, and then calling it inside both components to reuse it.
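A sketch of that refactor; the Weapon class and all of its members are illustrative, not code from the book:

```csharp
using UnityEngine;

// Hypothetical shared Weapon behavior: both PlayerShooting and
// EnemyFSM could hold a reference to this component and call Shoot().
public class Weapon : MonoBehaviour
{
    public GameObject bulletPrefab;
    public float fireRate; // assumed to mean seconds between shots

    float lastShootTime;

    public void Shoot(Transform spawnPoint)
    {
        if (Time.time - lastShootTime < fireRate)
            return; // still cooling down
        lastShootTime = Time.time;
        Instantiate(bulletPrefab, spawnPoint.position, spawnPoint.rotation);
    }
}
```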
4. Create a LookTo function that receives the target position to look at, and call it in AttackPlayer and AttackBase, passing the target to shoot at:

Figure 9.69: LookTo function calls
5. Implement the LookTo function by calculating the direction from our parent to the target position. We access our parent with transform.parent because, remember, we are the child AI object; the object that will move is our parent. Then, we set the Y component of the direction to 0 to prevent the direction from pointing upward or downward; we don't want our enemy to rotate vertically. Finally, we set the forward vector of our parent to that direction so it will face the target position immediately. You can replace that with interpolation through quaternions to have a smoother rotation if you want to, but let's keep things as simple as possible for now:

Figure 9.70: Looking toward a target
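That description translates to something like the following; this is a sketch of the screenshot's logic rather than a copy of it:

```csharp
// Rotate the enemy's root object to face targetPosition horizontally.
void LookTo(Vector3 targetPosition)
{
    // Direction from our parent (the moving root object) to the target.
    Vector3 direction = targetPosition - transform.parent.position;
    direction.y = 0; // ignore height differences; no vertical rotation
    // Snap the parent's forward vector to the flattened direction.
    transform.parent.forward = direction.normalized;
}
```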
Figure 9.71: AttackBase state
In this state, we have some things to highlight. First, we are using the Look At node in the On Enter State event, after the Set Stopped node. As you might imagine, this does the same as we did with math in C#: we specify a target to look at (our base transform), and then we specify that the World Up parameter is a vector pointing upward, (0, 1, 0). This will make our object look at the base while keeping its up vector pointing to the sky, meaning our object will not look at the floor if the target is lower than it. We can use this exact function in C# if we want to (transform.LookAt); the idea was just to show you all the options. Also, note that we execute Look At only when the state becomes active; as the base doesn't move, we don't need to constantly update our orientation.
The second thing to highlight is that we used coroutines to shoot, the same idea we used in the Enemy Spawner to constantly spawn enemies. Essentially, we make an infinite loop between Wait For Seconds and Instantiate. We took this approach here because it is convenient, taking fewer nodes in Visual Scripting.
Remember to select the OnEnterState node and check the Coroutine checkbox as we did before. Also, we need a new Float type variable called fireRate in the Enemy's AI child object:
Figure 9.72: Coroutines
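The C# equivalent of that Visual Scripting loop is a simple coroutine; the field names mirror the graph and are illustrative:

```csharp
using System.Collections;
using UnityEngine;

// Infinite shoot loop, equivalent to the Wait For Seconds + Instantiate
// cycle in the graph. Start it when the attack state becomes active,
// for example with StartCoroutine(ShootLoop()), and stop it with
// StopCoroutine (or StopAllCoroutines) when leaving the state.
IEnumerator ShootLoop()
{
    while (true)
    {
        // fireRate is assumed to be the seconds between shots here,
        // matching the Wait For Seconds node in the graph.
        yield return new WaitForSeconds(fireRate);
        Instantiate(bulletPrefab, shootPoint.position, shootPoint.rotation);
    }
}
```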
Then, AttackPlayer will look like this:
Figure 9.73: AttackPlayer state
Essentially, it is the same as AttackBase, but it looks toward the sensedObject instead of the player's base, and we also made the Look At node part of the infinite loop to correct the enemy's heading before each shot, so it keeps targeting the moving player.
With that, we have finished all of the AI behaviors. Of course, these scripts and graphs are big enough to deserve some rework and splitting in the future, but with this, we have prototyped our AI; we can test it until we are happy with it, and then we can improve the code.
I'm pretty sure this AI is not what you imagined; you are not creating Skynet here, but we have accomplished a simple yet interesting AI to challenge our players, one we can iterate on and tweak to tailor it to our game's expected behavior. We saw how to gather information about our surroundings through sensors, make decisions on which action to execute using FSMs, and use different Unity systems, such as Pathfinding, to make the AI execute those actions. We used those systems to build a State Machine capable of detecting the player, running toward them, and attacking them, and, if the player is not there, going to the base to accomplish its task of destroying it.
In the next chapter, we are going to start Part 3 of this book, where we will learn about different Unity systems to improve the graphics and audio of our game, starting by seeing how we can create materials to modify the look of our objects and create Shaders with Shader Graph.