At this point, our game is mostly a static Scene, Shader and particle animations aside. In the next chapter, when we add scripting to our game, everything will start to move according to the behavior we want. But sometimes, we need to move objects in a predetermined way, such as in cutscenes, or with specific character animations, such as jumping, running, and so on. The idea of this chapter is to go over several Unity animation systems to create all the object movements we can get without scripting.
In this chapter, we will examine the following animation concepts:
By the end of this chapter, you will be able to create cutscenes to tell the story of your game or highlight specific areas of your level, as well as create dynamic cameras that are capable of giving an accurate view of your game, regardless of the situation.
So far, we have used what are called static meshes, which are solid three-dimensional models that are not supposed to bend or animate in any way (aside from moving as rigid parts, like the doors of a car). We also have another kind of mesh, called skinned meshes, which are meshes that can bend based on a skeleton, so they can emulate the muscle movements of the human body. We are going to explore how to integrate animated humanoid characters into our project to create the enemy and player movements.
In this section, we will examine the following skeletal mesh concepts:
We are going to explore the concept of skinning and how it allows you to animate characters. Then, we are going to bring animated meshes into our project to finally apply animations to them. Let's start by discussing how to bring skeletal animations into our project.
In order to get an animated mesh, we need four pieces, starting with the mesh itself—the model that will be animated—which is created the same way as any other mesh. Then, we need the skeleton, a set of bones that will match the desired mesh topology, such as the arms, fingers, feet, and so on. In Figure 12.1, you can see an example of a set of bones aligned with our target mesh. You will notice that these kinds of meshes are usually modeled in the T pose, which facilitates the animation process:
Once the artist has created the model and its bones, the next step is skinning, which is the act of associating every vertex of the model with one or more bones. This way, when you move a bone, the associated vertices move with it. It is done like this because animating a reduced number of bones is much easier than animating every single vertex of the model. In the next screenshot, you can see the triangles of a mesh painted according to the color of the bone that affects them, as a way to visualize bone influence. You will notice blending between colors, meaning that those vertices are affected by several bones to different degrees, which allows the vertices near a joint to bend nicely. Also, the screenshot illustrates a two-dimensional mesh used for two-dimensional games, but the concept is the same:
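To make the blending idea concrete, the following C# sketch shows the math behind linear blend skinning, the most common skinning model. This is illustrative only (real skinning runs on the GPU via the Skinned Mesh Renderer, and the `BoneInfluence` struct here is a hypothetical name for teaching purposes):

```csharp
using UnityEngine;

// Illustrative sketch of linear blend skinning; not how you would render in Unity.
// Each vertex stores a few bone influences, with weights that sum to 1.
struct BoneInfluence
{
    public int boneIndex;  // which bone affects this vertex
    public float weight;   // how strongly it affects it (0..1)
}

static class SkinningSketch
{
    // skinningMatrices[i] combines bone i's current transform with its bind pose.
    public static Vector3 SkinVertex(
        Vector3 bindPoseVertex,
        BoneInfluence[] influences,
        Matrix4x4[] skinningMatrices)
    {
        Vector3 result = Vector3.zero;
        foreach (var inf in influences)
        {
            // Each bone moves the vertex as if it fully owned it...
            Vector3 moved = skinningMatrices[inf.boneIndex]
                .MultiplyPoint3x4(bindPoseVertex);
            // ...and the results are blended by the painted weights.
            result += moved * inf.weight;
        }
        // Vertices near a joint have weights split between bones, so they bend smoothly.
        return result;
    }
}
```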
Finally, the last piece we need is the actual animation, which simply consists of a blending of different poses of the mesh. The artist creates keyframes in an animation, determining which pose the model needs to have at different moments, and then the animation system simply interpolates between them. Basically, the artist animates the bones, and the skinning system applies this animation to the whole mesh. You can have one or several animations, which you will later switch between to match the character's motion (such as idle, walking, falling, and so on).
In order to get these pieces, we need the proper assets containing them. The usual format in this scenario is Filmbox (FBX), the same format we have used so far to import 3D models. This format can contain every piece we need—the model, the skeleton with the skinning, and the animations—but usually, we will split the parts into several files to reuse the pieces.
Imagine a city simulator game where we have several citizen meshes with different appearances, all of which must be animated. If we have a single FBX per citizen containing the mesh, the skinning, and the animations, each model will carry its own animations, or at least clones of the same ones, repeating them. When we need to change one of those animations, we will need to update every citizen mesh, which is a time-consuming process. Instead, we can have one FBX per citizen, containing the mesh and the bones with the proper skinning for that mesh, as well as a separate FBX for each animation, containing the same bones that all the citizens have with the proper animation, but without the mesh. This allows us to mix and match the citizen FBX files with the animation FBX files. You may be wondering why both the model FBX and the animation FBX must have the same bones. This is because they need to match in order to make both files compatible. In the next screenshot, you can see how the files should look:
Also, it is worth mentioning a concept called retargeting. As we said before, in order to mix a model and an animation file, they need to have the same bone structure, meaning the same number of bones, hierarchy, and names. Sometimes, this is not possible, especially when we mix custom models created by our artist with external animation files, which you can record from an actor using motion capture techniques or simply buy as a Mocap library. In such cases, it is highly likely that the bone structure in the Mocap library will differ from the one in your character model, and this is where retargeting kicks in. This technique allows Unity to create a generic mapping between two different humanoid bone structures to make them compatible. In a moment, we will see how to enable this feature.
Now that we understand the basics behind skinned meshes, let's see how to get model assets with bones and animations.
Let's start by importing some animated models from the Asset Store, under the 3D | Characters | Humanoids section. You can also use external sites, such as Mixamo, to download them, but for now, I will stick to the Asset Store, as you will have less trouble making those assets work. In my case, I have downloaded a package, shown in the following screenshot, that contains both models and animations.
Note that sometimes you will need to download models and animations separately because some assets are model- or animation-only. Also, consider that the packages used in this book might not be available at the time you're reading this; in that case, you can either look for another package with similar assets (characters and animations, in this case) or download the project files from the book's GitHub repository and copy the required files from there:
In my package's content, I can find the animation FBX files in the Animations folder and the single model FBX file in Model. Remember that sometimes they won't be separated like this, and the animations, if any are present, may be located in the same FBX as the model. Now that we have the required files, let's discuss how to configure them properly.
Let's start by selecting the Model file and checking the Rig tab. Within this tab, you will find a setting called Animation Type, as shown in the following screenshot:
This property contains the following options:
In my case, the FBX files in my package have their mode set to Humanoid, so that's good; but remember, only switch to other modes if it is absolutely necessary (for example, if you need to combine different models and animations). Now that we have discussed the Rig settings, let's talk about the Animation settings.
In order to do this, select any animation FBX file and look at the Animation section of the Inspector window. You will find several settings, such as the Import Animation checkbox, which must be checked if the file contains animations (so not for model-only files), and the Clips list, where you will find all the animations in the file. In the following screenshot, you can see the Clips list for one of our animation files:
An FBX file with animations usually contains a single large animation track, which can hold one or several animations. Either way, by default, Unity will create a single animation clip based on that track, but if the track contains several animations, you will need to split them manually. In our case, the package creator has already split the animations in our FBX, but in order to learn how to do a manual split, do the following:
Now, expand the animation file by clicking on its arrow and check its sub-assets. You will see a sub-asset named after your animation, alongside the other animations in the Clips list, containing the clips we cut. In a moment, we will play them. In the following screenshot, you can see the animations in our .fbx file:
Now that we have covered the basic configuration, let's see how to integrate animations.
When adding animations to our characters, we need to think about the flow of the animations, which means thinking about which animations must be played, when each animation must be active, and how transitions between animations should happen. In previous Unity versions, you needed to code that manually, writing complicated C# scripts to handle complex scenarios; but now, we have Animation Controllers.
An Animation Controller is a state machine-based asset in which we can diagram the transition logic between animations with a visual editor called Animator. The idea is that each animation is a state, and our model will have several of them. Only one state can be active at a time, so we need to create transitions in order to change between them, each with conditions that must be met to trigger the transition. Conditions are comparisons against data about the character being animated, such as its velocity, whether it is shooting or crouched, and so on.
So, basically, an Animation Controller, or state machine, is a set of animations with transition rules that dictate which animation should be active. Let's start creating a simple Animation Controller by doing the following:
Transitions must have conditions in order to prevent animations from swapping constantly, but in order to create conditions, we need data to compare. We will add properties to our Controller that represent the data used by the transitions. Later, in Part 3, we will set that data to match the current state of our object, but for now, let's create the data and test how the Controller reacts to different values. In order to create conditions based on properties, do the following:
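To give an early idea of how Part 3 will feed these properties, here is a minimal sketch of a script that pushes a character's speed into a Controller parameter every frame. The parameter name "Velocity" is an assumption for illustration; it must match whatever name you gave your property:

```csharp
using UnityEngine;

// Sketch: computes the character's speed from its position change and
// writes it into an Animator parameter, so conditions such as
// "Velocity greater than 0.1" can trigger a transition to a Run state.
public class AnimationFeeder : MonoBehaviour
{
    private Animator animator;
    private Vector3 lastPosition;

    void Awake()
    {
        animator = GetComponent<Animator>();
        lastPosition = transform.position;
    }

    void Update()
    {
        float speed = (transform.position - lastPosition).magnitude / Time.deltaTime;
        animator.SetFloat("Velocity", speed); // hypothetical parameter name
        lastPosition = transform.position;
    }
}
```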
Now that we have our first Animator Controller set up, it's time to apply it to an object. To do so, we will need a couple of components. First, an animated character uses a Skinned Mesh Renderer instead of a regular Mesh Renderer. If you drag the character model into the scene and explore its children, you will see this component, as shown:
This component is in charge of applying the bones' movements to the mesh. If you search through the children of the model, you will find some bones; try rotating, moving, and scaling them to see the effect, as shown in the following screenshot. Bear in mind that your bone hierarchy might differ from mine if you downloaded another package from the Asset Store:
The other component we need is Animator, which is automatically added to the root GameObject of a skinned mesh. This component is in charge of applying the state machine we created in the Animator Controller, provided the animation FBX files are properly configured, as mentioned earlier. In order to apply the Animator Controller, do the following:
Depending on how the Run animation was created, your character might start to move. This is caused by root motion, a feature that moves the character based on the movement baked into the animation. Sometimes this is useful, but because we will move our character entirely through scripting, we want that feature turned off. You can do that by unchecking the Apply Root Motion checkbox in the Animator component of the Character object:
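Unchecking the box in the Inspector is all we need here, but the Animator exposes the same flag to code, which can be handy if you ever want to toggle it at runtime (for example, enabling root motion only during a specific animation). A minimal sketch:

```csharp
using UnityEngine;

// Sketch: turns off root motion at startup so animations never drag the
// character around; its movement stays under the control of our own scripts.
public class DisableRootMotion : MonoBehaviour
{
    void Awake()
    {
        GetComponent<Animator>().applyRootMotion = false;
    }
}
```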
You can start dragging other animations into the Controller and create complex animation logic, such as adding jumping, falling, or crouched animations. I invite you to try other parameter types, such as a Boolean, which uses checkboxes instead of numbers. Also, as you develop your game further, your Controller will grow in its number of animations. To manage that, there are other features worth researching, such as Blend Trees and sub-state machines, but those are beyond the scope of this book.
Now that we understand the basics of character animations in Unity, let's discuss how to create dynamic camera animations to follow our player.
Cameras are a very important subject in video games. They allow the player to see their surroundings and make decisions based on what they see. The game designer usually defines how the camera behaves to get the exact gameplay experience they want, and that's no easy task; a lot of behaviors must be layered to get the exact feeling. Also, during cutscenes, it is important to control the path the camera traverses and where it is looking, to keep the focus on the action in those constantly moving scenes.
In this chapter, we will use the Cinemachine package to create both the dynamic cameras that will follow the player's movements, which we will code in Part 3, and the cameras to be used during cutscenes.
In this section, we will examine the following Cinemachine concepts:
Let's start by discussing how to create a Cinemachine-controlled camera and configure behaviors on it.
Creating camera behaviors
Cinemachine is a collection of behaviors that can be applied to the camera and that, when properly combined, can generate all kinds of common video game camera types, including following the player from behind, first-person cameras, top-down cameras, and so on. In order to use these behaviors, we need to understand the concepts of the brain and virtual cameras.
In Cinemachine, we keep only one main camera, as we have done so far, and that camera will be controlled by virtual cameras: separate GameObjects that have the previously mentioned behaviors. We can have several virtual cameras and swap between them at will, but the active virtual camera will be the only one controlling our main camera. This is useful for switching cameras at different points of the game, such as switching between cutscene cameras and our player's first-person camera. In order for the main camera to be controlled by the virtual cameras, it must have a CinemachineBrain component.
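As a hedged sketch of how that swap can be driven from code later, raising a virtual camera's Priority is enough to make the Brain blend to it. The field names below follow classic Cinemachine (the `Cinemachine` namespace and `CinemachineVirtualCamera` type; newer versions rename these), and the two camera references are placeholders:

```csharp
using UnityEngine;
using Cinemachine; // classic Cinemachine namespace; newer versions differ

// Sketch: the CinemachineBrain follows the enabled virtual camera with the
// highest Priority, so switching cameras is just a matter of priorities.
public class CameraSwitcher : MonoBehaviour
{
    public CinemachineVirtualCamera gameplayCamera; // e.g. follows the player
    public CinemachineVirtualCamera cutsceneCamera; // e.g. the dolly camera

    public void ShowCutscene()
    {
        cutsceneCamera.Priority = 20; // higher than the gameplay camera...
        gameplayCamera.Priority = 10; // ...so the Brain blends to it
    }

    public void ShowGameplay()
    {
        cutsceneCamera.Priority = 10;
        gameplayCamera.Priority = 20;
    }
}
```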
To start using Cinemachine, first, we need to install it from the Package Manager, as we did previously with other packages. If you don't remember how to do this, just do the following:
Let's start creating a virtual camera to follow the character we animated previously, which will be our player hero. Do the following:
As you can see, using Cinemachine is pretty simple, and in our case, the default settings were mostly enough for the kind of behavior we needed. However, if you explore the other Body and Aim modes, you will find that you can create any type of camera for any type of game. We won't cover the other modes in this book, but I strongly recommend you look at the documentation for Cinemachine to check what the other modes do. To open the documentation, do the following:
As with Cinemachine, you can find other packages' documentation in the same way. Now that we have achieved the basic camera behavior we need, let's explore how we can use Cinemachine to create a camera for our intro cutscene.
When the player starts the level, we want a little cutscene panning over our scene and the base before entering the battle. This requires the camera to follow a fixed path, and that's exactly what Cinemachine's dolly camera does: it creates a path to which we can attach a virtual camera so that it follows the path. We can set Cinemachine to move along the track automatically or to follow a target by moving to the closest point on the track; in our case, we will use the first option.
In order to create a dolly camera, do the following:
With the dolly track properly set, we can create our cutscene using Timeline to sequence it.
We have our intro camera, but that's not enough to create a cutscene. A proper cutscene is a sequence of actions happening at the exact moments they should happen, coordinating several objects to act as intended. We can have actions such as enabling and disabling objects, switching cameras, playing sounds, moving objects, and so on. To do this, Unity offers Timeline, a sequencer of actions used to coordinate this kind of cutscene. We will use Timeline to create an intro cutscene for our scene, showing the level before the game starts.
In this section, we will examine the following Timeline concepts:
We are going to see how to create our own animation clips in Unity to animate our GameObjects, and then place them inside a cutscene to coordinate their activation using the Timeline sequencer tool. Let's start by creating a camera animation to use later in Timeline.
This is actually not a Timeline-specific feature, but rather a Unity feature that works great with Timeline. When we downloaded the character, it came with animation clips created using external software, but you can also create custom animation clips using Unity's Animation window. Don't confuse it with the Animator window, which allows us to create animation transitions that react to the game's situation. The Animation window is useful for creating small, object-specific animations that you will later coordinate in Timeline with other objects' animations.
These animations can control any property of an object's components, such as positions, colors, and so on. In our case, we want to animate the dolly cart's Position property to make it go from the start to the end of the track in a given time. In order to do this, do the following:
If you pay attention, you will notice that the dolly cart now has an Animator component with an Animator Controller created for it, containing the animation we just created. As with any animation clip, you need to apply it to your object with an Animator Controller; custom animations are no exception, so the Animation window created these for you.
Animating in this window consists of specifying the values of the object's properties at given moments. In our case, we want Position to have a value of 0 at the start of the animation, at second 0 of the timeline, and a value of 240 at the end, at second 5. I chose 240 because that's the last possible position on my dolly track, but that depends on the length of yours; just check which is the last possible position on your track. Also, I chose second 5 because that feels like the correct length for the animation, but feel free to change it as you wish. Now, whatever happens between seconds 0 and 5 of the animation is an interpolation between the values 0 and 240, meaning that at 2.5 seconds, the value of Position will be 120. Animating always consists of interpolating between different states of our object at different moments.
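In its simplest (linear) form, the interpolation the Animation window performs between two keyframes is just this, sketched here with our example values (the Animation window actually uses editable curves, so easing can make the in-between values non-linear):

```csharp
using UnityEngine;

// Sketch: linear interpolation between two keyframes, using our example
// values (Position 0 at second 0, Position 240 at second 5).
static class KeyframeSketch
{
    public static float Evaluate(float time)
    {
        const float startValue = 0f, endValue = 240f, duration = 5f;
        float t = Mathf.Clamp01(time / duration); // normalize time to 0..1
        return Mathf.Lerp(startValue, endValue, t);
    }
}
// Evaluate(2.5f) returns 120, matching the halfway value described above.
```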
In order to do this, do the following:
Now, if we hit Play, the animation will start playing, which is something we don't want. In this scenario, the idea is to give control of the cutscene to the cutscene system, Timeline, because this animation won't be the only thing that needs to be sequenced in our cutscene. One way to prevent the Animator component from automatically playing the animation we created is to create an empty animation state in the Controller and set it as the default state, by doing the following:
Now that we have created our camera animation, let's start creating a cutscene that switches from the intro cutscene camera to the player camera by using Timeline.
Timeline is already installed in your project, but if you look for Timeline in the Package Manager, you may see an Update button to get the latest version, should you need any of its new features. In our case, we will keep the default version included in our project (1.3.4, at the time of writing this book).
The first thing we will do is create a cutscene asset and an object in the scene responsible for playing it. To do this, follow these steps:
Now that we have the Timeline asset ready, let's make it sequence actions. To start, we need to sequence two things—first, the cart position animation we created in the last section, and then the camera swap between the dolly track camera (CM vcam2) and the player camera (CM vcam1). As we said before, a cutscene is a sequence of actions executing at given moments, and in order to schedule actions, you will need tracks. In Timeline, we have different kinds of tracks, each allowing you to execute certain actions on certain objects. We will start with the Animation track.
The Animation track controls which animation a specific object will play; we need one track per object we want to animate. In our case, we want the dolly cart to play the Intro animation we created, so let's do that by doing the following:
Important note:
Timeline is a generic asset that can be applied to any scene, but as the tracks control specific objects, you need to bind those objects manually in every scene. In our case, we have an Animation track that expects to control a single Animator, so in every scene where we want to apply this cutscene, we need to drag the specific Animator to control onto the Bindings list.
Important note:
Remember that you don't need to use Timeline to play animations. In this case, we did it this way to control the exact moment we want the animation to play. You can also control Animators using scripting.
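As a glimpse of that scripted alternative, here is a hedged sketch showing both options: playing an Animator state directly, or starting the whole Timeline from code. The state name "Intro" and the component references are placeholders matching our example:

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Sketch: two scripted alternatives to relying on Timeline's automatic playback.
public class CutsceneStarter : MonoBehaviour
{
    public Animator cartAnimator;     // the dolly cart's Animator
    public PlayableDirector director; // the object holding our Timeline asset

    public void PlayAnimationDirectly()
    {
        // Plays the "Intro" state of the cart's Animator Controller by name.
        cartAnimator.Play("Intro");
    }

    public void PlayWholeCutscene()
    {
        // Or let Timeline coordinate everything, started on demand from code.
        director.Play();
    }
}
```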
Now, we will make our Intro Timeline asset tell the CinemachineBrain component (on the main camera) which virtual camera will be active during each part of the cutscene, switching to the player camera once the camera animation is over. We will create a second track—a Cinemachine track—specialized in making a specific CinemachineBrain component switch between different virtual cameras. To do this, follow these steps:
If you wait for the full cutscene to end, you will notice that at the very end, CM vcam2 becomes active again. You can configure how Timeline deals with the end of the cutscene; by default, it does nothing, which can cause different behavior depending on the type of track. In our case, it gives control over picking the virtual camera back to the CinemachineBrain component, which will pick the virtual camera with the highest Priority value. We can change the Priority property of the virtual cameras to make sure that CM vcam1 (the player camera) is always the most important one, or set the Wrap Mode of the Playable Director component to Hold, which will keep everything as the last frame of the timeline specifies.
In our case, we will use the latter option to test the Timeline-specific features:
Most of the different kinds of tracks work under the same logic: each one controls a specific aspect of a specific object using clips that execute during a set time. I encourage you to test other tracks to see what they do, such as Activation, which enables and disables objects during the cutscene. Remember, you can check the Timeline package's documentation in the Package Manager.
In this chapter, we introduced the different animation systems that Unity provides for different requirements. We discussed importing character animations and controlling them with Animation Controllers. We also saw how to make cameras that can react to the game's current situation, such as the player's position, or that can be used during cutscenes. Finally, we used Timeline and the animation system to create an intro cutscene for our game. These tools let the animators on our team work directly in Unity without the hassle of integrating external assets (except for character animations), and also save the programmer from writing repetitive animation scripts, wasting time in the process.
Now, you are able to import and create animation clips in Unity, as well as apply them to GameObjects to make them move according to the clips. You can also place them in the Timeline sequencer to coordinate them and create cutscenes for your game. Finally, you can create dynamic cameras to use in-game or in cutscenes.
So far, we have discussed lots of Unity systems that allow us to develop different aspects of our game without coding, but sooner or later, scripting will be needed. Unity provides generic tools for generic situations, but our game's unique gameplay must usually be coded manually. In the next chapter, the first chapter of Part 3, we will start learning how to code in Unity using C#.