In the previous chapter, we were introduced to the motion capture library of Mixamo and how to make the Mixamo animations work inside Unreal.
In this chapter, we are going to build on that knowledge and create our very own animation using another third-party online tool called DeepMotion. Its Animate 3D tool analyzes video of a human performance and, in return, creates a motion capture file. We will learn how to use a video camera effectively to record a performance, use DeepMotion’s Animate 3D feature to create our very own bespoke motion capture file, and then use that file to animate our MetaHuman character in Unreal.
As great as stock motion libraries such as Mixamo are, you will need your own bespoke animation to give you full creative freedom. While a solution such as DeepMotion’s Animate 3D may not match the results you would get from expensive motion capture suits, it will be much better than the compromises you would make using stock library motion.
So, in this chapter, we will cover the following topics:
In terms of computer power, you will need the technical requirements detailed in Chapter 1, and the MetaHuman character plus the Unreal Engine Mannequin that we imported into UE5 in Chapter 2. Both of these will need to be saved in the same project in UE, and you will need to have UE running for this chapter.
You will also need a stable internet connection, as we will be uploading a MetaHuman preview mesh and a video file, and downloading numerous animation assets. You will need a video camera – a webcam, smartphone, or a professional video or digital cine camera will all do – but note that there is no benefit to recording at a resolution greater than HD.
Until recently, bespoke motion capture solutions required expensive equipment and expensive software; think of movies such as Avatar, The Lord of the Rings, or Pirates of the Caribbean, where performers had to wear motion capture suits with either reflective balls or checker markers tracked by infrared sensors. This technology was only viable for large studios that could make such an investment, and as a result, independent studios and freelance artists had to make do with motion capture libraries, such as Mixamo, as we have already discussed.
Recently, this has changed as more and more affordable solutions have become available to artists. In my research for this book, I worked with a number of motion capture tools, specifically with cost in mind. Because motion capture (mocap) suits, infrared sensors, and all that fancy gear are just too expensive for most of us, I settled on DeepMotion’s Animate 3D.
DeepMotion’s Animate 3D is an example of a markerless mocap solution. This involves a process where a video of a human performance is captured with a video camera, with no special suit or markers required. The video file is analyzed by software that recognizes and tracks each body part. With machine learning aiding this process, the solution is able to draw on libraries of motion data to avoid errors. In addition, inverse kinematics is utilized to further refine the process. This technology is constantly improving, and with machine learning being a key feature, the rate of progress is very fast indeed.
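To give a flavor of what the inverse kinematics refinement involves, here is a minimal, illustrative two-bone IK solver – think of a shoulder–elbow–wrist chain in 2D. This is not DeepMotion’s actual implementation; the function names, bone lengths, and targets are purely hypothetical:

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Solve a planar two-bone IK chain (e.g. upper arm + forearm) so the
    end effector (wrist) reaches the target (tx, ty).
    Returns (shoulder_angle, elbow_angle) in radians."""
    d = math.hypot(tx, ty)
    d = min(d, l1 + l2)  # clamp unreachable targets to full extension
    # Law of cosines gives the interior angle at the elbow
    cos_interior = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_interior)))
    # Shoulder = direction to target, minus the offset caused by the bend
    cos_offset = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_offset)))
    return shoulder, elbow

def forward(l1, l2, shoulder, elbow):
    """Forward kinematics: where does the wrist actually end up?"""
    ex = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    ey = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return ex, ey
```

In a markerless pipeline, a solver along these lines can pin a tracked hand or foot position and recover plausible joint rotations for the limb in between.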
In the case of DeepMotion, the online tool allows us to upload a video of our performance and processes the motion capture for us to use in Unreal (or any other compatible platform). Though this process includes machine learning, fortunately for us, DeepMotion takes care of all that.
In the next section, we will dive into the nuts and bolts of how to record videos effectively.
Before we go over to DeepMotion, there are some key factors we need to know with regard to recording our performances on video. Let’s look at them now:
Also, ideally, the camera should be stationary because the software works out the performer’s motion based on a fixed floor position. The ideal position of the camera is about 2 meters from the performer.
With that said, if you intend to capture a full-body performance, then the video must frame the full body.
However, when the body occludes a limb, there can be issues. Taking the hands as an example, if and when they go behind the body or the head, DeepMotion has no pixels to tell it where the hands are: it will either guess where they should be or apply no animation at all to those problem areas.
Occlusion can also be caused by objects between the performer and the camera. For example, a tree branch or lamppost might only momentarily get in the way, but it can cause far more errors than you might expect.
Therefore, shooting with anything in the foreground obscuring your performers is not recommended.
In this section, we have covered all the major factors that need to be considered when it comes to recording a video of your performer. Given the nature of video and how every scenario will be different, there will be a certain amount of trial and error, so don’t be disheartened if you don’t get great results at first. You can always refer back to this section to troubleshoot any issues you may be having.
Alternatively, if you don’t want to shoot anything yourself, you could go to a stock library and download a clip that matches the criteria mentioned earlier. For the purpose of this book, I have taken clips from motionarray.com.
In the next section, you will upload your clip to DeepMotion, assuming that you have shot something.
In this section, we will head over to the DeepMotion site, register, and upload our footage.
The first thing you need to do is go over to https://www.deepmotion.com/. From the landing page, click on the SIGN UP button and you’ll see the form shown in Figure 6.1. Fill this out to create a Freemium account (and note the features on the right):
Figure 6.1: Signing up with DeepMotion
Note
DeepMotion’s Freemium account will allow you to upload clips no more than 10 seconds long, no bigger in resolution than 1,080 pixels, and with a frame rate of no more than 30 fps. With the free user price plan, you get a maximum of 30 seconds of animation per month. Effectively, this is a trial service and you will have to pay for more usage, but there are various paid plans that have more features, such as allowing you to upload clips with higher frame rates.
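If you want to sanity-check a clip before spending your credits, the Freemium limits quoted above are easy to encode. The following sketch is purely illustrative (the function name is my own, and you would need to read the clip’s duration, resolution, and frame rate from your own tooling):

```python
def within_freemium_limits(duration_s, height_px, fps):
    """Check a clip against the Freemium limits quoted in the note:
    max 10 seconds long, max 1,080 px resolution, max 30 fps.
    Returns a list of problems; an empty list means the clip is fine."""
    problems = []
    if duration_s > 10:
        problems.append(f"too long: {duration_s}s > 10s")
    if height_px > 1080:
        problems.append(f"resolution too high: {height_px}px > 1080px")
    if fps > 30:
        problems.append(f"frame rate too high: {fps}fps > 30fps")
    return problems
```

Checking these three numbers up front is cheaper than burning a month’s worth of animation credits on a clip that gets rejected or truncated.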
Once you have filled in your details, click the Get Started button. This will either send a link to your email address or take you directly to the DeepMotion Portal page, as shown in Figure 6.2:
Figure 6.2: The DeepMotion Portal page
With the Freemium account, the only feature available to us is the Animate 3D feature (the other features are locked). When you click on the Animate 3D box, you’ll be taken to the Animate 3D welcome page, as per Figure 6.3:
Figure 6.3: The Animate 3D welcome page
Here, you’ll see how many credits you have: 1 credit is equal to 1 second of animation. The next box is titled Reruns, and this shows how many attempts you have to refine the capture process. Reruns allow you to make changes to your settings without using up any of your animation credits.
Now, click on the big yellow Create button and you’ll be greeted with the dashboard where you can place your footage, as shown in Figure 6.4:
Figure 6.4: The Animate 3D dashboard
At the top of Figure 6.4, you can see two thumbnails: 3D Animation and 3D Pose. Make sure to switch to 3D Animation.
Underneath, you have the option to either drag and drop your file into DeepMotion or navigate through your folders to upload it. DeepMotion also provides a list of accepted file types: MP4, MOV, and AVI. If you have any other format, you will need to convert your file using an encoder such as Adobe Media Encoder, VLC Player, or QuickTime Pro.
Note
Unless there is a notable difference in quality between the original MOV file and an MP4 conversion, use MP4, as it will reduce the uploading and processing time.
Finally, you’ll see a reminder of how to get the best results for your videos (which I expanded upon earlier in the chapter), and I advise you to take a look at that to make sure you get the most out of your limited credit.
Once you are ready, upload your video via whichever method you prefer; you will then be able to view the video settings.
As soon as you have uploaded your video clip, you will be taken to the Animate 3D dashboard, as shown in Figure 6.5. If you want to use the same clip as me, I am using the clip of a dancing man, found here: https://motionarray.com/stock-video/happy-man-dancing-vertical-814129/.
Figure 6.5: Animate 3D dashboard
In Figure 6.5, you can see a thumbnail of the video clip, along with the clip’s filename, Happy-man-dancing-vertical-814129_1.mp4. The attributes of the clip appear to the right of the thumbnail, as follows:
In the bottom half of the interface shown in Figure 6.5, you’ll see two options: Animation Output and Video Output. For the purposes of this book, we are ignoring Video Output, so make sure that Animation Output is selected instead.
There are 10 features, but not all of them are available or relevant, depending on your type of account and the complexity of the motion capture you want to achieve. The Animation Output parameters are as follows:
Note
Only the Auto setting is available in the Freemium account.
Note
This feature is not available in the Freemium accounts.
Now, once you have gone through all of those Animation Output settings, you can click the Create button (this is at the bottom of Figure 6.5). Once clicked, you’ll see a confirmation dialog box, as per Figure 6.6:
Figure 6.6: Confirm New Animation dialog box
You’ll see from the settings that I have enabled Root Joint at Origin, set Foot Locking to Auto, and then set Default Pose as A-Pose. Everything else was left to the default settings. I’m not interested in downloading any additional video, which is why the MPF Enabled option is disabled and everything else is marked n/a.
It is also worth noting at this point that we have the ability to rerun this process if we’re not happy with the outcome. With that said, it’s time to run it for the first time, so click on the Start Job button. You will be taken to the Starting job interface, as shown in Figure 6.7:
Figure 6.7: Starting job
The processing time varies depending on the file size and file length of your video clip. Also, if your video contains a lot of motion, expect a slower processing time.
Once processed, it’s time to see the results. In Figure 6.8, you can see the results of the clip I chose. Of course, it’s impossible to judge the results from this one single image; however, I was impressed (after all, this motion capture was acquired from a single camera instead of an expensive motion capture suit and sensor array):
Figure 6.8: Motion capture result
From this screen, you will see an interactive 3D viewport where you can see the result from any angle and play the animation at various speeds.
On the right-hand side of the screen, you’ll see a short list of four character types; similar to the MetaHuman preview mesh, we can preview various body types such as thin, large, male, and female, and choose one to download along with our animation. For the best results, select the body type that is closest to your MetaHuman character. In my case, as Glenda is a little overweight, it would be better for me to choose the overweight character. Doing this would increase the arm distance from the body, which would help with any collisions between an arm and the torso (you might remember the Arm Spacing feature in Mixamo; this works similarly).
Also note that you have two options at the bottom of the interface of Figure 6.8:
Needless to say, but I’ll say it anyway, Rerun is a very valuable tool, and it is unlikely that you will get the desired result the first time without having to tweak the settings. It is better to get as close as you can to the desired result using many reruns, rather than trying to fix it in Unreal Engine. With practice and time, you’ll become more efficient at predicting which features to use.
In Figure 6.9, you can see the Rerun Animation dialog box:
Figure 6.9: Rerun Animation dialog box
The only difference between this Rerun Animation box and the Confirm New Animation box from Figure 6.6 is that it shows you the Current settings that gave you your last result alongside the New settings that you are about to apply to your next job. This allows you to make a comparative study and, therefore, a more informed decision for the rerun.
Now, when you have gone through all of the settings and rerun your job until you are finally happy with the results, it is time to download your animation.
To download the DeepMotion motion capture file, instead of clicking the Rerun option, simply click the Download button. This will bring up the Download Animations dialog box, as shown in Figure 6.10:
Figure 6.10: Download Animations
Before downloading the animation, the dialog box gives us the option to switch or confirm which body type we want to use: it offers two adult females, two adult males, and a generic child. When you’re happy with the preview mesh, make sure you have selected the correct download file type, FBX, from the dropdown menu (the BVH and DMPE formats won’t work for what we need to do in Unreal). Then, click on the yellow Download button and save your file somewhere in your project folder.
When you download your animation, you’ll see two FBX files and a folder, as per Figure 6.11:
Figure 6.11: Saved files from DeepMotion and naming conventions
The FBX file with TPose at the end of the filename (Adult_Male_Alt(includeTPose)) begins with an actual T-pose before the animation starts. The other file, Adult_Male_Alt, has no such T-pose and, therefore, kicks in with the animation at the very first frame. Choosing one over the other isn’t critical, as you can always remove the T-pose section from the animation, but the T-pose can be useful in instances where you need to align poses for the purposes of retargeting. We will look at that at the end of this chapter.
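To illustrate the idea of removing the T-pose section, here is a conceptual sketch that drops the leading held frames from a list of per-frame joint values. This is only an illustration of the principle (the function name and data layout are my own) – in practice, you would trim the frames inside Unreal’s animation tools rather than in code:

```python
def trim_leading_hold(frames, tolerance=1e-3):
    """Drop the leading frames that are identical to frame 0 (e.g. a held
    T-pose) so the animation starts at its first moving frame.
    `frames` is a list of per-frame joint-value lists (illustrative)."""
    first = frames[0]
    for i, f in enumerate(frames):
        # The first frame that differs from the held pose starts the motion
        if any(abs(a - b) > tolerance for a, b in zip(f, first)):
            return frames[i:]
    return frames[:1]  # nothing ever moved; keep a single frame
```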
Now that we have downloaded the relevant files, in this next section, we are going to import the DeepMotion motion capture file into Unreal so that we can retarget it to the MetaHuman IK Rig.
As mentioned at the very beginning of this chapter, we are building on what we have learned from the previous chapters, but don’t worry, we’ll use this section as a bit of a recap on how to import a character into Unreal.
The first thing you will need to do is to create a folder in your Unreal project called Deepmotion. Make sure that you are working in the project that you used in the previous chapter where you have both a source and target IK Rig for your Mixamo character and your MetaHuman character.
Once you’ve created the Deepmotion folder, right-click anywhere within the Content browser of your Deepmotion folder, choose the import option, and navigate to your new DeepMotion FBX files. Choose the one with TPose in its name. When you select it, you will be greeted by the FBX Import Options dialog box, as shown in Figure 6.12:
Figure 6.12: FBX Import Options
There are two things to consider here:
After those considerations, click on the blue Import All button. Don’t be alarmed by the warning about smoothing that pops up; you can dismiss it.
Now, when you look at your Deepmotion folder in your Content browser, you should see something similar to Figure 6.13:
Figure 6.13: Deepmotion Content folder
Much of what you see here should be familiar; from left to right, you will see the following:
In this section, we have successfully imported the Deepmotion file into the Unreal Engine. In the next section, we will use the DeepMotion animation with our character.
As in previous chapters, you’ll need to create an IK Rig as the source. So, while in the Deepmotion folder, right-click anywhere and choose Animation, followed by IK Rig, as per Figure 6.14:
Figure 6.14: Creating an IK Rig
To avoid confusion, call this IK Rig Source_DM_Happy (this will help us later when we are looking for it).
Then, as we have also done before, follow the same process to create IK chains. Again, we will need to create six separate IK chains corresponding to the following:
The only thing that is different – and you should expect this when importing motion capture from different sources – is the naming convention. The DeepMotion naming convention can be seen in Figure 6.15, where they use the JNT abbreviation for joints and start each appendage with the l_ or r_ prefix, referring to either the left or the right side:
Figure 6.15: IK chains
Notice that the order of these chains (starting from Root to hips_JNT and ending at r_upleg_JNT to r_foot_JNT) corresponds to my list of IK chains; having the same order in both the IK source and IK target reduces errors.
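As a mental model, the six chains and their ordering can be sketched as data. The chain names and most of the bone names below are assumptions based on the DeepMotion naming convention just described (only Root to hips_JNT and r_upleg_JNT to r_foot_JNT are quoted above), so check them against your own skeleton in the IK Rig editor:

```python
# Illustrative mapping of the six IK chains for the DeepMotion source rig.
# Each entry: (chain name, start bone, end bone). Bone names are assumed
# from the l_/r_ prefix and _JNT suffix convention, not an exact listing.
SOURCE_CHAINS = [
    ("Root",     "Root",        "hips_JNT"),
    ("Spine",    "spine1_JNT",  "neck_JNT"),
    ("LeftArm",  "l_arm_JNT",   "l_hand_JNT"),
    ("RightArm", "r_arm_JNT",   "r_hand_JNT"),
    ("LeftLeg",  "l_upleg_JNT", "l_foot_JNT"),
    ("RightLeg", "r_upleg_JNT", "r_foot_JNT"),
]

def chains_aligned(source, target):
    """The chapter notes that keeping the same chain *order* in the source
    and target IK Rigs reduces errors; this checks that the chain names
    line up index by index, even though the bone names differ per rig."""
    return [s[0] for s in source] == [t[0] for t in target]
```

The point of the check is the one made above: the bones differ between rigs, but the chain list should read in the same order on both sides.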
Now it’s time to create a new IK Retargeter, where we use our new IK Rig as the source. While still inside the Deepmotion folder, right-click anywhere in the Content browser, go to Animation, and select IK Retargeter, as shown in Figure 6.16:
Figure 6.16: Creating a new IK Retargeter
Once you have created a new IK Retargeter, double-click on it to open it and you’ll see a list of potential sources to choose from. You can see, in Figure 6.17, that I have a list of all the IK Rigs that are saved in my project folder:
Figure 6.17: Pick IK Rig To Copy Animation From
You’ll see that I have a source IK Rig titled SourceRigMixamo. As you practice with Unreal, you will soon end up with multiple source IK Rigs. You should still have a source IK Rig that you used from the previous chapter. This is why it’s important to label these IK Rigs appropriately so that you don’t inadvertently choose the wrong one.
You don’t want to use the Mixamo rig from the previous chapter, so select Source_DM_Happy as that is the rig from which we want to copy the animation. Then hit OK.
Note
If you choose the wrong IK Rig, you will need to delete this Retargeter and create a new one, hence why labeling the IK source rig is important.
Once you’ve created an IK Retargeter using the Source_DM_Happy rig, open it up by double-clicking on it. This will bring up the following interface:
Figure 6.18: Choosing the target in the IK Retargeter
Now, you want to choose the IK target to apply the DeepMotion motion capture to. You can see from Figure 6.19 that Source IKRig Asset is locked but Target IKRig Asset still has a drop-down list. Choose your MetaHuman target rig; in my case, it’s TARGET_meta.
It’s at this point that we also have the option to choose the appropriate Target Preview Mesh too. In my case, the target preview mesh is the Female Medium Overweight mesh, which is titled f_med_ovw_body_preview_mesh, as per Figure 6.19:
Figure 6.19: Target Preview Mesh
At the bottom of Figure 6.19, under the Asset Browser tab, you can see that the DeepMotion animation is now available for us to export. So, if you are happy with the animation, just click on the green Export Selected Animations button.
As a reminder, clicking on the Export Selected Animations button will create an animation file that will now work with your MetaHuman character. You will be given the option of where to save the animation; choose your Deepmotion folder.
Next, add your MetaHuman character to your scene by dragging and dropping it from your Content folder into your viewport. Then, ensure your MetaHuman Blueprint is selected in the Outliner.
After that, go to the Details panel, select Body, and under Animation Mode, choose Use Animation Asset. Run a search for your new retargeted animation file and click on it. In Figure 6.20, you can see that I picked the Happy Animation file from the drop-down list:
Figure 6.20: Applying the animation
With your file selected, run Play in your viewport and you’ll see your character come to life.
Now that you have successfully gone through the whole process of creating motion capture data from video and retargeted it to your MetaHuman character, we’ll take a look at troubleshooting a common problem in the next section.
It’s unlikely that your animation is going to perfectly match your video, and it’s just as unlikely that the retargeting process is going to be smooth sailing either. There’s always a slight mismatch between the source and target rigs. One of the most common problems is the arm positions, which mostly boils down to the difference between the T-pose and the A-pose.
As MetaHumans are set in an A-pose and our DeepMotion rig was set in a T-pose, there is a mismatch. If you chose T-Pose in DeepMotion, there’s a handy little fix that saves you from having to go through the download and retargeting process again.
As Figure 6.21, showing the IK Retargeter interface, makes clear, the source DeepMotion character is in a T-pose and the target MetaHuman character is in an A-pose. Because of this misalignment, the arms will not animate properly and will most likely intersect with the body:
Figure 6.21: IK Retargeter T-pose versus A-pose
The only option we have here is to change the pose of the target, which is the MetaHuman. To do this, click on the Edit Pose button at the top of the screen, as shown in Figure 6.22. This will allow you to rotate the joints of your MetaHuman character from within the IK Retargeter:
Figure 6.22: Edit Pose in the IK Retargeter
I strongly advise that you use rotational snapping, which is highlighted at the top-right of Figure 6.22. I found that the difference between the T-pose and the A-pose in terms of arm rotation was about 45 degrees. Currently, the snapping is set to increments of five degrees; however, you can change that to your liking.
The MetaHuman also has a slight rotation on the elbow joints that you will need to straighten to match the source; using the snapping tool allows you to get a much more uniform result. Be sure to change the viewing angle to the front when you make corrections to the elbow joints so that you can see what you are doing.
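The 45-degree figure can be sanity-checked with a little vector math. This illustrative snippet (the function name and direction vectors are my own, not part of Unreal) measures the angle between a T-pose arm direction and an A-pose arm direction and snaps it to the editor’s rotation increment:

```python
import math

def arm_correction_degrees(t_pose_dir, a_pose_dir, snap=5.0):
    """Angle to rotate a shoulder so an A-pose arm matches a T-pose arm,
    snapped to the rotation increment (5 degrees, as in the editor)."""
    ax, ay = t_pose_dir
    bx, by = a_pose_dir
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    angle = math.degrees(math.acos(dot / norm))
    return round(angle / snap) * snap  # snap to the nearest increment
```

With a T-pose arm pointing straight out to the side and an A-pose arm angled equally downward, this yields the roughly 45-degree correction mentioned above.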
When you are happy, click on the Edit Pose button again. This will allow you to press Play again, to view how well the animation is retargeted. You can keep going back to fine-tune these adjustments.
You may also find that you have exported the correct pose, but the animation is still causing slight issues, particularly with collision. In terms of DeepMotion, this could easily be attributed to the fact that you were using preset preview meshes as opposed to specific MetaHuman meshes. We will look at how to fix these issues in the next chapter.
In this section, we covered how we can create our own animation with as little as a single video camera. First, we looked at some of the major considerations when it comes to filming, what to avoid, and what steps we can take to improve accuracy.
Then we started using DeepMotion, exploring the various animation settings that it can offer us, including some artificial intelligence functionality. We looked at some best practices when it comes to these functions and the use of reruns before committing to the final animation.
Building on what we learned in the previous chapters, we once again retargeted our bespoke motion capture and applied it to our MetaHuman character, while looking a little more closely at A-poses versus T-poses. Finally, we covered an effective workaround for issues arising around those poses.
In the next chapter, we are going to look at the Level Sequencer and the Control Rig, and how to further refine our custom motion capture.