Unreal Engine Animation Workflow

Metahuman Animator + AI

During a weekend research project, I ran through a familiar workflow with a tool that has been refined within UE5: MetaHuman Animator.

MetaHuman Animator uses the iPhone's depth sensors to generate a model called a MetaHuman Identity, built by referencing a pre-recorded video of my own face. For me, the MetaHuman Identity is mainly used to process performances and increase the data's fidelity: if the face model in the performance is a 1-to-1 copy of the model it's being processed onto, I get more accurate results. Through retargeting, this performance can then be mapped to other models - however, depending on how much the head model differs from your own features, the performance can degrade. In most cases, I found that the performance retargets poorly to other models.

This could be due to many reasons - the capture quality could be poor, the tracking itself could be inaccurate, among other things. But in this case, I suspect that my facial features just do not match this MetaHuman character's facial features. Unless the system is trained to understand which expression is which, it will never retarget gracefully.

A few years ago, I fired up Spider-Man: Miles Morales and I remember being extremely confused as to why Peter Parker was an entirely different person. Capture data - at least today - is still pretty sensitive. There isn't a magic solution that retargets a performance without "training" and without losing fidelity. People's facial features are unique - these tools use them as landmarks to solve their trajectories, and animators then turn that information into expressions.
This happens all the time and it makes sense. Models are often modified to accommodate an actor's facial features so that the "computer" has an easier time processing the data.

Making believable animation requires artistic choices that are purposeful - otherwise we get uncanniness that makes us say things like "Something's off but I'm not sure why". This is why you need animators who've studied, who've worked in the industry for years, and who have built an eye to catch tiny nuances that an AI prompter will never see. But I do see the appeal of going down the path of cost-cutting solutions, because this feeling of uncanniness can slip past a lot of people.

I believe that AI will be integrated into our pipeline as animators - we will eventually need to adopt a new workflow. But I'm hoping to use this as an example to myself, to work towards a better future where AI is used ethically - as a tool to enhance what animators already know how to do: make captivating performances.

I've been researching AI software that can be ethically integrated into my own workflow. Many systems rely on optically tracked data, using identifying landmarks and machine learning to craft performances from 3D skeletons generated within the 2D video itself - this technology isn't all that new. But some are unique - like Motorica, which is generative in nature and uses prompts to craft performances. I reached out to them and asked how this data is sourced: they have artists who create the base animations, and the AI aspect of the tool focuses on seamlessly blending different types of performances into a cohesive whole (prompt example: "Walk Stop Holding Gun"). This can be, and commonly is, done traditionally through a "layered" approach in animation - oftentimes with a whole team dedicated to refurbishing existing libraries of capture data. In every scenario and every tool I've tried, I see them as a starting point rather than a replacement for keying a frame.

Markerless systems like MetaHuman Animator fall within these types of AI-powered tools: video footage is used to track facial landmarks, and machine learning solves those landmarks into expressions. Afterward, I refined the performance using my skills as an animator. Personally, I think that last part is what makes or breaks a piece of animation.
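
To make that less abstract, here's a toy sketch of what a landmark-based solve is doing at a high level: given a neutral face, a set of expression (blendshape) deltas, and the landmarks tracked on one frame, solve for non-negative expression weights. Everything in it is made up for illustration (the landmark count, the 12 shapes, the weights), and it is not MetaHuman's actual solver.

    import numpy as np
    from scipy.optimize import nnls

    # Toy sketch of a landmark-based expression solve (not MetaHuman's actual solver).
    # All data here is invented: a neutral set of 2D landmarks, a handful of
    # hypothetical blendshape deltas, and one "tracked" frame built from known weights.
    rng = np.random.default_rng(0)
    num_landmarks = 68                             # e.g. a common 68-point face tracker
    neutral = rng.random(num_landmarks * 2)        # flattened (x, y) landmark positions
    deltas = rng.random((num_landmarks * 2, 12))   # 12 hypothetical expression shapes
    true_weights = np.zeros(12)
    true_weights[[0, 3]] = [0.8, 0.2]              # pretend the actor hit two shapes
    tracked = neutral + deltas @ true_weights      # landmarks observed on this frame

    # Solve tracked ~ neutral + deltas @ weights, with non-negative weights.
    weights, _ = nnls(deltas, tracked - neutral)
    print(np.round(weights, 2))                    # recovers roughly 0.8 and 0.2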

UE: Control Rig

The purpose of this research is to develop a workflow to do everything within Unreal Engine.

  • Retime animation

  • Bake animation

  • Retarget animation

  • Edit animation using a Control Rig

  • Edit animation in real time

This post is not meant to be a step-by-step tutorial and the information written will not be 100 percent accurate. This is a personal blog and I’m simply documenting my process…

I started out by recording some MOCAP data. I had to make sure that this data could be exported using an Unreal Engine skeleton.

Understanding the Skeletal Mesh

[Illustration: skeletal hierarchies of the UE5 Mannequin, the UE4 Mannequin, and the Custom Asset]

There is a control rig system that already exists for the Unreal Engine mannequins. These control systems can easily be transferred onto any asset as long as it has the same skeletal structure. The scale must also be somewhat similar.

In this instance, hooking up the UE5 Mannequin's control system to this Custom Asset will not work because they do not share the same skeletal structure. Instead, you can see in the illustration above that the Custom Asset shares the same skeletal structure as the UE4 Mannequin.

Note that in the Epic Marketplace, information about custom assets is typically disclosed: things like whether it's rigged to an Epic Skeleton, which version of the skeleton, whether it has IK bones, etc. Make sure to check before purchasing any assets.

If you plan on doing an animation with the UE Mannequins and not a custom asset, you can skip a lot of these steps.
I think for my next project, I plan on sticking with the Mannequin SkelMeshes, because you'll see below that the translations aren't perfect once you hook up the control rigs. This is most likely due to the T-poses and proportions not being 1 to 1.

UE4 Control Rig

The Custom Asset I used did not come with a control system, but since it was rigged with the same skeletal structure/hierarchy as the UE4 mannequins, I was able to hook up their control rig to my Custom Asset. These mannequins can be found in the Epic Marketplace.

This is extremely valuable: if any future rig has the same skeletal structure as a UE5 or UE4 Mannequin, we instantly have access to a control rig to animate with directly in Unreal Engine. Otherwise, we would need to rig the asset manually and create a control system.

Unreal Engine only considers the skeletal structure of your asset when it hooks up the animations. Once everything is hooked up, the skeletal mesh can be swapped out for any asset you'd like, as long as its bones have the same hierarchy. This also means you can rig any asset with the same bone structure to the existing control rig system that comes with the UE mannequins. You can think of it as Unreal Engine technically still believing the control system is driving the UE mannequin, while visually it's a different asset, sort of like wearing a skin.
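
Since compatibility really comes down to bone names and parenting, a quick way to reason about it is to compare the two hierarchies directly. The sketch below is plain Python over truncated, hypothetical bone lists (not a call into Unreal's API), just to illustrate what "same skeletal structure" means here.

    # Conceptual sketch: do two skeletons share the same bone hierarchy?
    # The bone lists are truncated and purely illustrative (bone -> parent); real
    # UE skeletons have far more bones, and this is not Unreal's API.
    def same_hierarchy(skeleton_a, skeleton_b):
        """True if both skeletons have identical bone names with identical parenting."""
        return set(skeleton_a) == set(skeleton_b) and all(
            skeleton_a[bone] == skeleton_b[bone] for bone in skeleton_a
        )

    ue4_style_rig = {"root": None, "pelvis": "root", "spine_01": "pelvis", "thigh_l": "pelvis"}
    custom_asset  = {"root": None, "pelvis": "root", "spine_01": "pelvis", "thigh_l": "pelvis"}
    other_rig     = {"root": None, "hips": "root", "spine": "hips", "leg_upper_l": "hips"}

    print(same_hierarchy(custom_asset, ue4_style_rig))  # True  -> the mannequin's control rig can drive it
    print(same_hierarchy(custom_asset, other_rig))      # False -> the controls won't map onto these bones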

Now that you’ve got a basic understanding of this system, let’s get started with animation…

Sources (How to hook up custom assets):
https://www.youtube.com/watch?v=vqVA4lvpVWQ&t=24s&ab_channel=RoyalSkies
https://www.youtube.com/watch?v=VXlBqRqFwc4&t=320s&ab_channel=3DEducationwithJC
https://docs.unrealengine.com/4.26/en-US/AnimatingObjects/SkeletalMeshAnimation/

Animating in Unreal Engine 5

Hooking Everything Up

At this point, I have raw MOCAP data from my suit that I’ve imported into Unreal Engine directly. Normally, the first step for me would be to bring this data into Maya, where I would retarget the skeletal animation onto a Maya control rig using HumanIK. We’re skipping this step entirely and going straight into UE.

But now we want to edit this animation… change the timing, and maybe make the character jump a little higher. This is, and always was, possible even without a control rig system. We can go directly into our imported MOCAP file and override the animation by keyframing the bones. However, as animators, we know that this isn't ideal for many reasons, one being that the system is driven entirely by forward kinematics. To save our brains, we need a control system, especially for foot plants, which require an IK rig.
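
As a quick aside on why foot plants really want an IK control: with IK you key where the foot should be, and the hip/knee angles fall out of the solve instead of being counter-animated by hand. Below is a minimal 2D two-bone IK sketch of that idea (law of cosines); Control Rig's actual solver handles 3D, pole vectors, stretch and so on, none of which is modeled here.

    import math

    # Minimal 2D two-bone IK sketch (hip -> knee -> foot), only to illustrate the idea:
    # you key the foot target, and the hip/knee angles come out of the solve. Control
    # Rig's real solver works in 3D with pole vectors, stretch, etc. - not modeled here.
    def two_bone_ik(hip, target, thigh_len, calf_len):
        """Return (hip_angle, knee_bend) in radians so the foot reaches the target (2D)."""
        dx, dy = target[0] - hip[0], target[1] - hip[1]
        dist = math.hypot(dx, dy)
        # Clamp to the reachable range so the acos calls stay in their domain.
        dist = max(min(dist, thigh_len + calf_len - 1e-6), abs(thigh_len - calf_len) + 1e-6)
        knee_bend = math.pi - math.acos((thigh_len**2 + calf_len**2 - dist**2) / (2 * thigh_len * calf_len))
        hip_offset = math.acos((thigh_len**2 + dist**2 - calf_len**2) / (2 * thigh_len * dist))
        return math.atan2(dy, dx) + hip_offset, knee_bend

    # Foot stays planted at (0.3, 0.0) while the hips drift forward - only the target is keyed.
    for hip_x in (0.0, 0.1, 0.2):
        print(two_bone_ik((hip_x, 1.0), (0.3, 0.0), thigh_len=0.5, calf_len=0.55))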

First, let's look into editing the timing. We can easily do this with:

Time Dilation

This is probably the most important section of this entire post. For me, retiming is a requirement whenever I’m using MOCAP data. We need to be able to retain the data but still be able to edit the timing of the animation.

In your sequencer, you can add a Time Dilation track. This is essentially the “Scene Time Warp” function that we have in Maya. We can use curves to retime our animations while maintaining all of their existing data. In my opinion, this function is much more powerful than anything we have in Maya because, with Time Dilation, we are able to keyframe the control rig in real time, regardless of whether Time Dilation is on or off. With “Scene Time Warp”, we would typically have to bake everything down and then turn off the function to continue animating normally. This is not the case with Time Dilation.
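
For a mental model of what the curve is doing: a time warp is just a mapping from playback time to source time, and the original samples are read through that mapping rather than rewritten. Here's a tiny plain-Python sketch of the idea (not Unreal's implementation), assuming a single channel sampled with linear interpolation.

    import numpy as np

    # Conceptual sketch of curve-based retiming: the source samples are never modified;
    # a warp curve maps playback time to source time and we sample through it. This is
    # the idea behind Time Dilation / Scene Time Warp, not Unreal's implementation.
    source_times = np.arange(0, 101, dtype=float)     # original MOCAP frames 0..100
    source_values = np.sin(source_times * 0.1)        # stand-in for one bone channel

    def sample(source_t):
        """Linearly sample the original channel at a (possibly fractional) source time."""
        return float(np.interp(source_t, source_times, source_values))

    def warp(playback_t):
        """Warp curve: first half plays at half speed, second half at 1.5x to catch up."""
        return 0.5 * playback_t if playback_t <= 50 else 25.0 + 1.5 * (playback_t - 50)

    retimed = [sample(warp(t)) for t in range(0, 101)]  # retimed playback, source data intact
    print(retimed[0], retimed[50], retimed[100])        # endpoints still match the source ends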

Bake to Control Rig

Once I established the timing, I was ready to move on to baking the animation onto the control rig. Don’t worry though, like I said before, the timing can be changed whenever you want using the Time Dilation tool.

There is a function called Bake to Control Rig that can be accessed by right-clicking your SkelMesh in the sequencer. Typically, I'm used to baking out the skeleton that's being driven by the control rig. But here, we are baking out the control rig, where its position/rotation is determined by the skeletal animation.
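
My mental model of it is the inverse of a normal bake: walk every frame of the skeletal animation, read where each bone ended up, and key the matching control there. A rough sketch of that loop with made-up data structures (not Unreal's API):

    # Rough mental model of "Bake to Control Rig": instead of baking the control rig
    # down to bones, we read each bone's transform per frame from the imported MOCAP
    # and key the matching control there. Data structures are invented; not Unreal's API.
    from collections import defaultdict

    mocap = {                                        # frame -> bone -> (x, y, z), kept short here
        0: {"foot_l": (0.0, 0.0, 0.0), "hand_r": (40.0, 120.0, 10.0)},
        1: {"foot_l": (0.0, 0.0, 0.0), "hand_r": (45.0, 125.0, 10.0)},
    }
    control_for_bone = {"foot_l": "foot_l_ik_ctrl", "hand_r": "hand_r_fk_ctrl"}  # hypothetical names

    control_keys = defaultdict(dict)                 # control -> frame -> transform
    for frame, bones in sorted(mocap.items()):
        for bone, transform in bones.items():
            control = control_for_bone.get(bone)
            if control is not None:
                control_keys[control][frame] = transform   # key the control where the bone was

    print(dict(control_keys))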

So now I have my asset rigged with the UE Mannequin control system and I'm able to keyframe these controls. You can see that there are some issues, with some controls being in the wrong space and some rotation values being over-driven. These are some of the limitations of “borrowing” a control rig system from the UE Mannequins… sometimes things don't line up perfectly. Again, I believe this is most likely due to the proportions not lining up 1 to 1. This is where your skills as an animator come into play: we have to fix these values and plant the feet, and we're able to do that now since we have an IK rig.

Animate Traditionally

Everything is set up now, and we can animate as usual. We have access to a variety of tools already available to us in Unreal Engine (tweener), but we don't have the luxury of using animBot or any other third-party tool that we're used to using in Maya. I'm still in the process of figuring out animation layers, which are almost required when working with any sort of MOCAP data, but this is as far as I've gotten for now.

What I learned

I learned two things during this research project:

  1. The fundamentals of animation matter most when it comes to work like this. We might not have all of the fancy tools that we have in Maya, but if you're a strong animator with a good understanding of the fundamentals, all you really need is the dope sheet and the graph editor.

  2. These are all tools in the end. Maya, Unreal Engine, Blender, Cascadeur, etc. They are all tools that help animators achieve their desired motions. They also build an ecosystem/workflow that makes it easier for different artists to collaborate. Anyone can learn to implement a tool quickly. But understanding how to actually animate - that will take years…

David Lee, Animator