Once you have footage for a performance and a MetaHuman Identity representing the performer, you are ready to turn the performance into animation. This is where the MetaHuman Performance Asset comes into play.
Asset Layout
The MetaHuman Performance Asset editor. Different regions of the UI are described in detail below.
The Performance Asset itself has a simpler and more specialized toolkit than the MetaHuman Identity, but most of the interactions are similar between the two.
The Performance Asset interface consists of the following areas:
Toolbar
Viewport
Details panel
Sequencer
Toolbar (1)
| Button | Description |
|---|---|
| Process | Once your Asset is fully configured, click this button to start processing the footage. The result of this process is internal to the Asset and not yet available in the Unreal project. |
| Cancel | If footage processing is running, cancels the processing. |
| Export Animation | Once the internal processing of the footage has completed, this opens an export dialog where you can configure the location and target Asset of the result, in the form of an Animation Sequence. Any MetaHuman-compatible rig in the project is a valid target Asset. |
| Export Level Sequence | While the Export Animation option produces an Animation Sequence Asset, choosing this option also: optionally exports elements such as the video and audio; applies the animation directly to the Identity's rig. This is useful when you want a one-click inspection setup. |
Viewport (2)
The controls in this Viewport are a subset of the ones in the MetaHuman Identity Asset Editor. Everything written for that editor applies here, except that the MetaHuman Performance has no selection context, because it has no Component Tree to provide one.
Details Panel (3)
A standard Unreal Engine Asset Details panel. Unlike other tool-carrying Assets, which tend to be configured through toolbar commands, the MetaHuman Performance Asset is operated on directly here.
Sequencer (4)
A standard subset of the Sequencer.
While you can make changes in this view, we do not recommend changing anything in the tracks contained here.
There are two things you can safely change if needed:
The playback range offers direct control not only over playback, but also over the processing interval (also available in the Asset's Details panel).
At the top-right corner, you can configure the time display, including the type of Timecode, if one is available.
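When matching the Sequencer's timecode display to absolute frame indices, the conversion is straightforward for non-drop-frame timecode. The helper below is an illustrative sketch, not part of the MetaHuman toolset, and assumes a non-drop-frame rate:

```python
# Illustrative only: converting a non-drop-frame HH:MM:SS:FF timecode
# to an absolute frame index at a given frame rate.
def timecode_to_frame(hh: int, mm: int, ss: int, ff: int, fps: int) -> int:
    """Convert HH:MM:SS:FF timecode to a frame count (non-drop-frame)."""
    return ((hh * 3600 + mm * 60 + ss) * fps) + ff

print(timecode_to_frame(0, 1, 30, 12, 60))  # 90 s * 60 fps + 12 = 5412
```

Drop-frame rates (such as 29.97 fps) skip frame numbers periodically and need a different formula.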
Processing Requirements
The MetaHuman Performance requires the following minimal setup to run:
Footage Capture Data needs to reference the performance’s footage.
MetaHuman Identity needs to reference a MetaHuman Identity Asset that's been configured for the performer as they appear in the footage being processed.
Once these two attributes are configured correctly, you’re ready to start processing footage.
The head and neck transformation needs to be registered against a rest pose of sorts. By default, this is done automatically by analyzing all solved frames, and finding the "most front facing frame".
If you have a specific frame you wish to use that shows what you consider the head rest pose, you can disable the automatic detection in the Advanced attributes section and manually enter the reference frame (which must fall inside the solved range).
If no frame in the take presents a rest pose, the resulting animation will have some angular offset "baked" into it that will need correcting. In practice this only applies to statically mounted shots, because head-mounted devices don't capture meaningful head movement in the first place.
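The solver's actual rest-pose detection is internal and not exposed, but conceptually, picking the "most front-facing frame" amounts to finding the frame whose head rotation deviates least from straight-on. This sketch illustrates that idea with hypothetical per-frame yaw/pitch angles:

```python
import math

# Conceptual sketch only (not the actual solver logic): pick the frame
# whose combined yaw/pitch deviation from straight-on (0, 0) is smallest.
def most_front_facing(head_angles: list[tuple[float, float]]) -> int:
    """Return the index of the most front-facing frame, given
    (yaw, pitch) angles in degrees for each solved frame."""
    return min(
        range(len(head_angles)),
        key=lambda i: math.hypot(head_angles[i][0], head_angles[i][1]),
    )

angles = [(12.0, 3.0), (1.5, -0.5), (8.0, 6.0)]
print(most_front_facing(angles))  # frame 1 is closest to front-facing
```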
You can process a maximum of 36,000 frames (equivalent to 10 minutes of footage at 60fps, or 20 minutes at 30fps). Your Capture Data can be longer than this maximum, but the range you process cannot exceed it.
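The duration equivalents above follow directly from the frame limit and the frame rate, and checking a take against the limit is simple arithmetic:

```python
# Checking a take against the 36,000-frame processing limit described above.
MAX_FRAMES = 36_000

def max_minutes(fps: int) -> float:
    """Longest processable duration, in minutes, at a given frame rate."""
    return MAX_FRAMES / fps / 60

print(max_minutes(60))  # 10.0 minutes at 60 fps
print(max_minutes(30))  # 20.0 minutes at 30 fps
```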
To learn more about the optimal settings for this asset, read the Recommended Unreal Engine Project Settings document.
Time/Quality Parameters
Solving animation can be a processing-intensive, lengthy operation.
Below you will find several options that reduce iteration time at the cost of accuracy. These options let you inspect the quality of the results and perform quick dry runs before committing to the final, more expensive solve.
The most common time tradeoff is the number of frames processed: set the Start Frame to Process and End Frame to Process attributes to bound the footage that's solved, which directly affects the output.
The Solve Type has three options: Preview, Standard, and Additional Tweakers. While not necessarily "quality" parameters, they are ordered by increasing processing time as follows:
The Preview Mode solver is very quick; it's used to offer frame previews while the animation is being solved. In some cases this might be all you need, while in others you can combine it with a heavily reduced frame range for a dry run on your footage to check that everything is in order.
The Standard solver is a full-quality solver that produces animation for a large number of channels, but not every channel. This is the typical setting for most final-quality solves.
The Additional Tweakers solver is similar to the Standard solver, but also produces animation for some additional channels specified on the Tweaker controls.
The Skip Filtering attribute determines whether the animation curves are post-processed for smoothness. If you have your own filters, you can skip this step to obtain the unprocessed curves.
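The engine's actual smoothing filter is not documented here; the sketch below only illustrates the general idea of post-processing a noisy animation curve for smoothness, using a simple centered moving average as a stand-in:

```python
# Illustrative stand-in for curve smoothing: a centered moving average.
# Window edges are clamped so the output has the same length as the input.
def smooth_curve(values: list[float], radius: int = 1) -> list[float]:
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

raw = [0.0, 1.0, 0.0, 1.0, 0.0]  # a jittery curve
print(smooth_curve(raw))         # high-frequency jitter is damped
```

Skipping the step and applying your own filter offline gives you full control over how much high-frequency detail survives.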
The Skip Per Vertex Solve attribute determines whether per-vertex solving is enabled during processing. Per-vertex solving is a more accurate but time-consuming option that can be applied by the Standard or Additional Tweakers solver. It is typically only worth using when your final target MetaHuman is a high-quality digital-double rig, so it is not generally recommended.
Lastly, the Audio to Tongue solver requires audio, and your take might not have audio, or simply not show or require tongue animation at all. Skip Tongue Solve offers control on whether Audio to Tongue runs or not.
It's worth noting that while the tongue solver does have a processing cost, it is very low, and skipping the step won't save much time relative to the rest of the per-frame solve.
Preview and Inspection
There are several features that help you visualize the quality of the results.
Use the Override Visualization Mesh to select a different Skeletal Mesh than the one produced by the MetaHuman Identity fitting process. This has no effect on the animation itself; it lets you preview results on a potential target rig while still working on the performance.
Head Movement can be previewed three ways:
Disabled, which will suppress neck and head rigid transformation to keep the head centered in frame. Useful when you want to inspect a very stable face and care less about the animation feeling natural.
Transform Track, which will only apply a rigid transformation to the head to track the camera as closely as possible. Useful when you want to inspect the relationship between footage and the facial animation.
Control Rig, which features the full neck solution. This is particularly useful for static mount shots (not head mounted) where you want to have an idea about the full final result.
In the processing parameters, you can also enable Skip Preview (disabled by default) to turn off the display of solved frames while they are being processed. Skip Preview is unavailable when the solver is in Preview Mode.
MetaHuman Camera Calibration
The MetaHuman Camera Calibration Asset is a very specific Asset whose only purpose is to bundle together and index lensing information. The lens model itself is the same one long used by the Unreal Engine Virtual Production toolkit.
This Asset is usually created automatically by the ingest process.
MetaHuman Camera Calibration Asset