Once you have audio or video footage for a performance, you are ready to turn the performance into animation. This is where the MetaHuman Performance Asset comes into play. For depth data, a MetaHuman Identity representing the performer is also required.
Asset Layout
The Performance Asset itself has a simpler and more specialized toolkit than the MetaHuman Identity, but most of the interactions are similar between the two.
The Performance Asset interface consists of the following areas:
Toolbar (1)
| Button | Description |
| --- | --- |
| Process | Once your Asset is fully configured, click this button to start processing the footage. The result of this process is internal to the Asset and not yet available elsewhere in the Unreal project. |
| Cancel | Cancels footage processing if it is currently running. |
| Export Animation | Once the internal processing of the footage has completed, this opens an export dialog where you can configure the location and target Asset for the resulting Animation Sequence. Any MetaHuman-compatible rig in the project is a valid target Asset. |
| Export Level Sequence | Like Export Animation, this exports the animation, but it also optionally exports elements such as the video and audio, and applies the animation directly to the Identity rig. This is useful when you want a one-click inspection set. |
Viewport (2)
The controls in this Viewport are a subset of the ones in the MetaHuman Identity Asset Editor. Everything documented for that editor applies here, except that the MetaHuman Performance Asset has no selection context, as it has no Component Tree to provide one.
Details Panel (3)
A standard Unreal Engine Asset Details panel. Unlike other tool-carrying Assets, which tend to be configured through commands, the MetaHuman Performance Asset is operated directly from here.
The Input Type drop down is used to select the input and solve animation from Audio, Depth Footage, or Monocular Video (the default). The selection here will determine the remaining options in the Details Panel.
If you do not see the Depth Footage option in the Input Type drop-down, check that the MetaHuman Animator Depth Processing plugin is enabled in the project.
Sequencer (4)
A standard subset of the Sequencer.
While you can make changes in this view, we do not recommend changing anything in the tracks contained here.
There are two things you can safely change if needed:
The playback range controls not only playback, but also the processing interval (which is also available in the Asset Details).
At the top-right corner, you can configure the time display, including the Timecode type, when available.
Preview and Inspection
There are several features that help you visualize the quality of the results.
Use Override Visualization Mesh to select a different Skeletal Mesh than the one produced by the MetaHuman Identity fitting process. This has no effect on the animation itself; it lets you preview results on a possible target rig while still working on the performance.
Head Movement can be previewed three ways:
Disabled: Suppresses neck and head rigid transformation to keep the head centered in frame. Useful when you want to inspect a very stable face and care less about the animation feeling natural.
Transform Track: This only applies a rigid transformation to the head to track the camera as closely as possible. Useful when you want to inspect the relationship between footage and the facial animation.
Control Rig: Features the full neck solution. This is particularly useful for static mount shots (not head mounted) where you want to have an idea about the full final result.
In the processing parameters, you can also enable Skip Preview (disabled by default), and choose whether solved frames are displayed while they are being processed. Skip Preview will be disabled when the solver is in Preview Mode.
Next Up
MetaHumans in Unreal Engine
Use an assembled MetaHuman character in Unreal Engine.