A MetaHuman Identity is required to solve animation from depth data in the MetaHuman Performance asset. The identity is best created using the Mesh to MetaHuman from Video Footage workflow.
Although there are no enforced requirements when using a MetaHuman Identity to solve a Performance, in practice you should create the MetaHuman Identity from the specific performer delivering the performance, using the same class of device that is used to record that performance.
If you are recording the same performer on multiple devices, or their appearance changes significantly (for example, changes in make-up or facial volume), we recommend capturing footage and creating a separate MetaHuman Identity for each appearance and each device.
MetaHuman Identity Creation Workflow
The workflow to create a MetaHuman Identity from video footage consists of the following steps:
Follow the Mesh to MetaHuman from Video Footage workflow to create a MetaHuman Identity that has been auto-rigged.
(Optional) Add a Teeth Pose. Adding a teeth pose registers the position of the teeth inside the head and considerably improves the animation solved by the MetaHuman Performance asset. Although optional, this step is strongly recommended.
Add the Teeth Pose component using the Component Tree.
Select a frame where you can clearly see the corners of two incisors and as much of the teeth surface as possible, then follow the same process to promote and track the frame. This only needs to be done for one frame; there is no need to add more.
Depending on the performer's bite, you may only see the upper or lower incisors. Keep the markers active for the incisors you can see and disable the others.
Click Fit Teeth.
Click Prepare for Performance. This step is required before the identity can be used with the MetaHuman Performance asset. It is computationally expensive and can take a few minutes to complete.
Next Up
Generate Animation
Use a MetaHuman Performance asset to solve animation from depth data.