You can animate a MetaHuman character using depth data captured with the TrueDepth camera on an iPhone or iPad, or with a stereo head-mounted camera (HMC). When using an iPhone or iPad, you also need the Live Link Face application. This capability is not supported on Android devices.
As part of capturing performances using these devices, you need to capture a neutral take from which to create the MetaHuman Identity. For HMC devices, an additional calibration take (using the calibration board) is required. Refer to the Facial Performance Capture Guidelines for more information.
To configure your project for offline animation from depth data, make sure the MetaHuman Animator and MetaHuman Animator Depth Processing plugins are enabled. You may also need the Live Link Hub application and Capture Manager Editor plugin to ingest captured data.
Refer to the Python Scripting page for example scripts that demonstrate how to automate this process as part of a larger performance capture pipeline. Further examples are available for Capture Manager in Live Link Hub.
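As a quick sanity check, you can probe the `unreal` module from the editor's Python console to confirm that the MetaHuman scripting classes are exposed, which is a reasonable proxy for the required plugins being enabled. This is a minimal sketch, not one of the official examples; the class names it checks are assumptions based on the plugin's reflected API and may vary between engine versions.

```python
import unreal

# Classes assumed to be exposed by the MetaHuman Animator plugins.
# These names are assumptions and may differ between engine versions.
REQUIRED_CLASSES = ["MetaHumanPerformance", "MetaHumanIdentity"]

for class_name in REQUIRED_CLASSES:
    if hasattr(unreal, class_name):
        unreal.log(f"Found unreal.{class_name}")
    else:
        unreal.log_warning(
            f"unreal.{class_name} not found. Check that the MetaHuman "
            "Animator plugins are enabled for this project."
        )
```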
Prerequisites
To generate offline animation from depth data, you need the following (a scripted check of these prerequisites appears after the list):
An Unreal Engine 5.6 (or later) project.
An Epic Games account (used when creating the MetaHuman Identity).
The MetaHuman Animator and MetaHuman Animator Depth Processing plugins enabled.
The Live Link Hub application installed and the Capture Manager Editor plugin enabled.
One or more performance takes captured using an iPhone or iPad TrueDepth camera or a stereo camera pair.
If using a stereo camera pair, a calibration take.
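Before processing, it can help to confirm that the ingested capture data assets actually exist in your project. The sketch below uses the standard `unreal.EditorAssetLibrary` API; the asset paths are hypothetical placeholders for your own ingested takes.

```python
import unreal

# Hypothetical paths to an ingested performance take and a neutral take.
# Replace these with the paths Capture Manager created in your project.
ASSET_PATHS = [
    "/Game/Captures/Actor01/Take01_CaptureData",
    "/Game/Captures/Actor01/Neutral_CaptureData",
]

for path in ASSET_PATHS:
    if unreal.EditorAssetLibrary.does_asset_exist(path):
        unreal.log(f"OK: {path}")
    else:
        unreal.log_warning(f"Missing prerequisite asset: {path}")
```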
Generating Animation from Depth Data Workflow
A video tutorial for this workflow is available on YouTube.
This video was made using an older version of MetaHuman Animator. Steps and menus may be different if you are using a more recent version.
The workflow to generate animation from depth data consists of the following steps:
Ingest captured footage using Capture Manager in Live Link Hub. You must use one of the Stereo Video, Live Link Face, or Take Archive devices.
For footage captured using a stereo camera pair, generate calibration from the calibration take.
For footage captured using a stereo camera pair, generate depth data for each take.
Create a MetaHuman Identity for each actor.
Generate animation for each take using the MetaHuman Identity and a MetaHuman Performance asset (a scripted sketch of this step follows the list).
Export the animation curves as an Animation Sequence or Level Sequence asset (an export sketch also follows the list).
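The sketch below illustrates step 5, loosely following the pattern of the example scripts that ship with the MetaHuman plugin: load the ingested capture data and the MetaHuman Identity, create a MetaHuman Performance asset, point it at both, and start processing. Treat the class, factory, and property names (`MetaHumanPerformance`, `MetaHumanPerformanceFactoryNew`, `footage_capture_data`, `identity`, `start_pipeline`) as assumptions that may differ between plugin versions, and the asset paths as hypothetical.

```python
import unreal

# Load the ingested capture data and the actor's MetaHuman Identity
# (hypothetical paths; substitute your own).
capture_data = unreal.load_asset("/Game/Captures/Actor01/Take01_CaptureData")
identity = unreal.load_asset("/Game/MetaHumans/Actor01_Identity")

# Create a MetaHuman Performance asset to drive processing. The factory
# class name is an assumption based on the plugin's example scripts.
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
performance = asset_tools.create_asset(
    asset_name="Take01_Performance",
    package_path="/Game/Performances",
    asset_class=unreal.MetaHumanPerformance,
    factory=unreal.MetaHumanPerformanceFactoryNew(),
)

# Point the performance at the footage and the identity, then process.
# Property and method names are assumptions that may vary by version.
performance.set_editor_property("footage_capture_data", capture_data)
performance.set_editor_property("identity", identity)
performance.start_pipeline()
```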
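Step 6 can be scripted in a similar way once processing has finished. The export utility and settings classes used here follow the plugin's example export script; treat their names and properties as assumptions and check the Python Scripting page for the versions that match your plugin.

```python
import unreal

# Load the processed performance (hypothetical path).
performance = unreal.load_asset("/Game/Performances/Take01_Performance")

# Configure the export. The settings class and its properties are
# assumptions based on the plugin's example scripts.
settings = unreal.MetaHumanPerformanceExportAnimationSettings()
settings.set_editor_property("show_export_dialog", False)  # run unattended
settings.set_editor_property("package_path", "/Game/Animations")
settings.set_editor_property("asset_name", "Take01_FaceAnim")

anim_sequence = unreal.MetaHumanPerformanceExportUtils.export_animation_sequence(
    performance, settings
)
if anim_sequence:
    unreal.log(f"Exported: {anim_sequence.get_path_name()}")
```

Both sketches are intended to run in the editor's Python environment, for example from the Output Log's Python console or an editor startup script.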