The MetaHuman Identity Asset is used to represent a performer (real or crafted). It hosts most of the plugin’s tools, and it’s used to conform the head’s Template (affecting both mesh and rig) and then generate a MetaHuman DNA and facial rig from any of the following sources:
A Static Mesh or Skeletal Mesh
Footage
An exact Template Mesh
Directly imported MetaHuman DNA
The MetaHuman Identity can be used to solve performance into an Animation Sequence through the MetaHuman Performance asset. Alternatively, a MetaHuman Identity asset can be used with a MetaHuman Character asset to conform the head from the identity.
There are no enforced requirements when using a MetaHuman Identity to solve a Performance, but in practice you should generate the MetaHuman Identity from the specific performer delivering the performance, and do so on the same class of device used to record that performance.
When you create a MetaHuman Identity Asset, its matching Skeletal Mesh is created right beside it. We recommend you avoid interacting with this asset directly. If you want access to this Skeletal Mesh (for example, to use Mesh to MetaHuman to quickly produce a head-only rig), we recommend that you duplicate it first.
Depending on whether the MetaHuman Identity Asset is based on a mesh or on footage, there are some differences in the Asset GUI, functionality, and configuration requirements. The following sections cover both use cases.
Capture Data (Mesh)
The Identity Asset GUI consists of the following components:
Guided Workflow Toolbar (1)
All tools and commands are available from the MetaHuman Identity menu, except for the Components from Template command, which is located under the Asset menu.
This Asset’s commands often need to be run in a specific sequence in order to conform an Identity Asset to data. Most workflow steps can’t be performed until the previous step has been completed.
For ease of use and expediency, the toolbar contains only the most commonly used subset of commands. Their state (clickable or not) and tooltips can guide you through the process from an empty asset all the way to preparing it to solve Performance Capture.
Component Tree (2)
The Component Tree works like its counterpart elsewhere in Blueprints. Enabling functionality on the MetaHuman Identity Asset requires adding components that host the data and attributes necessary for that functionality.
Since the majority of the time you will need the same six or seven components, we recommend that you create the component tree with the Components From… commands (either mesh or footage). These populate and configure the tree correctly for the selected source data. The only optional component is the teeth pose, which is used to refine a rig after it’s been retrieved from the backend, before MetaHuman Performance is used to solve footage into animation.
The Component Tree also provides the context for other parts of the GUI. Most importantly, the Promotion Timeline and the Markers Outliner are populated with the specific contents of the selected pose, and appear empty if something other than a pose is selected.
View Buffer Settings (3)
The MetaHuman Identity asset has a unique viewport, with different viewport modes to inspect and validate results of various capture processes. Because of these multiple view modes, the viewport has two buffers, A and B. The two buffers can be individually configured from the menus located at the two corners of the viewport.
Some options, such as tracking curves and vertices, will only be available in the Toggle view mode. If you see something greyed out in these buffers' settings, try switching the viewport to Toggle, and check what pose you have selected in the Component Tree.
Viewport and Camera Settings (4)
The Viewport has three modes:
Single Pane with A/B buffer toggle
Single Pane with A/B buffer wipe
Dual Pane Mode
The A/B toggle will only work in the first mode, while the camera settings are shared between all views and panes. The viewport responds to both the Components Tree selection context and the Promotions Timeline selection context.
We recommend doing most of the work in Single Pane toggle, and switching to other modes only for reviewing.
Promotions Timeline (5)
The Promotions Timeline is only available when a pose is selected. It shows all frames that have been promoted to be used for an Identity Solve, and it has some contextual functionality.
Three buttons are always available:
Promoting a frame
Demoting a promoted frame
Camera free roaming mode
Promoting a frame turns the current view into a promoted frame. This works from Free Roaming mode, and it can also duplicate an existing frame if one is already selected. Any promoted frame appears as an additional button on the Promotions Timeline.
If a promoted frame is selected and not locked, any camera operation on that frame will change that frame.
Demoting a frame removes it from the Promotions Timeline.
Demoting a frame can be undone and redone, but once the undo buffer is cleared, promoting the same frame again won’t bring back any tracker modifications you made to it. Demote with care.
The Free Roaming button provides a “free” frame that you can always go to in order to explore the space and promote from, without losing your work in other frames.
Double-clicking a Promoted Frame is a shortcut to rename it.
Right-clicking the selected Promoted Frame brings up further options:
| Option | Description |
|---|---|
| Lock Camera | Prevents any accidental or automated changes to the camera for that frame. |
| Autotracking on/off | When the camera is unlocked and Autotracking is on, markers are automatically tracked when you release the camera navigation controls. When off, the frame has to be tracked manually with the Track Active Frame command. |
| Rename <name of frame> | Renames that frame. |
| Set/Remove Front View | A MetaHuman Identity requires that one frame, and only one, is flagged as the front frame. Right-clicking gives direct access to this functionality. Setting a new frame as the Front Frame unflags the previously flagged frame, if one exists. It’s possible to unflag all frames without flagging a new front frame, in which case fitting and solving in this MetaHuman Identity won’t work until a front frame is flagged again. |
| Demote <name of frame> | Same as the Demote Frame button in the Promotions Timeline. |
| Track Markers (Active Frame) | Same as the Track Active Frame command in the toolbar. |
Unselected Promoted Frames have a reduced subset of the above functionality.
Markers Outliner (6)
The Markers Outliner relates to the currently selected Promoted Frame. Changing what Promoted Frame is selected changes the contents of the Outliner. Marker layout is preserved separately for each Promoted Frame.
Markers are curves with control vertices that display the tracking of facial features used in almost all of our workflows. The list of possible markers is static, and they are grouped in named groups. Only a subset of those markers (limited to the front view) can be tracked automatically. Most of the time, those are all that’s needed.
On occasion you might want to further define how the input data and the resulting MetaHuman Identity correlate, in which case you could enable more markers, and manually set them in place so they correspond to the same features on your mesh.
From the button beside each marker (and group), you can toggle its visibility and whether or not it participates in the solve.
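Purely as a mental model (these are not the plugin’s internal types, and every name here is hypothetical), the relationship described above between promoted frames and marker state can be sketched like this:

```python
# Hypothetical model of marker state, for illustration only.
from dataclasses import dataclass, field

@dataclass
class MarkerCurve:
    name: str                    # e.g. "eye_crease_l" (hypothetical identifier)
    group: str                   # the named group the curve belongs to
    control_vertices: list[tuple[float, float]] = field(default_factory=list)
    visible: bool = True         # visibility toggle
    used_in_solve: bool = True   # whether the curve participates in the solve

@dataclass
class PromotedFrame:
    name: str
    camera_locked: bool = False
    # Marker layout is preserved separately for each promoted frame.
    markers: dict[str, MarkerCurve] = field(default_factory=dict)
```

The key point the sketch encodes is that each promoted frame owns its own marker layout, which is why the Outliner’s contents change with the frame selection.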
In general, we strongly recommend working iteratively and adding markers minimally:
Start from the autotracking markers with just the frontal frame, and activate and track markers for additional features only if those features (e.g. part of the ear) don’t match.
Don’t overlap marker activity across frames. For example, if you added side frames for the ear markers, ensure you don’t add any of the markers active in the front frame to these, and don’t add the ear markers to the front frame.
Auto-tracking markers should only be active in the front frame.
Don’t activate batches of markers all at once; even if things improve, you won’t know which marker improved your results, and it’s possible the improvement would have been larger from one strong marker than from several, some of which diminish the strength of that one good marker.
Don’t activate markers on a frame unless the entirety of that feature is well in view. For example, don’t add ear markers to a front frame.
Capture Data (Footage)
The vast majority of functionality is similar for footage and mesh Capture Data. The Guided Workflow Toolbar, view modes, Component Tree, and Markers Outliner are all identical regardless of Capture Data type.
For footage, there is one significant addition: an embedded Sequencer view used to navigate the footage and to choose (as well as visualize) the frames that will be promoted.
The sections below capture the parts of the MetaHuman Identity Asset that are specific to an Asset created from footage.
View Buffer Settings (3)
The View Buffer Settings have some differences which relate to the differences in source data.
You have the following additional options:
| Option | Description |
|---|---|
| Undistort | Toggle between the footage as ingested and the processed footage with the lens distortion removed. |
| Depth Mesh | Toggle the meshing of the Depth component of the footage linked from the Capture Data. |
Both of these are diagnostic options; they have no effect on the tools, and can’t be directly manipulated.
Viewport and Camera Settings (4)
In the Camera Settings, the Depth Data Near and Far parameters become unlocked. These are diagnostic options with no effect on the results, but they can help greatly in reducing noise and sharpening the visualization of the depth data.
Promotions Timeline (5)
The Promotions Timeline is almost identical in functionality to its counterpart for mesh Capture Data. The only difference is that, for footage, frames are always automatically locked and tracked when you promote them.
Our recommendations also change because markers operate slightly differently. We recommend the following for the Neutral Pose:
For Stereo cameras, we recommend you only promote one frontal frame.
For consumer mobile devices (iPhone, iPad), we recommend using three frames: one frontal frame and two slightly to the sides.
We advise against using more than three frames in either case.
When using multiple frames, the same markers should be present in all frames so that they can be correlated.
This is the opposite of what we would recommend for meshes.
Add markers very sparingly, and favor markers that remain visible in all frames. Empirically, the Eyes Crease markers are among the more successful additions for eye morphologies that the MetaHuman Identity would otherwise struggle to fit.
Footage Sequencer (7)
This view is a variation of the normal Sequencer, specialized for finding frames suitable for promotion. It contains some automatically managed channels that help identify which frames of the footage have been promoted.
MetaHuman Identity Conforming
When created, a MetaHuman Identity Asset always contains the template of a head mesh in a default state, as well as a reference to the matching rig.
The following workflow doesn’t add to that so much as configure it to look like someone specific (real or imagined). We generally refer to this process as conforming, and it consists of the following steps:
1. The template mesh point positions are fit to the volume of the source data.
2. An approximation of that volume from our database is found.
3. The rig is configured to be well behaved with that approximation.
4. A final shape offset between the actual volume and our approximation is preserved as a delta.
5. Optionally, an adjustment to the teeth registration can also be added to the local rig.
Some of these steps happen locally on your workstation, and some require access to our online backend, because our database is far too large to obtain those results any other way.
It’s worth noting that Mesh to MetaHuman refers to the submission to our backend of the Template Mesh that is always referenced from the MetaHuman Identity asset, and not to a mesh referenced in the Capture Data. Mesh to MetaHuman works the same way regardless of what data was used to conform it, be it a Static or Skeletal Mesh, Footage, or MetaHuman DNA.
The mesh that is rigged by the backend is the approximation from step 2. The difference in volume between that approximation and the input data is then added as a displacement shape on top of it.
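To make the displacement mechanism concrete, here is a minimal illustrative sketch (not plugin code; the arrays, the vertex count, and the threshold are all stand-in values) of how the preserved delta relates the two meshes, and why its magnitude matters:

```python
import numpy as np

# Stand-ins for the template fit and the database approximation; they share
# the MetaHuman standard topology, so vertices correspond one-to-one.
rng = np.random.default_rng(0)
fitted_vertices = rng.random((100, 3))                               # fit to the source volume
approx_vertices = fitted_vertices + rng.normal(0.0, 0.05, (100, 3))  # database match

# The conforming delta, preserved as a displacement on top of the rigged
# approximation: rigged approximation + delta reproduces the input volume.
delta = fitted_vertices - approx_vertices

# Large per-vertex deltas mark areas where animation is most likely to
# produce mesh artifacts, especially around the eyes and mouth.
magnitudes = np.linalg.norm(delta, axis=1)
risky = np.flatnonzero(magnitudes > 0.05)  # threshold is an arbitrary example value
print(f"max delta: {magnitudes.max():.4f}, vertices above threshold: {risky.size}")
```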
A large difference between the approximation retrieved from the database and the actual volume will affect the rig and the animation. The larger it is, especially in sensitive or highly articulated areas like the eyes and mouth, the easier it is for animation to produce mesh artifacts. Furthermore, because of this mechanism, and because the topology is always the standard MetaHuman topology, the results won’t always match every fine detail of the volume.
With additional time spent on the Asset and the right technical know-how, these two issues can be addressed or, in extreme cases, at least mitigated. The rest pose of the mesh and the skeleton can be altered through DNA Calib, and a precise fit of the MetaHuman topology can be “forced” through the system with the new Template input option.
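As an illustration of the DNA Calib route, the MetaHuman DNA Calibration library ships Python bindings that can load a DNA file, run calibration commands on it, and save a copy. The sketch below follows the load/save patterns from the library’s own examples; the file paths and the translation offset are placeholders, and the exact TranslateCommand signature is an assumption to verify against your version of the bindings:

```python
from dna import (BinaryStreamReader, BinaryStreamWriter, DataLayer_All,
                 FileStream, Status)
from dnacalib import CommandSequence, DNACalibDNAReader, TranslateCommand

def load_dna(path):
    stream = FileStream(path, FileStream.AccessMode_Read, FileStream.OpenMode_Binary)
    reader = BinaryStreamReader(stream, DataLayer_All)
    reader.read()
    if not Status.isOk():
        raise RuntimeError(f"Error loading DNA: {Status.get().message}")
    return reader

def save_dna(reader, path):
    stream = FileStream(path, FileStream.AccessMode_Write, FileStream.OpenMode_Binary)
    writer = BinaryStreamWriter(stream)
    writer.setFrom(reader)
    writer.write()
    if not Status.isOk():
        raise RuntimeError(f"Error saving DNA: {Status.get().message}")

# Wrap the source DNA in a calibration reader, apply an edit to the rest
# pose, and save the result as a new file (paths are placeholders).
source = load_dna("original.dna")
calibrated = DNACalibDNAReader(source)
commands = CommandSequence()
# Shifts both the neutral mesh and the neutral joints; the offset and the
# call signature are assumptions to check against your bindings.
commands.add(TranslateCommand([0.0, 1.0, 0.0]))
commands.run(calibrated)
save_dna(calibrated, "adjusted.dna")
```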
MetaHuman DNA Import
MetaHuman DNA files store a description of the facial geometry and rig. One such file can be used directly to conform an Identity Asset, without having to track and fit to geometry or footage.
MetaHuman DNA files can come from custom work saved through our DNA Calibration library, or be bundled directly with a MetaHuman you downloaded.
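Before conforming to one, a DNA file can be inspected with the dna Python bindings from the same DNA Calibration library. Here is a small sketch (the file name is a placeholder) that prints some basic facts about the rig the file describes:

```python
from dna import BinaryStreamReader, DataLayer_All, FileStream, Status

# Read every data layer of the DNA file.
stream = FileStream("metahuman.dna", FileStream.AccessMode_Read, FileStream.OpenMode_Binary)
reader = BinaryStreamReader(stream, DataLayer_All)
reader.read()
if not Status.isOk():
    raise RuntimeError(f"Error loading DNA: {Status.get().message}")

# Basic facts about the geometry and rig stored in the file.
print("character:", reader.getName())
print("joints:", reader.getJointCount())
for mesh_index in range(reader.getMeshCount()):
    name = reader.getMeshName(mesh_index)
    count = reader.getVertexPositionCount(mesh_index)
    print(f"  mesh {name}: {count} vertices")
```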