Capture Data exists for both meshes and footage. It is a required Asset that references the source data needed for tracking, and is typically created by an automated process. Some pipeline, automation, or accident-recovery scenarios might require you to create and populate Capture Data manually.
Every aspect of a Capture Data Asset can be configured manually, but be aware that incompletely or incorrectly configured Capture Data will cause unpredictable issues that can be difficult to troubleshoot.
Mesh Capture Data
Capture Data for meshes is commonly created automatically from a MetaHuman Identity Asset when you use Mesh to MetaHuman to create components from a mesh source. This type of Capture Data is only useful for the Mesh to MetaHuman process accessible from the MetaHuman Identity Asset.
The reference mesh can be either a Skeletal Mesh or a Static Mesh Asset, and it can be anywhere inside the Unreal Project.
A populated Mesh Capture Data Asset
Footage Capture Data
Capture Data for footage is commonly created by the Capture Manager and can be an input to either or both MetaHuman Identity and Performance Assets.
An assorted set of data and metadata is required when working with performance capture. Some of that data is optional but commonly available (e.g. audio).
The ingestion process is the common way to get Footage Capture Data Assets. This process also creates a like-named folder that, for any device, should contain the following:
Audio as a SoundWave Asset
Calibration as a MetaHuman Camera Calibration Asset
A lens file for the Depth component of the Footage
A lens file for the RGB Video component of the Footage
An Image MediaSource Asset for the RGB Video
An Image MediaSource Asset for the Depth
For iPhone class devices, this folder will also contain the following:
A subfolder containing the transcoded media for the Depth
A subfolder containing the transcoded media for the RGB Video
Stereo Cameras will have some variability, as follows:
They might have an additional lens file (for the additional RGB Video channel).
They optionally might include a subfolder containing the inferred Depth media (depending on Capture Source settings).
They will link to out-of-project files for the RGB Video, and might do so for Depth media (depending on Capture Source settings).
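The per-device expectations above can be sketched as a small validation helper. This is an illustrative sketch only: the item names below are assumptions chosen for demonstration, not the exact asset or folder names the ingestion process produces, and "iPhone"/"StereoCamera" stand in for the real Device Class values.

```python
# Illustrative sketch: item names are assumed for demonstration and are not
# the exact asset names the ingestion process produces.
COMMON_ITEMS = [
    "Audio",             # SoundWave Asset
    "Calibration",       # MetaHuman Camera Calibration Asset
    "DepthLensFile",     # lens file for the Depth component
    "VideoLensFile",     # lens file for the RGB Video component
    "VideoMediaSource",  # Image MediaSource Asset for the RGB Video
    "DepthMediaSource",  # Image MediaSource Asset for the Depth
]

def missing_items(present: set, device_class: str,
                  stereo_infers_depth: bool = False) -> list:
    """Return the expected items absent from an ingest folder's contents."""
    expected = list(COMMON_ITEMS)
    if device_class == "iPhone":
        # Transcoded media subfolders for Depth and RGB Video.
        expected += ["Depth", "Video"]
    elif device_class == "StereoCamera":
        # Second RGB channel gets its own lens file (name assumed).
        expected.append("SecondVideoLensFile")
        if stereo_infers_depth:
            expected.append("Depth")  # inferred Depth media subfolder
    return [item for item in expected if item not in present]
```

For example, `missing_items({"Audio", "Calibration"}, "iPhone")` would list the lens files, media sources, and transcoded-media subfolders still to be accounted for.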
All of the information above is retained or referenced through the Capture Data Asset.
If you need to create and configure a Capture Data Asset manually, make sure it is complete and correctly configured.
It is important not to confuse the Device Class with the Device Model for iPhone devices.
The Device Class inside a Capture Data Asset is a fixed choice from limited options, and it’s the commonly known name of the model from a consumer point of view.
The Device Model is a less visible hardware identifier that doesn't necessarily line up with the public-facing model number, and is usually ahead of the consumer branding. For example, an iPhone 12 device is likely to have a model number of iPhone13,N.
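A small lookup makes the gap between the two concrete. The mapping below is a hypothetical, illustrative subset of Apple's hardware identifiers, not a complete or authoritative table:

```python
# Illustrative subset only: maps Apple hardware identifiers (Device Model)
# to the consumer-facing name (Device Class). Note the off-by-one branding.
DEVICE_MODEL_TO_CLASS = {
    "iPhone12,1": "iPhone 11",
    "iPhone13,2": "iPhone 12",
    "iPhone13,3": "iPhone 12 Pro",
    "iPhone14,5": "iPhone 13",
}

def consumer_name(device_model: str) -> str:
    """Return the consumer-facing name for a hardware identifier."""
    return DEVICE_MODEL_TO_CLASS.get(device_model, "Unknown")
```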
A populated Footage Capture Data Asset
Timecode Information
The Timecode for the Image/Depth Sequences can be viewed and set via the ImgMediaSource asset editor.
Timecode settings in the ImgMediaSource asset editor
The Timecode and Timecode Frame Rate for SoundWave assets can be viewed and set by selecting Scripted Asset Action > Set Timecode Info.
Set Timecode Scripted Asset Action
Enter the desired Timecode and Timecode Frame Rate values and click OK to set the values.
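The relationship between a Timecode value and its Timecode Frame Rate can be sketched as follows. This is a minimal, non-drop-frame illustration (drop-frame rates such as 29.97 fps need additional handling), showing why the frame rate you enter must match the footage: the same HH:MM:SS:FF string maps to a different frame count at a different rate.

```python
def timecode_to_frames(tc: str, fps: int) -> int:
    """Convert a non-drop-frame HH:MM:SS:FF timecode to a frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    if f >= fps:
        raise ValueError(f"frame field {f} out of range for {fps} fps")
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_timecode(frames: int, fps: int) -> str:
    """Convert a frame count back to a non-drop-frame HH:MM:SS:FF string."""
    s, f = divmod(frames, fps)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"
```

For example, `timecode_to_frames("01:00:00:00", 30)` yields 108000 frames, while the same string at 24 fps yields 86400, which is why a mismatched Timecode Frame Rate shifts everything downstream.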
Set Timecode Dialog