unreal.MetaHumanPerformance

class unreal.MetaHumanPerformance(outer: Object | None = None, name: Name | str = 'None')

Bases: Object

MetaHuman Performance Asset

Produces an Animation Sequence for the MetaHuman Control Rig by tracking facial expressions in video footage from a Capture Source (imported through Capture Manager), using a SkeletalMesh obtained through the MetaHuman Identity asset toolkit.

C++ Source:

  • Plugin: MetaHuman

  • Module: MetaHumanPerformance

  • File: MetaHumanPerformance.h

Editor Properties: (see get_editor_property/set_editor_property)

  • audio (SoundWave): [Read-Write] Audio of performance used with the Audio data input type

  • audio_channel_index (uint32): [Read-Write] Specify the audio channel used to solve into animation

  • audio_driven_animation_models (AudioDrivenAnimationModels): [Read-Write] The models to be used by audio driven animation

  • audio_driven_animation_output_controls (AudioDrivenAnimationOutputControls): [Read-Write]

  • audio_driven_animation_solve_overrides (AudioDrivenAnimationSolveOverrides): [Read-Write] Settings to change the behavior of the audio driven animation solve

  • auto_choose_head_movement_reference_frame (bool): [Read-Write] If set to true, automatically picks the most front-facing frame as the reference frame for Control Rig head movement calculation; defaults to true. Changing this will cause a re-bake of Control Rig data

  • camera (str): [Read-Write] Name of camera (view) in the footage capture data calibration to use for display and processing

  • capture_data_config (str): [Read-Only] Display name of the config to use with the capture data

  • control_rig_class (type(Class)): [Read-Write]

  • default_solver (MetaHumanFaceAnimationSolver): [Read-Write] Solver parameters for processing the footage

  • default_tracker (MetaHumanFaceContourTrackerAsset): [Read-Write] Tracker parameters for processing the footage

  • downmix_channels (bool): [Read-Write] Downmix multi-channel audio before solving into animation

  • end_frame_to_process (uint32): [Read-Write] The frame to end processing with

  • focal_length (float): [Read-Only] The estimated focal length of the footage

  • footage_capture_data (FootageCaptureData): [Read-Write] Real-world footage data containing the performance

  • generate_blinks (bool): [Read-Write] Flag indicating if we should generate blinks

  • head_movement_mode (PerformanceHeadMovementMode): [Read-Write] Head movement type

  • head_movement_reference_frame (uint32): [Read-Write] Which frame to use as the reference frame for head pose (if Auto Choose Head Movement Reference Frame is not selected); defaults to the first processed frame. Changing this will cause a re-bake of Control Rig data

  • head_stabilization (bool): [Read-Write] Reduces noise in head position and orientation.

  • identity (MetaHumanIdentity): [Read-Write] A digital double of the person performing in the footage, captured in the MetaHuman Identity asset

  • input_type (DataInputType): [Read-Write] Enum to indicate which data input type is being used for the performance

  • maximum_scale_difference_from_identity (float): [Read-Write] The maximum allowed percentage difference in estimated head scale between Identity and Performance. Above this value a diagnostic warning will be flagged.

  • maximum_stereo_baseline_difference_from_identity (float): [Read-Write] The maximum allowed percentage difference in stereo baseline between Identity and Performance CaptureData camera calibrations. Above this value a diagnostic warning will be flagged.

  • minimum_depth_map_face_coverage (float): [Read-Write] The minimum percentage of the face region which should have valid depth-map pixels. Below this value a diagnostic warning will be flagged.

  • minimum_depth_map_face_width (float): [Read-Write] The minimum required width of the face region on the depth-map in pixels. Below this value a diagnostic warning will be flagged.

  • mono_smoothing_params (MetaHumanRealtimeSmoothingParams): [Read-Write] Smoothing parameters to use for mono video processing

  • neutral_pose_calibration_alpha (double): [Read-Write] Neutral pose calibration alpha parameter, defaults to 1. Changing this will cause a re-bake of Control Rig data

  • neutral_pose_calibration_curves (Array[Name]): [Read-Write] Set of curve names to apply neutral pose calibration to. Changing this will cause a re-bake of Control Rig data

  • neutral_pose_calibration_enabled (bool): [Read-Write] If set to true, performs neutral pose calibration for the mono solve; defaults to false. Changing this will cause a re-bake of Control Rig data

  • neutral_pose_calibration_frame (uint32): [Read-Write] Which frame to use as the neutral pose calibration frame for the mono solve (if Enable Neutral Pose Calibration is selected); defaults to the first processed frame. Changing this will cause a re-bake of Control Rig data

  • on_processing_finished_dynamic (OnProcessingFinishedDynamic): [Read-Write] Dynamic delegate called when the pipeline finishes running

  • processing_excluded_frames (Array[FrameRange]): [Read-Only] Frames that the processing has identified as producing bad results and should not be exported

  • realtime_audio (bool): [Read-Write] Flag indicating if we should use realtime audio solve

  • realtime_audio_lookahead (int32): [Read-Write] The amount of time, in milliseconds, that the audio solver looks ahead into the audio stream to produce the current frame of animation. A larger value will produce higher quality animation but will come at the cost of increased latency.

  • realtime_audio_mood (AudioDrivenAnimationMood): [Read-Write] The mood of the realtime audio driven animation solve

  • realtime_audio_mood_intensity (float): [Read-Write] The mood intensity of the realtime audio driven animation solve

  • show_frames_as_they_are_processed (bool): [Read-Write] Flag indicating if editor updates current frame to show the results as frames are processed

  • skip_diagnostics (bool): [Read-Write] Flag indicating whether processing diagnostics should be calculated during processing

  • skip_filtering (bool): [Read-Write] Flag indicating if filtering should be skipped

  • skip_per_vertex_solve (bool): [Read-Write] Flag indicating if per-vertex solve (which is slow to process but gives slightly better animation results) should be skipped

  • skip_preview (bool): [Read-Write] Flag indicating if performance predictive solver preview should be skipped

  • skip_tongue_solve (bool): [Read-Write] Flag indicating if tongue solving should be skipped

  • solve_type (SolveType): [Read-Write] Enum to indicate which type of solve to perform

  • start_frame_to_process (uint32): [Read-Write] The frame to start processing from

  • timecode_alignment (TimecodeAlignment): [Read-Write] Timecode alignment type

  • user_excluded_frames (Array[FrameRange]): [Read-Write] Frames that the user has identified as excluded from processing, e.g. parts of the footage where the face goes out of frame

  • visualization_mesh (SkeletalMesh): [Read-Write] Set a different Skeletal Mesh (e.g. MetaHuman head) for visualizing the final animation
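As noted above, these editor properties are accessed from Python through get_editor_property/set_editor_property. A minimal sketch of configuring a performance asset before processing; the asset paths here are hypothetical placeholders:

```python
import unreal

# Load an existing MetaHuman Performance asset (path is a placeholder).
performance = unreal.load_asset("/Game/MetaHumans/MyPerformance")

# Wire up the inputs the solve needs: footage and identity
# (these asset paths are likewise placeholders).
footage = unreal.load_asset("/Game/MetaHumans/Footage/MyTake")
identity = unreal.load_asset("/Game/MetaHumans/MyIdentity")
performance.set_editor_property("footage_capture_data", footage)
performance.set_editor_property("identity", identity)

# Restrict processing to a frame range and skip diagnostics.
performance.set_editor_property("start_frame_to_process", 10)
performance.set_editor_property("end_frame_to_process", 120)
performance.set_editor_property("skip_diagnostics", True)
```

This script only runs inside the Unreal Editor's Python environment, where the unreal module is available.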

can_export_animation() bool

Can Export Animation

Return type:

bool

can_process() bool

Can Process

Return type:

bool

cancel_pipeline() None

Cancel Pipeline

contains_animation_data() bool

Returns true if there is at least one animation frame with valid data, false otherwise

Return type:

bool

diagnostics_indicates_processing_issue() Text or None

Diagnostics Indicates Processing Issue

Returns:

out_diagnostics_warning_message (Text):

Return type:

Text or None

export_animation(export_range) None

(DEPRECATED: use UMetaHumanPerformanceExportUtils::ExportAnimation instead) Export an animation sequence targeting the face skeleton. This will ask the user where to place the new animation sequence.
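Since this method is deprecated, new scripts should go through the export utilities class named in the deprecation note. The sketch below assumes the standard Python binding convention for that C++ class; the exact method signature varies by engine version, so verify it against the generated stubs for your build:

```python
import unreal

# Load the performance to export (path is a hypothetical placeholder).
performance = unreal.load_asset("/Game/MetaHumans/MyPerformance")

# The deprecation note points at UMetaHumanPerformanceExportUtils::ExportAnimation;
# in Python that class is conventionally exposed as
# unreal.MetaHumanPerformanceExportUtils. Treat this call as a placeholder and
# check its signature in your engine version before relying on it.
if performance.can_export_animation():
    unreal.MetaHumanPerformanceExportUtils.export_animation(performance)
```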

Parameters:

export_range (PerformanceExportRange)

get_animation_data(start_frame_number=0, end_frame_number=-1) Array[FrameAnimationData]

The caller is responsible for ensuring the returned data will fit into a 32-bit TArray.

Parameters:
  • start_frame_number (int32)

  • end_frame_number (int32)

Return type:

Array[FrameAnimationData]
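A short usage sketch for fetching the solved animation data after processing. Passing the default end_frame_number=-1 appears to request everything from start_frame_number onward; the asset path is a hypothetical placeholder:

```python
import unreal

# Load a processed performance asset (path is a placeholder).
performance = unreal.load_asset("/Game/MetaHumans/MyPerformance")

if performance.contains_animation_data():
    # Defaults fetch all frames: start_frame_number=0, end_frame_number=-1.
    frames = performance.get_animation_data()
    unreal.log(f"Fetched {len(frames)} frames of animation data")
```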

get_number_of_processed_frames() int32

Get Number Of Processed Frames

Return type:

int32

is_processing() bool

Is Processing

Return type:

bool

property on_processing_finished_dynamic: OnProcessingFinishedDynamic

[Read-Write] Dynamic delegate called when the pipeline finishes running

Type:

(OnProcessingFinishedDynamic)
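Dynamic multicast delegates are bound from Python with add_callable. A minimal sketch; the callback is assumed to take no arguments here, so check the delegate's signature in your engine version, and the asset path is a hypothetical placeholder:

```python
import unreal

# Load the performance asset to monitor (path is a placeholder).
performance = unreal.load_asset("/Game/MetaHumans/MyPerformance")

def on_finished():
    # Called when the processing pipeline finishes running.
    unreal.log("MetaHuman Performance pipeline finished")

# Bind the Python callable to the dynamic delegate.
performance.on_processing_finished_dynamic.add_callable(on_finished)
```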

set_blocking_processing(blocking_processing) None

Set Blocking Processing

Parameters:

blocking_processing (bool)

property skip_diagnostics: bool

[Read-Write] Flag indicating whether processing diagnostics should be calculated during processing

Type:

(bool)

start_pipeline(is_scripted_processing=True) StartPipelineErrorType

Start the processing pipeline.

Parameters:

is_scripted_processing (bool)

Return type:

StartPipelineErrorType
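The methods above combine into a simple scripted processing flow: check can_process, opt into blocking mode so the script waits for completion, then start the pipeline. A sketch with a hypothetical asset path; the success value of StartPipelineErrorType is assumed to be NONE, so verify the enum's members in your engine version:

```python
import unreal

# Load a fully configured performance asset (path is a placeholder).
performance = unreal.load_asset("/Game/MetaHumans/MyPerformance")

if performance.can_process():
    # Block until the pipeline completes so the script can continue
    # synchronously once processing is done.
    performance.set_blocking_processing(True)
    error = performance.start_pipeline(is_scripted_processing=True)
    # NONE as the success value is an assumption; check the enum members.
    if error == unreal.StartPipelineErrorType.NONE:
        count = performance.get_number_of_processed_frames()
        unreal.log(f"Processed {count} frames")
    else:
        unreal.log_error(f"Pipeline failed to start: {error}")
```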