unreal.MetaHumanPerformance¶
- class unreal.MetaHumanPerformance(outer: Object | None = None, name: Name | str = 'None')¶
Bases: Object
MetaHuman Performance Asset
Produces an Animation Sequence for the MetaHuman Control Rig by tracking facial expressions in video footage from a Capture Source (imported through Capture Manager), using a SkeletalMesh obtained through the MetaHuman Identity asset toolkit.
C++ Source:
Plugin: MetaHuman
Module: MetaHumanPerformance
File: MetaHumanPerformance.h
Editor Properties: (see get_editor_property/set_editor_property)
audio (SoundWave): [Read-Write] Audio of the performance, used with the Audio data input type
audio_channel_index (uint32): [Read-Write] The audio channel used to solve into animation
audio_driven_animation_models (AudioDrivenAnimationModels): [Read-Write] The models to be used by audio-driven animation
audio_driven_animation_output_controls (AudioDrivenAnimationOutputControls): [Read-Write]
audio_driven_animation_solve_overrides (AudioDrivenAnimationSolveOverrides): [Read-Write] Settings to change the behavior of the audio-driven animation solve
auto_choose_head_movement_reference_frame (bool): [Read-Write] If true, automatically pick the most front-facing frame as the reference frame for control-rig head movement calculation; defaults to true. Changing this will cause a re-bake of Control Rig data
camera (str): [Read-Write] Name of the camera (view) in the footage capture data calibration to use for display and processing
capture_data_config (str): [Read-Only] Display name of the config to use with the capture data
control_rig_class (type(Class)): [Read-Write]
default_solver (MetaHumanFaceAnimationSolver): [Read-Write] Solver parameters for processing the footage
default_tracker (MetaHumanFaceContourTrackerAsset): [Read-Write] Tracker parameters for processing the footage
downmix_channels (bool): [Read-Write] Downmix multi-channel audio before solving into animation
end_frame_to_process (uint32): [Read-Write] The frame to end processing at
focal_length (float): [Read-Only] The estimated focal length of the footage
footage_capture_data (FootageCaptureData): [Read-Write] Real-world footage data associated with the performance
generate_blinks (bool): [Read-Write] Flag indicating whether blinks should be generated
head_movement_mode (PerformanceHeadMovementMode): [Read-Write] Head movement type
head_movement_reference_frame (uint32): [Read-Write] Which frame to use as the reference frame for the head pose (if Auto Choose Head Movement Reference Frame is not selected); defaults to the first processed frame. Changing this will cause a re-bake of Control Rig data
head_stabilization (bool): [Read-Write] Reduces noise in head position and orientation
identity (MetaHumanIdentity): [Read-Write] A digital double of the person performing in the footage, captured in the MetaHuman Identity asset
input_type (DataInputType): [Read-Write] Enum indicating which data input type is being used for the performance
maximum_scale_difference_from_identity (float): [Read-Write] The maximum allowed percentage difference in estimated head scale between Identity and Performance. Above this value a diagnostic warning will be flagged
maximum_stereo_baseline_difference_from_identity (float): [Read-Write] The maximum allowed percentage difference in stereo baseline between Identity and Performance CaptureData camera calibrations. Above this value a diagnostic warning will be flagged
minimum_depth_map_face_coverage (float): [Read-Write] The minimum percentage of the face region which should have valid depth-map pixels. Below this value a diagnostic warning will be flagged
minimum_depth_map_face_width (float): [Read-Write] The minimum required width of the face region on the depth map, in pixels. Below this value a diagnostic warning will be flagged
mono_smoothing_params (MetaHumanRealtimeSmoothingParams): [Read-Write] Smoothing parameters to use for mono video processing
neutral_pose_calibration_alpha (double): [Read-Write] Neutral pose calibration alpha parameter; defaults to 1. Changing this will cause a re-bake of Control Rig data
neutral_pose_calibration_curves (Array[Name]): [Read-Write] Set of curve names to apply neutral pose calibration to. Changing this will cause a re-bake of Control Rig data
neutral_pose_calibration_enabled (bool): [Read-Write] If true, perform neutral pose calibration for the mono solve; defaults to false. Changing this will cause a re-bake of Control Rig data
neutral_pose_calibration_frame (uint32): [Read-Write] Which frame to use as the neutral pose calibration frame for the mono solve (if Enable Neutral Pose Calibration is selected); defaults to the first processed frame. Changing this will cause a re-bake of Control Rig data
on_processing_finished_dynamic (OnProcessingFinishedDynamic): [Read-Write] Dynamic delegate called when the pipeline finishes running
processing_excluded_frames (Array[FrameRange]): [Read-Only] Frames that processing has identified as producing bad results and that should not be exported
realtime_audio (bool): [Read-Write] Flag indicating whether the realtime audio solve should be used
realtime_audio_lookahead (int32): [Read-Write] The amount of time, in milliseconds, that the audio solver looks ahead into the audio stream to produce the current frame of animation. A larger value produces higher-quality animation at the cost of increased latency
realtime_audio_mood (AudioDrivenAnimationMood): [Read-Write] The mood of the realtime audio-driven animation solve
realtime_audio_mood_intensity (float): [Read-Write] The mood intensity of the realtime audio-driven animation solve
show_frames_as_they_are_processed (bool): [Read-Write] Flag indicating whether the editor updates the current frame to show results as frames are processed
skip_diagnostics (bool): [Read-Write] Flag indicating whether processing diagnostics should be calculated during processing
skip_filtering (bool): [Read-Write] Flag indicating whether filtering should be skipped
skip_per_vertex_solve (bool): [Read-Write] Flag indicating whether the per-vertex solve (which is slow to process but gives slightly better animation results) should be skipped
skip_preview (bool): [Read-Write] Flag indicating whether the performance predictive solver preview should be skipped
skip_tongue_solve (bool): [Read-Write] Flag indicating whether tongue solving should be skipped
solve_type (SolveType): [Read-Write] Enum indicating which type of solve to perform
start_frame_to_process (uint32): [Read-Write] The frame to start processing from
timecode_alignment (TimecodeAlignment): [Read-Write] Timecode alignment type
user_excluded_frames (Array[FrameRange]): [Read-Write] Frames the user has identified to be excluded from processing, e.g. parts of the footage where the face goes out of frame
visualization_mesh (SkeletalMesh): [Read-Write] Set a different Skeletal Mesh (e.g. a MetaHuman head) for visualizing the final animation
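As a sketch of how the editor properties above are driven from script (the asset path is a hypothetical example, and the `unreal` module is only available inside the Unreal Editor's Python environment):

```python
try:
    import unreal  # only importable inside the Unreal Editor
except ImportError:
    unreal = None

def configure_performance(asset_path):
    """Load a MetaHuman Performance asset and set a few solve options
    via the generic get_editor_property/set_editor_property accessors.

    Returns the asset, or None when run outside the Unreal Editor.
    The asset path is a hypothetical example.
    """
    if unreal is None:
        return None
    performance = unreal.load_asset(asset_path)
    # Each name below matches an Editor Property listed in this section.
    performance.set_editor_property("start_frame_to_process", 0)
    performance.set_editor_property("end_frame_to_process", 200)
    performance.set_editor_property("skip_diagnostics", False)
    performance.set_editor_property("generate_blinks", True)
    return performance

# Outside the editor this call simply returns None.
configure_performance("/Game/MetaHumans/MyPerformance")
```

Read-only properties such as `focal_length` or `processing_excluded_frames` can still be inspected with `get_editor_property`, but attempting to set them will raise an error.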
- contains_animation_data() bool¶
Returns true if there is at least one animation frame with valid data, false otherwise
- Return type:
bool
- diagnostics_indicates_processing_issue() Text or None¶
Returns a diagnostics warning message if processing diagnostics indicate an issue, or None otherwise
- Returns:
out_diagnostics_warning_message (Text):
- Return type:
Text or None
- export_animation(export_range) None¶
(DEPRECATED: use UMetaHumanPerformanceExportUtils::ExportAnimation instead) Export an animation sequence targeting the face skeleton. This will ask the user where to place the new animation sequence
- Parameters:
export_range (PerformanceExportRange)
- get_animation_data(start_frame_number=0, end_frame_number=-1) Array[FrameAnimationData]¶
The caller is responsible for ensuring the returned data will fit into a 32-bit TArray
- Parameters:
start_frame_number (int32)
end_frame_number (int32)
- Return type:
Array[FrameAnimationData]
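Given the 32-bit TArray caveat above, long shots can be fetched in fixed-size chunks rather than in one call. A minimal sketch (the `unreal` module and a processed performance asset are assumptions; the chunk size is arbitrary):

```python
try:
    import unreal  # only importable inside the Unreal Editor
except ImportError:
    unreal = None

def chunk_ranges(total_frames, chunk_size):
    """Pure-Python helper: split [0, total_frames) into half-open
    (start, end) ranges of at most chunk_size frames each."""
    return [(start, min(start + chunk_size, total_frames))
            for start in range(0, total_frames, chunk_size)]

def fetch_animation_in_chunks(performance, chunk_size=1000):
    """Yield Array[FrameAnimationData] results one chunk at a time so no
    single get_animation_data call returns more than chunk_size frames."""
    if unreal is None:
        return
    total = performance.get_number_of_processed_frames()
    for start, end in chunk_ranges(total, chunk_size):
        yield performance.get_animation_data(start, end)
```

The default `end_frame_number=-1` in the signature reads as "to the end of the processed data"; passing explicit ranges as above keeps each call bounded.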
- get_number_of_processed_frames() int32¶
Get Number Of Processed Frames
- Return type:
int32
- property on_processing_finished_dynamic: OnProcessingFinishedDynamic¶
[Read-Write] Dynamic delegate called when the pipeline finishes running
- Type:
(OnProcessingFinishedDynamic)
- set_blocking_processing(blocking_processing) None¶
Set Blocking Processing
- Parameters:
blocking_processing (bool)
- property skip_diagnostics: bool¶
[Read-Write] Flag indicating whether processing diagnostics should be calculated during processing
- Type:
(bool)
- start_pipeline(is_scripted_processing=True) StartPipelineErrorType¶
Start the processing pipeline
- Parameters:
is_scripted_processing (bool)
- Return type:
StartPipelineErrorType
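Putting the pipeline members together, a blocking scripted run can combine `set_blocking_processing`, `start_pipeline`, and the post-run queries documented above. This is a sketch, not the one canonical recipe: treating `StartPipelineErrorType.NONE` as the success value is an assumption to verify against the enum in your engine version, and the `unreal` module is only available inside the Unreal Editor.

```python
try:
    import unreal  # only importable inside the Unreal Editor
except ImportError:
    unreal = None

def process_blocking(performance):
    """Run the performance processing pipeline synchronously.

    Returns the StartPipelineErrorType result, or None when run outside
    the Unreal Editor.
    """
    if unreal is None:
        return None
    # Block the caller until the pipeline has finished running.
    performance.set_blocking_processing(True)
    error = performance.start_pipeline(is_scripted_processing=True)
    # NONE as the success value is an assumption; check your engine's enum.
    if error != unreal.StartPipelineErrorType.NONE:
        unreal.log_warning("Pipeline failed to start: {}".format(error))
        return error
    # With a blocking run the processed frames are available immediately.
    unreal.log("Processed {} frames".format(
        performance.get_number_of_processed_frames()))
    # Surface any diagnostics warning raised during processing.
    warning = performance.diagnostics_indicates_processing_issue()
    if warning is not None:
        unreal.log_warning(str(warning))
    return error
```

For non-blocking runs, bind a callback to `on_processing_finished_dynamic` instead of blocking, and do the post-run queries in the callback.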