unreal.MetaHumanSpeechProcessingSettings
- class unreal.MetaHumanSpeechProcessingSettings(generate_blinks: bool = False, mix_audio_channels: bool = False, audio_channel_index: int = 0, solve_overrides: AudioDrivenAnimationSolveOverrides = Ellipsis, enable_head_movement: bool = False)
Bases:
StructBase
MetaHuman Speech Processing Settings
C++ Source:
Plugin: MetaHuman
Module: MetaHumanBatchProcessor
File: MetaHumanSpeechProcessingSettings.h
Editor Properties: (see get_editor_property/set_editor_property)
audio_channel_index (int32): [Read-Write] Audio channel used for processing
enable_head_movement (bool): [Read-Write]
generate_blinks (bool): [Read-Write] Option to generate blinks
mix_audio_channels (bool): [Read-Write] Option to down mix audio channels into single channel before processing
output_controls (AudioDrivenAnimationOutputControls): [Read-Write] Process the full face or a particular subset of controls.
solve_overrides (AudioDrivenAnimationSolveOverrides): [Read-Write] Overrides for the solve.
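A minimal usage sketch (not from the source docs): the struct can be constructed with keyword arguments matching the editor properties above, assuming the MetaHuman plugin and its MetaHumanBatchProcessor module are enabled so the class is exposed to Python.

```python
import unreal

# Minimal sketch, assuming the MetaHuman plugin (MetaHumanBatchProcessor module)
# is enabled in the project so this struct is available.
settings = unreal.MetaHumanSpeechProcessingSettings(
    generate_blinks=True,       # generate blinks during the solve
    mix_audio_channels=True,    # down-mix all audio channels before processing
    audio_channel_index=0,      # audio channel used for processing
    enable_head_movement=False,
)
```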
- property mix_audio_channels: bool
[Read-Write] Option to down mix audio channels into single channel before processing
- Type:
(bool)
- property solve_overrides: AudioDrivenAnimationSolveOverrides
[Read-Write] Overrides for the solve.
- Type:
(AudioDrivenAnimationSolveOverrides)
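The properties can also be read and written through the generic get_editor_property/set_editor_property accessors inherited from StructBase; a hedged sketch, again assuming the MetaHuman plugin is enabled:

```python
import unreal

# Sketch of the get_editor_property / set_editor_property accessors
# inherited from StructBase (assumes the MetaHuman plugin is enabled).
settings = unreal.MetaHumanSpeechProcessingSettings()

settings.set_editor_property("mix_audio_channels", True)
settings.set_editor_property("generate_blinks", True)

overrides = settings.get_editor_property("solve_overrides")
unreal.log(f"mix_audio_channels={settings.mix_audio_channels}, solve_overrides={overrides}")
```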