FLearningAgentsImitationTrainerSettings
The configurable settings for a ULearningAgentsImitationTrainer.

FLearningAgentsImitationTrainerTrainingSettings
The configurable settings for the imitation training process.

FLearningAgentsRecord
A single recording of a series of observations and actions.

FLearningAgentsRecorderPathSettings
The path settings for the recorder.

FLearningAgentsTrainerGameSettings
The configurable game settings for a ULearningAgentsTrainer.

FLearningAgentsTrainerPathSettings
The path settings for the trainer.

FLearningAgentsTrainerSettings
The configurable settings for a ULearningAgentsTrainer.

FLearningAgentsTrainerTrainingSettings
The configurable settings for the reinforcement learning training process.

FLearningAgentsTrainingModule
The module implementation for Learning Agents training.

UConditionalCompletion
A simple boolean completion.

UConditionalReward
A simple conditional reward that gives some constant reward value when a condition is true.

UFloatReward
A simple float reward.

ULearningAgentsCompletion
The base class for completions. For functions in this file, we favor more verbose names such as "AddConditionalCompletion" over simply "Add" so that it is easy to find the correct function in Blueprints.

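As a rough illustration of that convention, a ULearningAgentsTrainer subclass might add and drive a UConditionalCompletion as sketched below. The class UMyAgentsTrainer, the OffTrackCompletion member, the IsAgentOffTrack helper, and the exact AddConditionalCompletion / SetConditionalCompletion signatures are all assumptions to verify against your engine version.

```cpp
// Sketch only: the verbose-name convention for completions, shown inside a
// hypothetical ULearningAgentsTrainer subclass. Exact signatures vary between
// engine versions; verify against LearningAgentsCompletion.h before use.
#include "MyAgentsTrainer.h"           // hypothetical subclass header
#include "LearningAgentsCompletion.h"

void UMyAgentsTrainer::SetupCompletions_Implementation()
{
    // "AddConditionalCompletion" rather than just "Add" keeps the node easy
    // to find in the Blueprint action menu. OffTrackCompletion is assumed to
    // be a UConditionalCompletion* UPROPERTY declared in the subclass header.
    OffTrackCompletion = ULearningAgentsCompletion::AddConditionalCompletion(
        this, TEXT("OffTrack"));
}

void UMyAgentsTrainer::SetCompletions_Implementation(const TArray<int32>& AgentIds)
{
    for (const int32 AgentId : AgentIds)
    {
        // IsAgentOffTrack is a hypothetical game-side check.
        OffTrackCompletion->SetConditionalCompletion(AgentId, IsAgentOffTrack(AgentId));
    }
}
```
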
ULearningAgentsImitationTrainer
The ULearningAgentsImitationTrainer enables imitation learning, i.e. learning from human/AI demonstrations.

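A minimal sketch of starting imitation training from a previously captured recording follows. The BeginTraining call and its parameter list are assumptions based on the settings and recording types in this index (some engine versions expose a RunTraining-style call instead), so check LearningAgentsImitationTrainer.h for the exact API.

```cpp
// Sketch only: starting imitation training from demonstration data. The owning
// actor, member names, and the BeginTraining signature are assumptions.
#include "LearningAgentsImitationTrainer.h"
#include "LearningAgentsRecording.h"

void AMyImitationManager::StartImitationTraining()   // hypothetical owning actor
{
    const FLearningAgentsImitationTrainerSettings TrainerSettings;          // defaults
    const FLearningAgentsImitationTrainerTrainingSettings TrainingSettings; // defaults

    // Policy and Recording are assumed to have been set up elsewhere, e.g. a
    // recording captured with a ULearningAgentsRecorder and saved as an asset.
    ImitationTrainer->BeginTraining(Policy, Recording, TrainerSettings, TrainingSettings);
}
```
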
ULearningAgentsRecorder
A component that can be used to create recordings of training data for imitation learning.

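The recorder is typically wrapped around a demonstration session; a rough sketch under assumed BeginRecording / AddExperience / EndRecording names (verify against LearningAgentsRecorder.h) is shown below.

```cpp
// Sketch only: capturing a human/AI demonstration with a ULearningAgentsRecorder.
// The owning actor, the Recorder member, and the recorder calls are assumptions.
#include "LearningAgentsRecorder.h"

void AMyDemoManager::BeginDemo()
{
    Recorder->BeginRecording();            // start a new set of records
}

void AMyDemoManager::StepDemo()
{
    // Called each step while the demonstrator is in control: appends the
    // current observations and actions to the in-progress record.
    Recorder->AddExperience();
}

void AMyDemoManager::EndDemo()
{
    // Finalizes the record so it can be stored in a ULearningAgentsRecording asset.
    Recorder->EndRecording();
}
```
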
ULearningAgentsRecording
A data asset representing an array of records.

ULearningAgentsReward
The base class for rewards. For functions in this file, we favor more verbose names such as "AddFloatReward" over simply "Add" so that it is easy to find the correct function in Blueprints.

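The reward side mirrors the completion example above; the sketch below uses the UScalarVelocityReward and UFloatReward types from this index, with every signature and helper name treated as an assumption to check against LearningAgentsReward.h.

```cpp
// Sketch only: the verbose-name convention for rewards inside a hypothetical
// ULearningAgentsTrainer subclass. SpeedReward / BonusReward are assumed to be
// UPROPERTY members declared in the subclass header.
#include "MyAgentsTrainer.h"        // hypothetical subclass header
#include "LearningAgentsReward.h"

void UMyAgentsTrainer::SetupRewards_Implementation()
{
    // Verbose factory names ("AddScalarVelocityReward", "AddFloatReward") keep
    // the nodes easy to find in Blueprints.
    SpeedReward = ULearningAgentsReward::AddScalarVelocityReward(this, TEXT("Speed"));
    BonusReward = ULearningAgentsReward::AddFloatReward(this, TEXT("Bonus"));
}

void UMyAgentsTrainer::SetRewards_Implementation(const TArray<int32>& AgentIds)
{
    for (const int32 AgentId : AgentIds)
    {
        // GetAgentSpeed / GetAgentBonus are hypothetical game-side helpers.
        SpeedReward->SetScalarVelocityReward(AgentId, GetAgentSpeed(AgentId));
        BonusReward->SetFloatReward(AgentId, GetAgentBonus(AgentId));
    }
}
```
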
ULearningAgentsTrainer
The ULearningAgentsTrainer is the core class for reinforcement learning training.

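In use, the trainer is usually driven once per tick, passing in the settings structs listed above. The sketch below assumes a RunTraining entry point with that parameter list, which should be confirmed against LearningAgentsTrainer.h for your engine version.

```cpp
// Sketch only: driving reinforcement learning training each frame. The owning
// actor, the Trainer member, and the RunTraining parameter list are assumptions.
#include "LearningAgentsTrainer.h"

void AMyTrainingManager::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const FLearningAgentsTrainerTrainingSettings TrainingSettings; // training hyperparameters
    const FLearningAgentsTrainerGameSettings GameSettings;         // game-side settings during training
    const FLearningAgentsTrainerPathSettings PathSettings;         // where intermediate files are written

    // Evaluates rewards and completions, gathers experience, and runs a
    // training iteration once enough experience has been collected.
    Trainer->RunTraining(TrainingSettings, GameSettings, PathSettings);
}
```
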
ULocalDirectionalVelocityReward
A reward for maximizing velocity along a given local axis.

UPlanarPositionDifferenceCompletion
A completion triggered when two positions differ by more than some threshold in a plane, e.g. when the agent gets too far from a starting position.

UPlanarPositionDifferencePenalty
A penalty for being far from a goal position in a plane.

UPlanarPositionSimilarityCompletion
A completion triggered when two positions are within some threshold of each other in a plane, e.g. when the agent gets close to a position.

UPositionArraySimilarityReward
A reward for minimizing the distances between corresponding positions in the given arrays.

UScalarVelocityReward
A reward for maximizing speed.

UTimeElapsedCompletion
A completion triggered when a given amount of time has elapsed.