The configurable settings for the training process.
| | |
|---|---|
| Name | FLearningAgentsImitationTrainerTrainingSettings |
| Type | struct |
| Header File | /Engine/Plugins/Experimental/LearningAgents/Source/LearningAgentsTraining/Public/LearningAgentsImitationTrainer.h |
| Include Path | #include "LearningAgentsImitationTrainer.h" |
Syntax

```cpp
USTRUCT(BlueprintType, Category="LearningAgents")
struct FLearningAgentsImitationTrainerTrainingSettings
```
Variables
Public
| Name | Type | Remarks | Include Path | Unreal Specifiers |
|---|---|---|---|---|
| ActionEntropyWeight | float | Weighting used for the entropy bonus. | LearningAgentsImitationTrainer.h | |
| ActionRegularizationWeight | float | Weight used to regularize actions. | LearningAgentsImitationTrainer.h | |
| BatchCountPerEvaluation | int32 | The number of batches to perform evaluation on. Batches are chosen randomly for each evaluation. | LearningAgentsImitationTrainer.h | |
| BatchSize | uint32 | Batch size to use for training. | LearningAgentsImitationTrainer.h | |
| bRunEvaluation | bool | Whether evaluation should run during the training process. Currently not used in Python. | LearningAgentsImitationTrainer.h | |
| bSaveSnapshots | bool | If true, snapshots of the trained networks will be emitted to the intermediate directory. | LearningAgentsImitationTrainer.h | |
| bUseMLflow | bool | If true, MLflow will be used for experiment tracking. | LearningAgentsImitationTrainer.h | |
| bUseTensorboard | bool | If true, TensorBoard logs will be emitted to the intermediate directory. | LearningAgentsImitationTrainer.h | |
| Device | ELearningAgentsTrainingDevice | The device to train on. | LearningAgentsImitationTrainer.h | |
| EvaluationFrequency | int32 | The number of training iteration loops between evaluation runs. Currently not used in Python. | LearningAgentsImitationTrainer.h | |
| IterationsPerSnapshot | int32 | If bSaveSnapshots is true, a snapshot is saved once every this many iterations. | LearningAgentsImitationTrainer.h | |
| LearningRate | float | Learning rate of the policy network. Typical values are between 0.001 and 0.0001. | LearningAgentsImitationTrainer.h | |
| LearningRateDecay | float | Amount by which to multiply the learning rate each time it decays. | LearningAgentsImitationTrainer.h | |
| LearningRateDecayStepSize | int32 | The number of iterations to train before updating the learning rate. | LearningAgentsImitationTrainer.h | |
| MLflowTrackingUri | FString | The URI of the MLflow Tracking Server to log to. | LearningAgentsImitationTrainer.h | |
| NumberOfIterations | int32 | The number of iterations to run before ending training. | LearningAgentsImitationTrainer.h | |
| ObservationNoiseScale | float | A multiplicative scaling factor on the noise perturbations added to observations. | LearningAgentsImitationTrainer.h | |
| RandomSeed | int32 | The seed used for any random sampling the trainer performs, e.g. for weight initialization. | LearningAgentsImitationTrainer.h | |
| TrainEvalDatasetSplit | float | The fraction of the data used for evaluation. Currently not used in Python. | LearningAgentsImitationTrainer.h | |
| WeightDecay | float | Amount of weight decay to apply to the network. | LearningAgentsImitationTrainer.h | |
| Window | uint32 | The number of consecutive steps of observations and actions over which to train the policy. | LearningAgentsImitationTrainer.h | |
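The interaction of LearningRate, LearningRateDecay, and LearningRateDecayStepSize can be sketched as a standard step-decay schedule. This is an assumption about the trainer's behavior, not something this page confirms, and `EffectiveLearningRate` is a hypothetical helper, not part of the Learning Agents API:

```cpp
#include <cmath>

// Hypothetical sketch: the effective learning rate after a given iteration,
// assuming the trainer multiplies LearningRate by LearningRateDecay once
// every LearningRateDecayStepSize iterations (a step-decay schedule).
float EffectiveLearningRate(float LearningRate, float LearningRateDecay,
                            int LearningRateDecayStepSize, int Iteration)
{
    // Number of decay steps completed so far.
    const int NumDecays = Iteration / LearningRateDecayStepSize;
    return LearningRate * std::pow(LearningRateDecay, static_cast<float>(NumDecays));
}
```

For example, with LearningRate = 0.001, LearningRateDecay = 0.99, and LearningRateDecayStepSize = 1000, the rate after 5000 iterations under this assumed schedule would be 0.001 × 0.99⁵ ≈ 0.00095.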
Functions
Public
| Name | Remarks | Include Path | Unreal Specifiers |
|---|---|---|---|
| TSharedRef< FJsonObject > AsJsonConfig() | | LearningAgentsImitationTrainer.h | |
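As a hedged illustration, the struct's fields from the table above might be populated like this inside an Unreal project with the Learning Agents plugin enabled. The specific values are illustrative only, and the `GPU` enum value is an assumption about ELearningAgentsTrainingDevice:

```cpp
#include "LearningAgentsImitationTrainer.h"

// Sketch only: builds a settings struct using fields documented above.
FLearningAgentsImitationTrainerTrainingSettings MakeImitationSettings()
{
    FLearningAgentsImitationTrainerTrainingSettings Settings;
    Settings.LearningRate = 0.001f;            // within the documented 0.0001-0.001 range
    Settings.LearningRateDecay = 0.99f;        // multiplier applied at each decay step
    Settings.LearningRateDecayStepSize = 1000; // iterations between decays
    Settings.NumberOfIterations = 100000;      // iterations before training ends
    Settings.bUseTensorboard = true;           // emit logs to the intermediate directory
    Settings.Device = ELearningAgentsTrainingDevice::GPU; // assumption: enum exposes a GPU value
    return Settings;
}
```

Since this is engine-dependent configuration code, it only compiles inside an Unreal module that lists the LearningAgentsTraining plugin as a dependency.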