Accessor method for validationSet
Returns a DataPipe2 which calculates the energy of data: T. See: energy below.
Returns a DataPipe which calculates the gradient of the energy E(.) of data: T with respect to the model hyper-parameters. See: gradEnergy below.
Underlying covariance function of the Gaussian Processes.
A Map which stores the current state of the system.
Convert from the underlying data structure to Seq[I], where I is the index set of the GP.
Convert from the underlying data structure to Seq[(I, Y)], where I is the index set of the GP and Y is the value/label type.
Calculates the energy of the configuration, required for global optimization routines. Defaults to the base implementation in io.github.mandar2812.dynaml.optimization.GloballyOptimizable when a validation set has not been specified through the validationSet variable.
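For a zero-mean GP, the standard choice of configuration energy is the negative log marginal likelihood of the training targets. A minimal Breeze-based sketch of that calculation (the function name and signature here are illustrative, not the actual DynaML implementation):

```scala
import breeze.linalg.{DenseMatrix, DenseVector, cholesky, diag, sum}
import breeze.numerics.log

// Hypothetical sketch: E(h) = 0.5 * y^T K^-1 y + 0.5 * log|K| + (n/2) log 2*pi,
// where K is the training kernel + noise matrix built from the
// hyper-parameters h, and y are the (standardized) training targets.
def energy(k: DenseMatrix[Double], y: DenseVector[Double]): Double = {
  val l     = cholesky(k)          // K = L * L^T
  val alpha = l.t \ (l \ y)        // alpha = K^-1 y via two triangular solves
  0.5 * (y dot alpha) +
    sum(log(diag(l))) +            // 0.5 * log|K| = sum of log L_ii
    0.5 * y.length * math.log(2 * math.Pi)
}
```

Using the Cholesky factor both for the solve and for the log-determinant avoids forming K^-1 explicitly and keeps the computation numerically stable.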
The value of the hyper-parameters in the configuration space
Optional parameters about configuration
Configuration Energy E(h)
The training data
Calculates the gradient of the energy of the configuration and subtracts this from the current value of h to yield a new hyper-parameter configuration. Override this function if you aim to implement a gradient-based hyper-parameter optimization routine like ML-II.
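The update described above can be sketched in a few lines of plain Scala; the function name and the learning-rate parameter are illustrative assumptions, not part of the actual API:

```scala
// Hypothetical sketch: subtract the energy gradient (a Map keyed by
// hyper-parameter name) from the current configuration h to obtain
// the new configuration, as in one step of ML-II gradient descent.
def gradDescentStep(
    h: Map[String, Double],
    gradE: Map[String, Double],
    learningRate: Double = 1.0): Map[String, Double] =
  h.map { case (name, value) =>
    name -> (value - learningRate * gradE.getOrElse(name, 0.0))
  }
```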
The value of the hyper-parameters in the configuration space
Gradient of the objective function (marginal likelihood) as a Map
Stores the names of the hyper-parameters
The GP is taken to be zero mean, or centered. This is ensured by standardizing the data before it is used for further processing.
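The standardization step mentioned above amounts to rescaling the targets to zero mean and unit variance. A minimal sketch (the helper name is hypothetical):

```scala
// Hypothetical sketch: center and scale the targets so the
// zero-mean GP assumption holds; return (mean, sd) as well, since
// they are needed later to de-normalize predictions.
def standardize(ys: Seq[Double]): (Seq[Double], Double, Double) = {
  val mean     = ys.sum / ys.length
  val variance = ys.map(y => math.pow(y - mean, 2)).sum / ys.length
  val sd       = math.sqrt(variance)
  (ys.map(y => (y - mean) / sd), mean, sd)
}
```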
Cache the training kernel and noise matrices for fast access in future predictions.
Predict the value of the target variable given a point.
Draw three predictions from the posterior predictive distribution
Calculates posterior predictive distribution for a particular set of test data points.
A Sequence or Sequence-like data structure storing the values of the input patterns.
Set the model "state" which contains values of its hyper-parameters with respect to the covariance and noise kernels.
Returns a prediction with error bars for a test set of indexes and labels. (Index, Actual Value, Prediction, Lower Bar, Higher Bar)
Forget the cached kernel & noise matrices.
Setting a validation set is optional, in case one wants to use the joint marginal likelihood of the training and validation data as the objective function for hyper-parameter optimization, while retaining just the training data set for calculating the predictiveDistribution during final deployment.
Set the validation data, optionally appending it to the existing validation data.
data
Defaults to false
Assigning a value to the processTargets data pipe can be useful in cases where we need to perform operations such as de-normalizing the predicted and actual targets to their original scales.
scheduled to be removed by DynaML 2.x
If one uses a non-empty validation set, then the user can set a custom function of the validation predictions and targets as the objective function for the hyper-parameter optimization routine.
Currently this defaults to RMSE calculated on the validation data.
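The default validation objective mentioned above can be sketched directly in plain Scala; the function name is an illustrative assumption:

```scala
// Hypothetical sketch of the default objective: root mean squared
// error between validation predictions and their actual targets.
def rmse(scoresAndLabels: Seq[(Double, Double)]): Double =
  math.sqrt(
    scoresAndLabels.map { case (pred, actual) =>
      math.pow(pred - actual, 2)
    }.sum / scoresAndLabels.length)
```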
scheduled to be removed by DynaML 2.x