The covariance function/kernel of the GP model, expressed as a LocalScalarKernel instance
Measurement noise covariance of the GP model.
Training data set of generic type T
The number of training data instances.
A basis function expansion of the input features, expressed as a DataPipe.
A Gaussian prior on the basis function trend coefficients.
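The explicit basis-function trend above can be sketched outside the library; this minimal Python example (the library itself is Scala) assumes a polynomial basis and an illustrative prior mean for the trend coefficients:

```python
def phi(x):
    # Polynomial basis functions for a scalar input feature (an assumption;
    # the actual basis is whatever DataPipe the model is built with).
    return [1.0, x, x * x]

# Illustrative mean of the Gaussian prior on the trend coefficients beta.
beta_prior_mean = [0.5, -1.0, 0.25]

def trend(x):
    # Prior mean of the trend function: m(x) = phi(x) . E[beta]
    return sum(b * f for b, f in zip(beta_prior_mean, phi(x)))
```

The posterior over the coefficients would then be obtained jointly with the GP posterior; only the prior mean is used here.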
Convert from the underlying data structure to Seq[(I, Y)] where I is the index set of the GP and Y is the value/label type.
Returns a DataPipe2 which calculates the energy of data: T. See: energy below.
Returns a DataPipe which calculates the gradient of the energy, E(.) of data: T with respect to the model hyper-parameters. See: gradEnergy below.
Underlying covariance function of the Gaussian Process.
A Map which stores the current state of the system.
Convert from the underlying data structure to Seq[I] where I is the index set of the GP
Calculates the energy of the configuration. Most global optimization algorithms aim to find an approximate value of the hyper-parameters for which this function is minimized.
The value of the hyper-parameters in the configuration space
Optional parameters of the configuration.
Configuration energy E(h). In this particular case E(h) = -log p(Y|X, h), the negative log marginal likelihood.
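For a zero-mean GP the energy expands to E(h) = ½ yᵀ(K + σ²I)⁻¹y + ½ log|K + σ²I| + (n/2) log 2π. A minimal pure-Python sketch of this formula on two training points (the library itself is Scala); the RBF kernel, data, and hyper-parameter values are illustrative assumptions:

```python
import math

def rbf(x1, x2, length_scale):
    # Squared-exponential (RBF) kernel, an assumed covariance choice.
    return math.exp(-((x1 - x2) ** 2) / (2.0 * length_scale ** 2))

def energy(xs, ys, length_scale, noise):
    # Noise-augmented training covariance K + sigma^2 I for two points.
    k11 = rbf(xs[0], xs[0], length_scale) + noise
    k12 = rbf(xs[0], xs[1], length_scale)
    k22 = rbf(xs[1], xs[1], length_scale) + noise
    det = k11 * k22 - k12 * k12
    # Closed-form inverse of the 2x2 matrix.
    inv = [[k22 / det, -k12 / det], [-k12 / det, k11 / det]]
    quad = sum(ys[i] * sum(inv[i][j] * ys[j] for j in range(2))
               for i in range(2))
    n = 2
    # E(h) = 1/2 y^T K^-1 y + 1/2 log|K| + (n/2) log(2 pi)
    return 0.5 * quad + 0.5 * math.log(det) + 0.5 * n * math.log(2 * math.pi)

e = energy([0.0, 1.0], [0.5, -0.5], length_scale=1.0, noise=0.1)
```

A real implementation would use a Cholesky factorization instead of an explicit inverse and determinant; the 2x2 closed form keeps the sketch dependency-free.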
The training data
Calculates the gradient of the energy of the configuration; subtracting it from the current value of h yields a new hyper-parameter configuration.
Override this method if you aim to implement a gradient-based hyper-parameter optimization routine such as ML-II (maximum likelihood, type II).
The value of the hyper-parameters in the configuration space
Gradient of the objective function (the negative log marginal likelihood) as a Map.
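The update described above amounts to one gradient-descent step in hyper-parameter space, h ← h − γ ∇E(h). A minimal Python sketch with an illustrative step size and hyper-parameter names:

```python
def gradient_step(state, gradient, step_size=0.1):
    # Both maps associate hyper-parameter names with values;
    # the step size gamma = 0.1 is an arbitrary illustrative choice.
    return {name: value - step_size * gradient[name]
            for name, value in state.items()}

state = {"lengthScale": 1.0, "noiseLevel": 0.5}
grad = {"lengthScale": 2.0, "noiseLevel": -1.0}
new_state = gradient_step(state, grad)
```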
Stores the names of the hyper-parameters
The GP is taken to be zero mean, or centered. This is ensured by standardizing the data before it is used for further processing.
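The standardization step can be sketched as follows; a minimal Python example (helper names are illustrative, not the library's API):

```python
def standardize(values):
    # Center and scale to zero mean and unit variance, returning the
    # statistics so predictions can be mapped back to the original scale.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # guard against a constant column
    return [(v - mean) / std for v in values], mean, std

scaled, mu, sigma = standardize([2.0, 4.0, 6.0])
```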
Cache the training kernel and noise matrices for fast access in future predictions.
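The caching behaviour can be sketched as follows; a toy Python example for a single training point (where the "matrices" degenerate to scalars), with method and field names chosen for illustration rather than taken from the library:

```python
class CachedGPMatrices:
    """Toy cache of the noise-augmented training kernel quantity."""

    def __init__(self, kernel, noise_level):
        self.kernel = kernel
        self.noise_level = noise_level
        self._train_matrix = None

    def persist(self, x):
        # Compute k(x, x) + noise once and keep it for later predictions.
        self._train_matrix = self.kernel(x, x) + self.noise_level
        return self._train_matrix

    def unpersist(self):
        # Forget the cached quantity; it is rebuilt on demand.
        self._train_matrix = None

gp = CachedGPMatrices(lambda a, b: 1.0 if a == b else 0.5, 0.1)
```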
Predict the value of the target variable given a point.
Draw three predictions from the posterior predictive distribution
Calculates posterior predictive distribution for a particular set of test data points.
A sequence or sequence-like data structure storing the values of the input patterns.
Set the model "state" which contains values of its hyper-parameters with respect to the covariance and noise kernels.
Returns a prediction with error bars for a test set of indexes and labels. (Index, Actual Value, Prediction, Lower Bar, Higher Bar)
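The standard GP posterior predictive behind these error bars, sketched in pure Python for a single training point (where the training "matrix" is a scalar); the kernel, noise level, and 1.96-sigma bars are illustrative assumptions:

```python
import math

def rbf(x1, x2):
    # Squared-exponential kernel with unit length scale (an assumption).
    return math.exp(-0.5 * (x1 - x2) ** 2)

def predict_with_bars(x_train, y_train, test_points, noise=0.1):
    k_tt = rbf(x_train, x_train) + noise  # scalar training matrix K + sigma^2
    results = []
    for x, actual in test_points:
        k_star = rbf(x_train, x)
        # Posterior mean and variance of the zero-mean GP at x.
        mean = k_star / k_tt * y_train
        var = rbf(x, x) - k_star * k_star / k_tt
        bar = 1.96 * math.sqrt(max(var, 0.0))
        # (Index, Actual Value, Prediction, Lower Bar, Higher Bar)
        results.append((x, actual, mean, mean - bar, mean + bar))
    return results

preds = predict_with_bars(0.0, 1.0, [(0.0, 1.0), (2.0, 0.2)])
```

Far from the training point the variance approaches the prior variance, so the bars widen, as expected of a GP posterior.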
Forget the cached kernel & noise matrices.
Basis Function Gaussian Process Regression
Single-Output Gaussian Process Regression Model. Performs GP/spline smoothing/regression with vector inputs and a single scalar output.
The model incorporates explicit basis functions which are used to parameterize the mean/trend function.
The data structure holding the training data.
The index set over which the Gaussian Process is defined.