Convert from the underlying data structure to Seq[(I, Y)] where I is the index set of the GP and Y is the value/label type.
Returns a DataPipe2 which calculates the energy of data: T. See energy below.
Underlying covariance function of the Gaussian Processes.
A Map which stores the current state of the system.
Convert from the underlying data structure to Seq[I] where I is the index set of the GP.
Calculates the energy of the configuration. In most global optimization algorithms, we aim to find an approximate value of the hyper-parameters such that this function is minimized.
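As a rough illustration of the idea, the sketch below minimizes a configuration energy E(h) over a small grid of hyper-parameter maps. The energy function and the grid-search helper here are hypothetical stand-ins, not the library's actual implementation.

```scala
// Hypothetical sketch: minimizing a configuration energy E(h) by grid search.
object EnergySketch {
  // E(h): lower is better. A simple quadratic bowl over two hyper-parameters,
  // standing in for the (negative log marginal likelihood style) energy.
  def energy(h: Map[String, Double]): Double =
    math.pow(h("bandwidth") - 1.5, 2) + math.pow(h("noise") - 0.1, 2)

  // Return the configuration in the grid with the lowest energy.
  def gridSearch(grid: Seq[Map[String, Double]]): Map[String, Double] =
    grid.minBy(energy)

  def main(args: Array[String]): Unit = {
    val grid = for {
      b <- Seq(0.5, 1.0, 1.5, 2.0)
      n <- Seq(0.05, 0.1, 0.2)
    } yield Map("bandwidth" -> b, "noise" -> n)

    println(gridSearch(grid)) // the grid point closest to (1.5, 0.1)
  }
}
```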
The value of the hyper-parameters in the configuration space
Optional parameters of the configuration
Configuration Energy E(h)
The training data
Stores the names of the hyper-parameters
Mean Function: Takes a member of the index set (input) and returns the mean of the distribution at that input.
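For a GP over a real index set, the mean function is simply a map from inputs to prior means. The sketch below is a hypothetical illustration (the names are not from the library); a zero mean is the common default, with a linear trend shown as an alternative.

```scala
// Hypothetical sketch of mean functions over a real (Double) index set:
// each maps a point of the index set to the prior mean of the GP there.
object MeanFunctionSketch {
  // Common default: zero prior mean everywhere.
  val zeroMean: Double => Double = _ => 0.0

  // Alternative: a linear trend as the prior mean.
  val linearTrendMean: Double => Double = x => 2.0 * x + 1.0
}
```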
Cache the training kernel and noise matrices for fast access in future predictions.
Predict the value of the target variable given a point.
Draw three predictions from the posterior predictive distribution: 1) Mean or MAP estimate Y 2) Y- : The lower error bar estimate (mean - sigma*stdDeviation) 3) Y+ : The upper error bar estimate (mean + sigma*stdDeviation).
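The three quantities follow directly from the posterior predictive mean and standard deviation at each test point. The helper below is a hypothetical sketch of that arithmetic, not the library's own method; sigma controls the width of the error bars (e.g. sigma = 1, 2, 3).

```scala
// Hypothetical sketch: forming (Y, Y-, Y+) from the posterior predictive
// mean and standard deviation at each test point.
object ErrorBarsSketch {
  def errorBars(
      means: Seq[Double],
      stdDevs: Seq[Double],
      sigma: Double): Seq[(Double, Double, Double)] =
    means.zip(stdDevs).map { case (m, s) =>
      (m, m - sigma * s, m + sigma * s) // (Y, Y-, Y+)
    }
}
```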
Calculates posterior predictive distribution for a particular set of test data points.
A Sequence or Sequence like data structure storing the values of the input patterns.
Set the model "state" which contains values of its hyper-parameters with respect to the covariance and noise kernels.
Returns a prediction with error bars for a test set of indexes and labels: (Index, Actual Value, Prediction, Lower Bar, Higher Bar).
Forget the cached kernel & noise matrices.
Implementation of the Extended Skew-Gaussian Process regression model. This is represented with a finite dimensional BlockedMESNRV distribution of Adcock and Shutes.