com.linkedin.photon.ml.hyperparameter.estimators
the covariance kernel
if true, the estimator normalizes labels to a mean of zero before fitting
learn a target function with noise
transformation function to apply for predictions
the number of samples to draw during the burn-in phase of kernel parameter estimation
the number of samples to draw for estimating kernel parameters
Estimates kernel parameters by sampling from the kernel parameter likelihood function
We assume a uniform prior over the kernel parameters $\theta$ and observed features $x$, therefore:
$l(\theta|x,y) = p(y|\theta,x) \propto p(\theta|x,y)$
Since the slice sampling algorithm requires only that the function be proportional to the target distribution, sampling from this function is equivalent to sampling from $p(\theta|x,y)$. These samples can then be used to compute a Monte Carlo estimate of the response at a new query point $x'$ by integrating over values of $\theta$:
$\int r(x', \theta) \, p(\theta|x,y) \, d\theta$
In this way we (approximately) marginalize over all $\theta$ and arrive at a more robust estimate than would be produced by computing a maximum likelihood point estimate of the parameters.
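The Monte Carlo marginalization described above can be sketched as follows. This is an illustrative Python sketch, not Photon ML's Scala API: `predict_fn` is a hypothetical per-kernel predictor $r(x', \theta)$, and the function simply averages its output over posterior samples of $\theta$.

```python
import numpy as np

def predict_marginalized(x_query, theta_samples, predict_fn):
    """Monte Carlo estimate of the response at x_query: average the
    per-kernel prediction r(x', theta) over samples of theta drawn
    from p(theta|x,y), approximately marginalizing out theta."""
    preds = [predict_fn(x_query, theta) for theta in theta_samples]
    return np.mean(preds, axis=0)
```

Because each sampled $\theta$ contributes a full prediction, the average reflects uncertainty in the kernel parameters rather than committing to a single maximum likelihood point estimate.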
the observed features
the observed labels
a collection of covariance kernels corresponding to the sampled kernel parameters
Produces a Gaussian Process regression model from the input features and labels
the observed features
the observed labels
the estimated model
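For reference, exact GP regression with fixed kernel parameters follows Algorithm 2.1 of GPML, Chapter 2: Cholesky-factor the training covariance, solve for $\alpha = (K + \sigma_n^2 I)^{-1} y$, and compute the predictive mean as $K_*^\top \alpha$. The sketch below is a minimal Python illustration assuming a squared-exponential kernel; the names (`rbf_kernel`, `gp_fit_predict`) are illustrative, not Photon ML's.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential covariance (one possible kernel choice).
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1) / length_scale ** 2)

def gp_fit_predict(x_train, y_train, x_test, noise=1e-6, length_scale=1.0):
    """Exact GP regression per GPML Algorithm 2.1: Cholesky-solve for
    alpha = (K + noise * I)^{-1} y, then predictive mean = K_*^T alpha."""
    k = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    chol = np.linalg.cholesky(k)
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, y_train))
    k_star = rbf_kernel(x_train, x_test, length_scale)
    return k_star.T @ alpha
```

With a small noise term, the predictive mean at the training inputs reproduces the observed labels almost exactly, which is a useful sanity check on an implementation.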
Samples the next theta, given the previous one
the previous sample
the observed features
the observed labels
the next theta sample
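A single slice-sampling update (Neal, 2003) can be sketched as follows for a univariate $\theta$; the actual estimator samples multivariate kernel parameters, so this is a simplified Python illustration with a hypothetical unnormalized log-likelihood `log_l`.

```python
import numpy as np

def slice_sample_step(theta, log_l, width=1.0, rng=None):
    """One univariate slice-sampling update: draw a height under the
    unnormalized log-likelihood at theta, step out a bracket containing
    the slice, then shrink the bracket until a point on the slice is found."""
    if rng is None:
        rng = np.random.default_rng()
    log_y = log_l(theta) + np.log(rng.uniform())  # slice height
    lo = theta - width * rng.uniform()            # randomly placed bracket
    hi = lo + width
    while log_l(lo) > log_y:                      # step out left
        lo -= width
    while log_l(hi) > log_y:                      # step out right
        hi += width
    while True:                                   # shrink until accepted
        prop = rng.uniform(lo, hi)
        if log_l(prop) > log_y:
            return prop
        if prop < theta:
            lo = prop
        else:
            hi = prop
```

Because each step only evaluates the likelihood up to a constant of proportionality, this is exactly why sampling from $l(\theta|x,y)$ suffices in place of the normalized posterior.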
Estimates a Gaussian Process regression model
"Gaussian Processes for Machine Learning" (GPML), http://www.gaussianprocess.org/gpml/, Chapter 2