Class GPBasisFuncRegressionModel

Package io.github.mandar2812.dynaml.models.gp

abstract class GPBasisFuncRegressionModel[T, I] extends AbstractGPRegressionModel[T, I]

Basis Function Gaussian Process Regression

A single-output Gaussian process regression model. Performs GP/spline smoothing/regression with vector inputs and a single scalar output.

The model incorporates explicit basis functions, which are used to parameterize the mean/trend function.

T

The data structure holding the training data.

I

The index set over which the Gaussian Process is defined.

Linear Supertypes
AbstractGPRegressionModel[T, I], GloballyOptWithGrad, GloballyOptimizable, SecondOrderProcessModel[T, I, Double, Double, DenseMatrix[Double], MultGaussianPRV], ContinuousProcessModel[T, I, Double, MultGaussianPRV], StochasticProcessModel[T, I, Double, MultGaussianPRV], Model[T, I, Double], AnyRef, Any

Instance Constructors

  1. new GPBasisFuncRegressionModel(cov: LocalScalarKernel[I], n: LocalScalarKernel[I], data: T, num: Int, basisFunc: DataPipe[I, DenseVector[Double]], basis_param_prior: MultGaussianRV)(implicit arg0: ClassTag[I])


    cov

    The covariance function/kernel of the GP model, expressed as a LocalScalarKernel instance

    n

    Measurement noise covariance of the GP model.

    data

    Training data set of generic type T

    num

    The number of training data instances.

    basisFunc

    A basis function representation for the input features, represented as a DataPipe.

    basis_param_prior

    A Gaussian prior on the basis function trend coefficients.
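The basisFunc argument is a DataPipe[I, DenseVector[Double]] feature map whose output is weighted by coefficients drawn from basis_param_prior. As a rough, dependency-free illustration of the role it plays, here is a polynomial-basis sketch in plain Scala (polyBasis and trend are hypothetical helpers, not part of DynaML):

```scala
object BasisSketch {
  // phi(x) = (1, x, x^2, ..., x^degree): the trend is parameterized as w . phi(x)
  def polyBasis(degree: Int): Double => Array[Double] =
    x => Array.tabulate(degree + 1)(p => math.pow(x, p))

  // Trend value for coefficients w; in the model, w is drawn from the
  // Gaussian prior `basis_param_prior` (here a fixed illustrative vector).
  def trend(w: Array[Double], phi: Array[Double]): Double =
    w.zip(phi).map { case (a, b) => a * b }.sum

  def main(args: Array[String]): Unit = {
    val phi = polyBasis(2)(2.0)                 // Array(1.0, 2.0, 4.0)
    val m   = trend(Array(0.5, 1.0, 0.25), phi) // 0.5 + 2.0 + 1.0 = 3.5
    println(s"${phi.toList} $m")
  }
}
```

In the actual constructor the same mapping would be wrapped as a DataPipe over the index set I rather than a bare function.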

Abstract Value Members

  1. abstract def dataAsSeq(data: T): Seq[(I, Double)]

    Convert from the underlying data structure to Seq[(I, Y)], where I is the index set of the GP and Y is the value/label type.

    Definition Classes
    StochasticProcessModel
Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def _blockSize: Int

    Definition Classes
    AbstractGPRegressionModel
  5. def _current_state: Map[String, Double]

    Definition Classes
    GloballyOptimizable
  6. def _errorSigma: Int

    Definition Classes
    ContinuousProcessModel
  7. def _hyper_parameters: List[String]

    Definition Classes
    GloballyOptimizable
  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. val b: DenseVector[Double]
  10. var blockSize: Int

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel
  11. def blockSize_(b: Int): Unit

    Definition Classes
    AbstractGPRegressionModel
  12. var caching: Boolean

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel
  13. def calculateEnergyPipe(h: Map[String, Double], options: Map[String, String]): DataPipe2[Seq[I], PartitionedVector, Double]

    Returns a DataPipe2 which calculates the energy of data: T. See: energy below.

    Definition Classes
    AbstractGPRegressionModel
  14. def calculateGradEnergyPipe(h: Map[String, Double]): DataPipe2[Seq[I], PartitionedVector, Map[String, Double]]

    Returns a DataPipe which calculates the gradient of the energy, E(.), of data: T with respect to the model hyper-parameters. See: gradEnergy below.

    Definition Classes
    AbstractGPRegressionModel
  15. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  16. val covB: DenseMatrix[Double]
  17. val covariance: LocalScalarKernel[I]

    Underlying covariance function of the Gaussian process.

    Definition Classes
    AbstractGPRegressionModel → SecondOrderProcessModel
  18. var current_state: Map[String, Double]

    A Map which stores the current state of the system.

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel → GloballyOptimizable
  19. def data: T

    Definition Classes
    Model
  20. def dataAsIndexSeq(data: T): Seq[I]

    Convert from the underlying data structure to Seq[I], where I is the index set of the GP.

    Definition Classes
    StochasticProcessModel
  21. def energy(h: Map[String, Double], options: Map[String, String]): Double

    Calculates the energy of the configuration; most global optimization algorithms aim to find an approximate value of the hyper-parameters such that this function is minimized.

    h

    The value of the hyper-parameters in the configuration space

    options

    Optional parameters about configuration

    returns

    Configuration energy E(h). In this particular case E(h) = -log p(Y|X,h), the negative log marginal likelihood.

    Definition Classes
    AbstractGPRegressionModel → GloballyOptimizable
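For intuition, the energy E(h) = -log p(Y|X,h) can be written out by hand for a two-point, zero-mean GP. The following plain-Scala sketch (no DynaML/Breeze types; rbf and energy are hypothetical helpers) computes it with an RBF kernel plus i.i.d. measurement noise:

```scala
object EnergySketch {
  // Squared-exponential (RBF) kernel with length scale l.
  def rbf(x1: Double, x2: Double, l: Double): Double =
    math.exp(-math.pow(x1 - x2, 2) / (2 * l * l))

  // Negative log marginal likelihood for exactly two observations:
  //   E = 0.5 * (y^T K^-1 y + log det K + n log 2*pi),  K = K_rbf + noise*I
  // The 2x2 inverse is computed in closed form via the determinant.
  def energy(xs: Array[Double], ys: Array[Double], l: Double, noise: Double): Double = {
    require(xs.length == 2 && ys.length == 2)
    val k11 = rbf(xs(0), xs(0), l) + noise
    val k22 = rbf(xs(1), xs(1), l) + noise
    val k12 = rbf(xs(0), xs(1), l)
    val det = k11 * k22 - k12 * k12
    // Quadratic form y^T K^-1 y using inv(K) = (1/det) [[k22, -k12], [-k12, k11]]
    val quad = (ys(0) * (k22 * ys(0) - k12 * ys(1)) +
                ys(1) * (-k12 * ys(0) + k11 * ys(1))) / det
    0.5 * (quad + math.log(det) + 2 * math.log(2 * math.Pi))
  }

  def main(args: Array[String]): Unit =
    println(energy(Array(0.0, 1.0), Array(0.5, -0.2), l = 1.0, noise = 0.1))
}
```

Hyper-parameter tuning then amounts to searching over (l, noise) for the configuration minimizing this quantity.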
  22. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  23. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  24. def errorSigma_(s: Int): Unit

    Definition Classes
    ContinuousProcessModel
  25. val feature_map_cov: FeatureMapCovariance[I, DenseVector[Double]]
  26. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  27. val g: T

    The training data

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel → Model
  28. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  29. def getCrossKernelMatrix[U <: Seq[I]](test: U): PartitionedMatrix

    Attributes
    protected
    Definition Classes
    GPBasisFuncRegressionModel → AbstractGPRegressionModel
  30. def getTestKernelMatrix[U <: Seq[I]](test: U): PartitionedPSDMatrix

    Attributes
    protected
    Definition Classes
    GPBasisFuncRegressionModel → AbstractGPRegressionModel
  31. def getTrainKernelMatrix[U <: Seq[I]]: PartitionedPSDMatrix

    Attributes
    protected
    Definition Classes
    GPBasisFuncRegressionModel → AbstractGPRegressionModel
  32. def gradEnergy(h: Map[String, Double]): Map[String, Double]

    Calculates the gradient of the energy of the configuration, which is subtracted from the current value of h to yield a new hyper-parameter configuration.

    Override this function if you aim to implement a gradient-based hyper-parameter optimization routine like ML-II.

    h

    The value of the hyper-parameters in the configuration space

    returns

    Gradient of the objective function (marginal likelihood) as a Map

    Definition Classes
    AbstractGPRegressionModel → GloballyOptWithGrad
  33. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  34. var hyper_parameters: List[String]

    Stores the names of the hyper-parameters

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel → GloballyOptimizable
  35. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  36. var kernelMatrixCache: DenseMatrix[Double]

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel
  37. val mean: DataPipe[I, Double]

    The GP is taken to be zero mean, or centered. This is ensured by standardization of the data before it is used for further processing.

    Definition Classes
    GPBasisFuncRegressionModel → AbstractGPRegressionModel → SecondOrderProcessModel
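The zero-mean assumption is justified by centering/standardizing the targets before training. A minimal sketch of that preprocessing step (standardize is a hypothetical helper, not a DynaML API):

```scala
object StandardizeSketch {
  // Center and scale targets so the resulting sequence has mean 0 and
  // (population) standard deviation 1, matching the zero-mean GP assumption.
  def standardize(ys: Array[Double]): Array[Double] = {
    val mu = ys.sum / ys.length
    val sd = math.sqrt(ys.map(y => (y - mu) * (y - mu)).sum / ys.length)
    ys.map(y => (y - mu) / sd)
  }

  def main(args: Array[String]): Unit =
    println(StandardizeSketch.standardize(Array(2.0, 4.0, 6.0)).toList)
}
```

Predictions made on standardized targets must, of course, be mapped back through the inverse transform.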
  38. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  39. val noiseModel: LocalScalarKernel[I]

    Definition Classes
    AbstractGPRegressionModel
  40. final def notify(): Unit

    Definition Classes
    AnyRef
  41. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  42. val npoints: Int

    Definition Classes
    AbstractGPRegressionModel
  43. var partitionedKernelMatrixCache: PartitionedPSDMatrix

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel
  44. def persist(state: Map[String, Double]): Unit

    Cache the training kernel and noise matrices for fast access in future predictions.

    Definition Classes
    AbstractGPRegressionModel → GloballyOptimizable
  45. def predict(point: I): Double

    Predict the value of the target variable given a point.

    Definition Classes
    AbstractGPRegressionModel → Model
  46. def predictionWithErrorBars[U <: Seq[I]](testData: U, sigma: Int): Seq[(I, Double, Double, Double)]

    Draws three predictions from the posterior predictive distribution:

    • Mean or MAP estimate Y
    • Y- : the lower error bar estimate (mean - sigma*stdDeviation)
    • Y+ : the upper error bar estimate (mean + sigma*stdDeviation)

    Definition Classes
    AbstractGPRegressionModel → ContinuousProcessModel
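The returned tuples pack the index together with the mean and the two error bars. A dependency-free sketch of that packaging (withErrorBars is a hypothetical helper, not the DynaML method):

```scala
object ErrorBarsSketch {
  // Given a predictive mean and variance at one index, build the
  // (index, mean, mean - sigma*std, mean + sigma*std) tuple that
  // predictionWithErrorBars conceptually returns per test point.
  def withErrorBars(idx: Int, mean: Double, variance: Double, sigma: Int)
      : (Int, Double, Double, Double) = {
    val std = math.sqrt(variance)
    (idx, mean, mean - sigma * std, mean + sigma * std)
  }

  def main(args: Array[String]): Unit =
    println(withErrorBars(0, 1.5, 0.04, sigma = 2)) // ≈ (0, 1.5, 1.1, 1.9)
}
```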
  47. def predictiveDistribution[U <: Seq[I]](test: U): MultGaussianPRV

    Calculates the posterior predictive distribution for a particular set of test data points.

    test

    A Sequence or Sequence-like data structure storing the values of the input patterns.

    Definition Classes
    AbstractGPRegressionModel → StochasticProcessModel
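In the simplest case of one training point and one test point, the posterior predictive moments reduce to scalar algebra. A plain-Scala sketch of those formulas (rbf and posterior are hypothetical helpers; the model itself returns a full MultGaussianPRV over the test set):

```scala
object PosteriorSketch {
  // RBF kernel with unit length scale.
  def rbf(a: Double, b: Double): Double =
    math.exp(-0.5 * (a - b) * (a - b))

  // Zero-mean GP posterior at xTest given one noisy observation (xTrain, yTrain):
  //   mean = k(x*, x) / (k(x, x) + noise) * y
  //   var  = k(x*, x*) - k(x*, x)^2 / (k(x, x) + noise)
  def posterior(xTrain: Double, yTrain: Double, noise: Double, xTest: Double)
      : (Double, Double) = {
    val kxx = rbf(xTrain, xTrain) + noise
    val kts = rbf(xTest, xTrain)
    (kts * yTrain / kxx, rbf(xTest, xTest) - kts * kts / kxx)
  }

  def main(args: Array[String]): Unit = {
    // At the training input the mean shrinks toward y and the variance is small.
    val (m, v) = posterior(0.0, 1.0, 0.1, 0.0)
    println(s"$m $v")
  }
}
```

With n training points the same formulas hold with kernel matrices in place of scalars, which is what the partitioned-matrix machinery in AbstractGPRegressionModel implements.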
  48. def setState(s: Map[String, Double]): GPBasisFuncRegressionModel.this.type

    Set the model "state", which contains the values of its hyper-parameters with respect to the covariance and noise kernels.

    Definition Classes
    AbstractGPRegressionModel
  49. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  50. def test(testData: T): Seq[(I, Double, Double, Double, Double)]

    Returns a prediction with error bars for a test set of indexes and labels: (Index, Actual Value, Prediction, Lower Bar, Higher Bar).

    Definition Classes
    ContinuousProcessModel
  51. def toString(): String

    Definition Classes
    AnyRef → Any
  52. lazy val trainingData: Seq[I]

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel
  53. lazy val trainingDataLabels: PartitionedVector

    Attributes
    protected
    Definition Classes
    AbstractGPRegressionModel
  54. def unpersist(): Unit

    Forget the cached kernel & noise matrices.

    Definition Classes
    AbstractGPRegressionModel
  55. implicit val vf: VectorField
  56. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  57. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  58. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from AbstractGPRegressionModel[T, I]

Inherited from GloballyOptWithGrad

Inherited from GloballyOptimizable

Inherited from SecondOrderProcessModel[T, I, Double, Double, DenseMatrix[Double], MultGaussianPRV]

Inherited from ContinuousProcessModel[T, I, Double, MultGaussianPRV]

Inherited from StochasticProcessModel[T, I, Double, MultGaussianPRV]

Inherited from Model[T, I, Double]

Inherited from AnyRef

Inherited from Any
