Package io.github.mandar2812.dynaml.optimization

package optimization


Type Members

  1. abstract class AbstractCSA[M <: GloballyOptimizable, M1] extends AbstractGridSearch[M, M1]

    Skeleton implementation of the Coupled Simulated Annealing algorithm

  2. abstract class AbstractGridSearch[M <: GloballyOptimizable, M1] extends ModelTuner[M, M1]

  3. class BackPropagation extends RegularizedOptimizer[FFNeuralGraph, DenseVector[Double], DenseVector[Double], Stream[(DenseVector[Double], DenseVector[Double])]]

    Implementation of the standard back-propagation with momentum, using the "generalized delta rule".
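
    As a rough illustration (not this class's actual API), the generalized delta rule combines the current gradient with a momentum term carried over from the previous step; a minimal Scala sketch using Breeze:

      import breeze.linalg.DenseVector

      // Generalized delta rule with momentum (illustrative):
      // dw(t) = -eta * grad + alpha * dw(t-1);  w(t+1) = w(t) + dw(t)
      def deltaRuleStep(
        w: DenseVector[Double], grad: DenseVector[Double],
        prevDelta: DenseVector[Double], eta: Double, alpha: Double)
      : (DenseVector[Double], DenseVector[Double]) = {
        val delta = grad * (-eta) + prevDelta * alpha
        (w + delta, delta)
      }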

  4. trait BasicUpdater[P] extends Serializable

  5. class CommitteeModelSolver extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, Stream[(DenseVector[Double], Double)]]

    Solves the optimization problem pertaining to the weights of a committee model.

  6. class ConjugateGradient extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, Iterable[CausalEdge]]

  7. class ConjugateGradientSpark extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, RDD[LabeledPoint]]

  8. class CoupledSimulatedAnnealing[M <: GloballyOptimizable] extends AbstractCSA[M, M] with GlobalOptimizer[M]

    Implementation of the Coupled Simulated Annealing algorithm for global optimization.
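
    The distinguishing feature of CSA is that the acceptance probability of each annealer in the ensemble is coupled to the energies of all the others. A minimal sketch of that coupling (an illustration of the idea, not this class's API):

      import scala.math.exp

      // Probability term with which annealer i accepts a candidate state,
      // coupled to the whole ensemble via the normalizer gamma.
      def acceptanceProbability(
        energies: Seq[Double], i: Int, accTemp: Double): Double = {
        val eMax = energies.max // shift for numerical stability
        val gamma = energies.map(e => exp((e - eMax) / accTemp)).sum
        exp((energies(i) - eMax) / accTemp) / gamma
      }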

  9. class FFBackProp extends GradBasedBackPropagation[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]]

  10. class FFLayerUpdater extends MomentumUpdater[Seq[(DenseMatrix[Double], DenseVector[Double])]]

  11. class GPMixtureMachine[T, I] extends MixtureMachine[T, I, Double, PartitionedVector, PartitionedPSDMatrix, BlockedMultiVariateGaussian, MultGaussianPRV, AbstractGPRegressionModel[T, I]]

    Constructs a Gaussian process mixture model from a single AbstractGPRegressionModel instance.

    T

The type of the GP training data.

    I

    The index set/input domain of the GP model.

  12. trait GeneralGradient extends AnyRef

  13. trait GlobalOptimizer[T <: GloballyOptimizable] extends ModelTuner[T, T]

    High-level interface defining the core functions of a global optimizer.

  14. trait GloballyOptWithGrad extends GloballyOptimizable

  15. trait GloballyOptimizable extends AnyRef

    A common binding characteristic between all "globally optimizable" models, i.e. models whose hyper-parameters can be optimized/tuned.

  16. abstract class GradBasedBackPropagation[LayerP, I] extends RegularizedOptimizer[NeuralStack[LayerP, I], I, I, Stream[(I, I)]]

    LayerP

The type of the parameters for each layer.

    I

    The type of input/output patterns.

  17. class GradBasedGlobalOptimizer[M <: GloballyOptWithGrad] extends GlobalOptimizer[M]

  18. abstract class Gradient extends Serializable

    Class used to compute the gradient for a loss function, given a single data point.
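
    The contract is plausibly of the following shape (a hedged sketch with a hypothetical name, not the verbatim definition): given a data point, its label and the current weights, return the gradient and the loss:

      import breeze.linalg.DenseVector

      // Hypothetical shape of a single-point gradient computation.
      trait PointGradient extends Serializable {
        def compute(
          data: DenseVector[Double], label: Double,
          weights: DenseVector[Double]): (DenseVector[Double], Double)
      }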

  19. class GradientDescent extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, Stream[(DenseVector[Double], Double)]]

    Implements gradient descent on the generated computation graph to calculate approximately optimal values of the model parameters.
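
    In outline (a self-contained sketch, not this class's API), each iteration averages the per-point gradients over the training stream and steps against them:

      import breeze.linalg.DenseVector

      // Illustrative batch gradient descent loop.
      def gradientDescent(
        data: Stream[(DenseVector[Double], Double)],
        init: DenseVector[Double], eta: Double, iters: Int,
        pointGrad: (DenseVector[Double], Double, DenseVector[Double]) => DenseVector[Double])
      : DenseVector[Double] = {
        var w = init
        for (_ <- 1 to iters) {
          val g = data.map { case (x, y) => pointGrad(x, y, w) }
            .reduce(_ + _) * (1.0 / data.length)
          w = w - g * eta
        }
        w
      }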

  20. class GradientDescentSpark extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, RDD[LabeledPoint]]

    Implementation of stochastic gradient descent (SGD) on Spark RDDs.

  21. class GridSearch[M <: GloballyOptimizable] extends AbstractGridSearch[M, M] with GlobalOptimizer[M]

  22. abstract class HessianUpdater extends Updater

  23. class HingeGradient extends Gradient

    Compute gradient and loss for a hinge loss function, as used in SVM binary classification. Note: this assumes that the labels are {0, 1}.
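
    A sketch of the resulting point gradient, with the {0, 1} labels mapped to {-1, +1} first (illustrative, not this class's exact code):

      import breeze.linalg.DenseVector

      // loss = max(0, 1 - y' * (w . x)) with y' = 2y - 1;
      // gradient = -y' * x inside the margin, 0 otherwise.
      def hingeGradient(x: DenseVector[Double], y: Double, w: DenseVector[Double])
      : (DenseVector[Double], Double) = {
        val yPM = 2.0 * y - 1.0
        val margin = yPM * (w dot x)
        if (margin < 1.0) (x * (-yPM), 1.0 - margin)
        else (DenseVector.zeros[Double](w.length), 0.0)
      }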

  24. class L1Updater extends Updater

    Updater for L1 regularized problems: R(w) = ||w||_1. Uses a step size decreasing with the square root of the number of iterations.

    Instead of subgradient of the regularizer, the proximal operator for the L1 regularization is applied after the gradient step. This is known to result in better sparsity of the intermediate solution.

    The corresponding proximal operator for the L1 norm is the soft-thresholding function. That is, each weight component is shrunk towards 0 by shrinkageVal.

    If w > shrinkageVal, set the weight component to w - shrinkageVal. If w < -shrinkageVal, set it to w + shrinkageVal. If -shrinkageVal < w < shrinkageVal, set it to 0.

    Equivalently: set each weight component to signum(w) * max(0.0, abs(w) - shrinkageVal).
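
    The soft-thresholding step, as a standalone sketch (shrinkageVal would typically be the product of the current step size and the regularization parameter):

      import breeze.linalg.DenseVector

      // Proximal step for the L1 norm: shrink each component towards 0.
      def softThreshold(w: DenseVector[Double], shrinkageVal: Double)
      : DenseVector[Double] =
        w.map(wi => math.signum(wi) * math.max(0.0, math.abs(wi) - shrinkageVal))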

  25. class LSSVMLinearSolver extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, (DenseMatrix[Double], DenseVector[Double])]

    Solves the linear problem resulting from applying the Karush-Kuhn-Tucker conditions on the Dual Least Squares SVM optimization problem.
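
    For reference, the KKT conditions reduce to a single linear system in the bias b and the dual variables alpha; a sketch assuming a precomputed kernel matrix K, targets y and regularization parameter gamma:

      import breeze.linalg.{DenseMatrix, DenseVector}

      // [ 0   1^T         ] [ b     ]   [ 0 ]
      // [ 1   K + I/gamma ] [ alpha ] = [ y ]
      def solveLSSVM(K: DenseMatrix[Double], y: DenseVector[Double], gamma: Double)
      : (Double, DenseVector[Double]) = {
        val n = y.length
        val A = DenseMatrix.zeros[Double](n + 1, n + 1)
        for (i <- 1 to n) { A(0, i) = 1.0; A(i, 0) = 1.0 }
        for (i <- 1 to n; j <- 1 to n)
          A(i, j) = K(i - 1, j - 1) + (if (i == j) 1.0 / gamma else 0.0)
        val sol = A \ DenseVector.vertcat(DenseVector(0.0), y)
        (sol(0), sol(1 to n))
      }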

  26. class LaplacePosteriorMode[I] extends RegularizedOptimizer[DenseVector[Double], I, Double, (DenseMatrix[Double], DenseVector[Double])]

    Finds the mode of a model's posterior distribution via the Laplace approximation.

  27. class LeastSquaresGradient extends Gradient

    Compute gradient and loss for a least-squares loss function, as used in linear regression. This is correct for the averaged least squares loss (mean squared error): L = (1/2) ||w . phi(x) - y||^2.
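
    A sketch for the linear case phi(x) = x:

      import breeze.linalg.DenseVector

      // L = (1/2) (w . x - y)^2  =>  dL/dw = (w . x - y) * x
      def leastSquaresGradient(x: DenseVector[Double], y: Double, w: DenseVector[Double])
      : (DenseVector[Double], Double) = {
        val diff = (w dot x) - y
        (x * diff, 0.5 * diff * diff)
      }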

  28. class LeastSquaresSVMGradient extends Gradient

    Compute gradient and loss for a least-squares loss function, as used in the LS-SVM. This is correct for the averaged least squares loss (mean squared error): L = (1/2) (1 - y * (w . x))^2. See also the documentation for the precise formulation.

  29. class LogisticGradient extends Gradient

    Compute gradient and loss for a logistic loss function, as used in binary classification.
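
    For labels in {0, 1} the point gradient takes the familiar form (a sketch, not this class's exact code):

      import breeze.linalg.DenseVector

      // gradient of the logistic loss: (sigmoid(w . x) - y) * x
      def logisticGradient(x: DenseVector[Double], y: Double, w: DenseVector[Double])
      : DenseVector[Double] = {
        val p = 1.0 / (1.0 + math.exp(-(w dot x)))
        x * (p - y)
      }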

  30. abstract class MixtureMachine[T, I, Y, YDomain, YDomainVar, BaseDistr <: ContinuousDistr[YDomain] with Moments[YDomain, YDomainVar] with HasErrorBars[YDomain], W1 <: ContinuousRVWithDistr[YDomain, BaseDistr], BaseProcess <: ContinuousProcessModel[T, I, Y, W1] with SecondOrderProcessModel[T, I, Y, Double, DenseMatrix[Double], W1] with GloballyOptimizable] extends AbstractCSA[BaseProcess, GenContinuousMixtureModel[T, I, Y, YDomain, YDomainVar, BaseDistr, W1, BaseProcess]]

  31. trait ModelTuner[T <: GloballyOptimizable, T1] extends AnyRef

    A model tuner takes a model which implements GloballyOptimizable and "tunes" it, returning (possibly) a model of a different type.

  32. trait MomentumUpdater[P] extends BasicUpdater[P]

    An updater which uses both the local gradient and the inertia of the system to update its parameters.

  33. trait Optimizer[P, Q, R, S] extends Serializable

    Trait for optimization problem solvers.

    P

    The type of the parameters of the model to be optimized.

    Q

    The type of the predictor variable

    R

    The type of the target variable

    S

    The type of the edge containing the features and label.
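
    Given these type parameters, the contract is plausibly of the following shape (a hedged sketch with a hypothetical name, not the verbatim definition):

      // Consume training data of type S and return optimized model
      // parameters of type P, starting from an initial guess.
      trait OptimizerSketch[P, Q, R, S] extends Serializable {
        def optimize(nPoints: Long, data: S, initialParams: P): P
      }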

  34. class ProbGPCommMachine[T, I] extends CoupledSimulatedAnnealing[AbstractGPRegressionModel[T, I]]

    Builds a GP committee model after performing the CSA routine.

  35. class ProbitGradient extends Gradient

  36. class QuasiNewtonOptimizer extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, Stream[(DenseVector[Double], Double)]]

  37. class RDDCommitteeSolver extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, RDD[(DenseVector[Double], Double)]]

  38. class RegularizedLSSolver extends RegularizedOptimizer[DenseVector[Double], DenseVector[Double], Double, (DenseMatrix[Double], DenseVector[Double])]

  39. abstract class RegularizedOptimizer[P, Q, R, S] extends Optimizer[P, Q, R, S] with Serializable

  40. class SimpleBFGSUpdater extends HessianUpdater

  41. class SimpleUpdater extends Updater

  42. class SquaredL2Updater extends Updater

    Updater for L2 regularized problems: R(w) = (1/2) ||w||^2. Uses a step size decreasing with the square root of the number of iterations.
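
    A sketch of one such update at iteration t:

      import breeze.linalg.DenseVector

      // w <- w - eta / sqrt(t) * (gradient + regParam * w)
      def l2Step(w: DenseVector[Double], grad: DenseVector[Double],
                 eta: Double, regParam: Double, t: Int): DenseVector[Double] =
        w - (grad + w * regParam) * (eta / math.sqrt(t.toDouble))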

  43. abstract class Updater extends BasicUpdater[DenseVector[Double]]

  44. class VectorAccumulator extends AccumulatorParam[DenseVector[Double]]

Value Members

  1. object AbstractCSA

  2. object BackPropagation extends Serializable

  3. object ConjugateGradient extends Serializable

  4. object ConjugateGradientSpark extends Serializable

  5. object GlobalOptimizer

  6. object GloballyOptimizable

  7. object GradientDescent extends Serializable

  8. object GradientDescentSpark extends Serializable

  9. object LaplacePosteriorMode extends Serializable

  10. object MixtureMachine

  11. object ProbGPCommMachine

  12. object QuasiNewtonOptimizer extends Serializable

  13. object mcmc