Class FFBackProp

Package io.github.mandar2812.dynaml.optimization

class FFBackProp extends GradBasedBackPropagation[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]]

Gradient-based backpropagation for feed-forward neural stacks: trains NeuralStack models whose layers are parametrised by a (weight matrix, bias vector) pair and act on DenseVector[Double] activations.

Linear Supertypes
GradBasedBackPropagation[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]], RegularizedOptimizer[NeuralStack[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]], DenseVector[Double], DenseVector[Double], Stream[(DenseVector[Double], DenseVector[Double])]], Optimizer[NeuralStack[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]], DenseVector[Double], DenseVector[Double], Stream[(DenseVector[Double], DenseVector[Double])]], Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new FFBackProp(stackF: NeuralStackFactory[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]])

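    A minimal construction sketch. The NeuralStackFactory instance is assumed to be built elsewhere (its construction, and the import path of the neural-net types, are assumptions not documented on this page):

      import breeze.linalg.{DenseMatrix, DenseVector}
      import io.github.mandar2812.dynaml.models.neuralnets.NeuralStackFactory
      import io.github.mandar2812.dynaml.optimization.FFBackProp

      // Hypothetical, pre-built factory for stacks of (weights, biases) layers.
      val layerFactory: NeuralStackFactory[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]] = ???

      val backProp = new FFBackProp(layerFactory)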

Type Members

  1. type PatternType = DenseVector[Double]


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def _momentum: Double

    Definition Classes
    GradBasedBackPropagation
  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. val backPropagate: MetaPipe[(DenseMatrix[Double], DenseVector[Double]), Stream[(PatternType, PatternType)], Stream[DenseVector[Double]]]


    A meta pipeline which, for a particular value of the layer parameters, returns a data pipe that takes as input a Stream of Tuple2 instances, each consisting of the deltas and the gradients of the activation function with respect to its local fields (calculated via Activation.grad).

    Definition Classes
    FFBackProp → GradBasedBackPropagation
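
    A usage sketch, assuming DynaML's usual pipe semantics (applying a MetaPipe to parameters yields a DataPipe); the inputs here are hypothetical:

      // Fix the layer parameters (weights, biases) to obtain a pipe over the
      // stream of (delta, activation gradient) pairs for that layer.
      val params: (DenseMatrix[Double], DenseVector[Double]) = ???
      val deltasAndGrads: Stream[(DenseVector[Double], DenseVector[Double])] = ???

      val propagated: Stream[DenseVector[Double]] = backProp.backPropagate(params)(deltasAndGrads)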
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. val computeOutputDelta: StreamMapPipe[(PatternType, PatternType, PatternType), (DenseVector[Double], Double)]


    A data pipeline which takes a Tuple3 consisting of the output layer activations, the targets, and the gradients of the output activations with respect to their local fields (in that order), and returns the output layer delta values.

    Definition Classes
    FFBackProp → GradBasedBackPropagation
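
    A usage sketch with hypothetical inputs, assuming a StreamMapPipe maps element-wise over its input stream; per the signature, each output element pairs a delta vector with a Double, a per-pattern scalar that this page does not document further:

      // (output activation, target, activation gradient) triples, one per pattern.
      val outputTriples: Stream[(DenseVector[Double], DenseVector[Double], DenseVector[Double])] = ???

      val outputDeltas: Stream[(DenseVector[Double], Double)] = backProp.computeOutputDelta(outputTriples)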
  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. val gradCompute: DataPipe[Stream[(PatternType, PatternType)], (DenseMatrix[Double], DenseVector[Double])]


    A data pipeline which takes as input a Stream of Tuple2 instances whose first element is the activation and whose second element is the delta value, and outputs the gradient of the layer parameters.

    Definition Classes
    FFBackProp → GradBasedBackPropagation
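
    A usage sketch with hypothetical inputs:

      // (activation, delta) pairs collected for a single layer.
      val actDeltaPairs: Stream[(DenseVector[Double], DenseVector[Double])] = ???

      // Gradients of the layer parameters: a (weight gradient, bias gradient) pair.
      val (gradW, gradB) = backProp.gradCompute(actDeltaPairs)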
  14. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  15. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  16. var miniBatchFraction: Double

    Attributes
    protected
    Definition Classes
    RegularizedOptimizer
  17. var momentum: Double

    Attributes
    protected
    Definition Classes
    GradBasedBackPropagation
  18. def momentum_(m: Double): GradBasedBackPropagation[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]]

    Definition Classes
    GradBasedBackPropagation
  19. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  20. final def notify(): Unit

    Definition Classes
    AnyRef
  21. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  22. var numIterations: Int

    Attributes
    protected
    Definition Classes
    RegularizedOptimizer
  23. def optimize(nPoints: Long, data: Stream[(DenseVector[Double], DenseVector[Double])], initialStack: NeuralStack[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]]): NeuralStack[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]]


    Solve the optimization problem of determining the NeuralStack weights from training data.

    nPoints
      The number of training data points
    data
      Training data
    initialStack
      The initial NeuralStack before training
    returns
      A NeuralStack with the learned layer weights and biases.

    Definition Classes
    GradBasedBackPropagation → Optimizer
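
    A training sketch using only the members documented on this page; the data, initial stack, and hyper-parameter values are hypothetical:

      val trainingData: Stream[(DenseVector[Double], DenseVector[Double])] = ???
      val initialStack: NeuralStack[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]] = ???

      val trainedStack = backProp
        .setStepSize(0.05)
        .setNumIterations(200)
        .setRegParam(0.001)
        .setMiniBatchFraction(0.5)
        .momentum_(0.9)  // returns the GradBasedBackPropagation view, which still exposes optimize
        .optimize(trainingData.length.toLong, trainingData, initialStack)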
  24. var regParam: Double

    Attributes
    protected
    Definition Classes
    RegularizedOptimizer
  25. def setMiniBatchFraction(fraction: Double): FFBackProp.this.type


    Set the fraction of data to be used for each SGD iteration. Default: 1.0 (corresponding to deterministic/classical gradient descent).

    Definition Classes
    RegularizedOptimizer
  26. def setNumIterations(iters: Int): FFBackProp.this.type


    Set the number of iterations for SGD. Default: 100.

    Definition Classes
    RegularizedOptimizer
  27. def setRegParam(regParam: Double): FFBackProp.this.type


    Set the regularization parameter. Default: 0.0.

    Definition Classes
    RegularizedOptimizer
  28. def setStepSize(step: Double): FFBackProp.this.type


    Set the initial step size of SGD for the first step. Default: 1.0. In subsequent steps, the step size decreases as stepSize/sqrt(t); for example, with stepSize = 1.0 the effective step at iteration t = 4 is 1.0/sqrt(4) = 0.5.

    Definition Classes
    RegularizedOptimizer
  29. val stackFactory: NeuralStackFactory[(DenseMatrix[Double], DenseVector[Double]), DenseVector[Double]]

    Definition Classes
    FFBackProp → GradBasedBackPropagation
  30. var stepSize: Double

    Attributes
    protected
    Definition Classes
    RegularizedOptimizer
  31. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  32. def toString(): String

    Definition Classes
    AnyRef → Any
  33. val updater: FFLayerUpdater


    Performs the actual update to the layer parameters after all the gradients have been calculated.

    Definition Classes
    FFBackProp → GradBasedBackPropagation
  34. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  35. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  36. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
