Class io.github.mandar2812.dynaml.optimization.GradBasedBackPropagation

abstract class GradBasedBackPropagation[LayerP, I] extends RegularizedOptimizer[NeuralStack[LayerP, I], I, I, Stream[(I, I)]]

Type Parameters

LayerP
  The type of the parameters for each layer.
I
  The type of input/output patterns.

Linear Supertypes
RegularizedOptimizer[NeuralStack[LayerP, I], I, I, Stream[(I, I)]], Optimizer[NeuralStack[LayerP, I], I, I, Stream[(I, I)]], Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new GradBasedBackPropagation()

Abstract Value Members

  1. abstract val backPropagate: MetaPipe[LayerP, Stream[(I, I)], Stream[I]]

    A meta pipeline which, for a particular value of the layer parameters, returns a data pipe that takes as input a Stream of Tuple2 consisting of the deltas and the gradients of the activation function with respect to the local fields (calculated via Activation.grad).

  2. abstract val computeOutputDelta: StreamMapPipe[(I, I, I), (I, Double)]

    A data pipeline which takes a Tuple3 consisting of the output layer activations, the targets and the gradients of the output activations with respect to their local fields, respectively, and returns the output layer delta values together with the loss.

  3. abstract val gradCompute: DataPipe[Stream[(I, I)], LayerP]

    A data pipeline which takes as input a Stream of Tuple2 whose first element is the activation and whose second element is the delta value, and outputs the gradient of the layer parameters.

  4. abstract val stackFactory: NeuralStackFactory[LayerP, I]

    A factory which assembles a NeuralStack from a sequence of layer parameters.
  5. abstract val updater: MomentumUpdater[Seq[LayerP]]

    Performs the actual update to the layer parameters after all the gradients have been calculated.
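
The four abstract members above together define one backpropagation pass: computeOutputDelta produces the deltas at the output layer, backPropagate carries them backwards through the layers, gradCompute turns (activation, delta) pairs into parameter gradients, and updater applies them. The arithmetic can be sketched in a minimal self-contained form, assuming a squared-error loss and plain Seq[Double] patterns; all names below are illustrative for this sketch and are not DynaML API:

```scala
// Toy analogues of the abstract members, with LayerP taken as a weight
// matrix (Seq of rows) and I as Seq[Double]. Illustrative only.
object BackPropSketch {
  type Vec = Seq[Double]

  // computeOutputDelta analogue for squared-error loss:
  // delta_L = (activation - target) * sigma'(localField), loss = 1/2 ||a - t||^2
  def outputDelta(act: Vec, target: Vec, grad: Vec): (Vec, Double) = {
    val diff  = act.zip(target).map { case (a, t) => a - t }
    val delta = diff.zip(grad).map { case (d, g) => d * g }
    (delta, 0.5 * diff.map(d => d * d).sum)
  }

  // backPropagate analogue: for the layer weight matrix w (rows index the
  // layer's outputs), delta_l = (w^T delta_{l+1}) * sigma'(z_l), element-wise.
  def backPropDelta(w: Seq[Vec])(deltaNext: Vec, gradLocal: Vec): Vec = {
    val wTd = gradLocal.indices.map(j =>
      w.indices.map(i => w(i)(j) * deltaNext(i)).sum)
    wTd.zip(gradLocal).map { case (s, g) => s * g }
  }

  // gradCompute analogue: weight gradient as the outer product of the
  // delta values and the incoming activations.
  def gradW(delta: Vec, act: Vec): Seq[Vec] =
    delta.map(d => act.map(a => d * a))

  // MomentumUpdater analogue: v <- mu*v - eta*g ; w <- w + v
  def momentumStep(w: Vec, v: Vec, g: Vec,
                   eta: Double, mu: Double): (Vec, Vec) = {
    val vNew = v.zip(g).map { case (vi, gi) => mu * vi - eta * gi }
    (w.zip(vNew).map { case (wi, vi) => wi + vi }, vNew)
  }
}
```

In DynaML these steps are expressed as composable pipes (MetaPipe, DataPipe, StreamMapPipe) rather than bare functions, which is what lets optimize chain them over a whole NeuralStack.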

Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def _momentum: Double

  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  9. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  10. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  11. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  12. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  13. var miniBatchFraction: Double

    Attributes
    protected
    Definition Classes
    RegularizedOptimizer
  14. var momentum: Double

    Attributes
    protected
  15. def momentum_(m: Double): GradBasedBackPropagation[LayerP, I]

    Set the momentum parameter and return the modified optimizer instance.
  16. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  17. final def notify(): Unit

    Definition Classes
    AnyRef
  18. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  19. var numIterations: Int

    Attributes
    protected
    Definition Classes
    RegularizedOptimizer
  20. def optimize(nPoints: Long, data: Stream[(I, I)], initialStack: NeuralStack[LayerP, I]): NeuralStack[LayerP, I]

    Solve the optimization problem of determining the NeuralStack weights from training data.

    nPoints
      The number of training data points
    data
      Training data
    initialStack
      The initial NeuralStack before training
    returns
      A NeuralStack with the learned layer weights and biases.

    Definition Classes
    GradBasedBackPropagation → Optimizer
  21. var regParam: Double

    Attributes
    protected
    Definition Classes
    RegularizedOptimizer
  22. def setMiniBatchFraction(fraction: Double): GradBasedBackPropagation.this.type

    Set the fraction of data to be used for each SGD iteration. Default: 1.0 (corresponding to deterministic/classical gradient descent).

    Definition Classes
    RegularizedOptimizer
  23. def setNumIterations(iters: Int): GradBasedBackPropagation.this.type

    Set the number of iterations for SGD. Default: 100.

    Definition Classes
    RegularizedOptimizer
  24. def setRegParam(regParam: Double): GradBasedBackPropagation.this.type

    Set the regularization parameter. Default: 0.0.

    Definition Classes
    RegularizedOptimizer
  25. def setStepSize(step: Double): GradBasedBackPropagation.this.type

    Set the initial step size of SGD for the first step. Default: 1.0. In subsequent steps, the step size decreases as stepSize/sqrt(t).

    Definition Classes
    RegularizedOptimizer
  26. var stepSize: Double

    Attributes
    protected
    Definition Classes
    RegularizedOptimizer
  27. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  28. def toString(): String

    Definition Classes
    AnyRef → Any
  29. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  30. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  31. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
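
Taken together, optimize and the inherited setters describe a standard decaying-step SGD loop: each of numIterations steps samples a miniBatchFraction of the data, computes the (regularized) gradient, and moves by stepSize / sqrt(t). A toy self-contained illustration of those semantics on 1-D least squares follows; this is not DynaML's implementation, and the object and parameter names are made up for the sketch (the defaults mirror those documented above):

```scala
// Hedged sketch of the documented SGD semantics: decaying step size
// stepSize/sqrt(t), L2 regularization via regParam, and mini-batch
// sampling via miniBatchFraction. Model: fit w minimizing
// 1/2 * mean (w*x - y)^2 + regParam/2 * w^2.
object OptimizeLoopSketch {
  def run(data: Seq[(Double, Double)], w0: Double,
          stepSize: Double = 1.0, numIterations: Int = 100,
          regParam: Double = 0.0, miniBatchFraction: Double = 1.0): Double = {
    val rnd = new scala.util.Random(42)
    (1 to numIterations).foldLeft(w0) { (w, t) =>
      // Keep each point with probability miniBatchFraction (1.0 = full batch).
      val batch = data.filter(_ => rnd.nextDouble() <= miniBatchFraction)
      if (batch.isEmpty) w
      else {
        val g = batch.map { case (x, y) => (w * x - y) * x }.sum / batch.size +
          regParam * w
        w - (stepSize / math.sqrt(t)) * g // step decays as stepSize/sqrt(t)
      }
    }
  }
}
```

In DynaML the same knobs would be configured fluently, e.g. setStepSize(0.5).setNumIterations(200).setRegParam(0.01) on a concrete subclass, before calling optimize with the training stream and an initial NeuralStack.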
