package plugins

Type Members

  1. trait Builtins extends ImplicitsSingleton with Layers with Weights with Logging with Names with Operators with FloatTraining with FloatLiterals with FloatWeights with FloatLayers with CumulativeFloatLayers with DoubleTraining with DoubleLiterals with DoubleWeights with DoubleLayers with CumulativeDoubleLayers

    A plugin that enables all other DeepLearning.scala built-in plugins.

    Author:

    杨博 (Yang Bo)
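
    A minimal usage sketch (the weight value 10 is arbitrary; the Factory call follows the same pattern as the other examples on this page):

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      // One Factory call mixes in every built-in plugin at once,
      // instead of listing the individual traits by hand:
      val hyperparameters = Factory[Builtins].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.DoubleWeight(10)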

  2. trait CumulativeDoubleLayers extends DoubleLayers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Double.

    Author:

    杨博 (Yang Bo)

    Examples:
    1. Given a DoubleWeight,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.DoubleWeight(10)

      then the training result should be applied to it

      weight1.train.map { result =>
        result should be(10.0)

        weight1.data should be < 10.0
      }
    2. Given two DoubleWeights,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.DoubleWeight(10)
      val weight2 = hyperparameters.DoubleWeight(300)

      when adding them together,

      val weight1PlusWeight2 = weight1 + weight2

      then the training result should be applied to both weights

      weight1PlusWeight2.train.map { result =>
        result should be(310.0)

        weight2.data should be < 300.0
        weight1.data should be < 10.0
      }
    Note

    Unlike DoubleLayers, a DoubleLayer in this CumulativeDoubleLayers will share Tapes created in the forward pass for all dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.

  3. trait CumulativeFloatLayers extends FloatLayers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Float.

    Author:

    杨博 (Yang Bo)

    Examples:
    1. Given a FloatWeight,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with CumulativeFloatLayers with FloatWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.FloatWeight(10)

      then the training result should be applied to it

      weight1.train.map { result =>
        result should be(10.0f)
      
        weight1.data should be < 10.0f
      }
    2. Given two FloatWeights,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with CumulativeFloatLayers with FloatWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.FloatWeight(10)
      val weight2 = hyperparameters.FloatWeight(300)

      when adding them together,

      val weight1PlusWeight2 = weight1 + weight2

      then the training result should be applied to both weights

      weight1PlusWeight2.train.map { result =>
        result should be(310.0f)
      
        weight2.data should be < 300.0f
        weight1.data should be < 10.0f
      }
    Note

    Unlike FloatLayers, a FloatLayer in this CumulativeFloatLayers will share Tapes created in the forward pass for all dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.

  4. trait DoubleLayers extends Layers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Double.

    Author:

    杨博 (Yang Bo)

    Note

    By default, the computation in a DoubleLayer will be re-evaluated for each use if the DoubleLayer is used by multiple other operations. This behavior is very inefficient when there are diamond dependencies in a neural network; it's wise to use CumulativeDoubleLayers instead of this DoubleLayers in such a network, as illustrated in the sketch below.
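
    A hedged sketch of such a diamond dependency (names are illustrative; the plugin list mirrors the examples above):

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with DoubleLayers with DoubleWeights].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.DoubleWeight(3)
      // shared feeds both operands of +, forming a diamond:
      val shared = weight * weight
      // Under plain DoubleLayers, shared is recomputed once per use;
      // under CumulativeDoubleLayers its Tape would be shared instead.
      val diamond = shared + shared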

  5. trait DoubleLiterals extends AnyRef

    A plugin that enables scala.Double in neural networks.
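
    A brief sketch, assuming the plugin is mixed in via Factory as in the other examples (the literal 0.5 is arbitrary):

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with DoubleLayers with DoubleLiterals with DoubleWeights].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.DoubleWeight(10)
      // With DoubleLiterals enabled, a plain scala.Double such as 0.5
      // can appear directly in a differentiable expression:
      val net = weight + 0.5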

  6. trait DoubleTraining extends Training

    A DeepLearning.scala plugin that enables the train method for neural networks whose loss is a scala.Double.

    Author:

    杨博 (Yang Bo)

  7. trait DoubleWeights extends Weights

    A plugin to create scala.Double weights.

    Author:

    杨博 (Yang Bo)

    Note

    A custom optimization algorithm for updating a DoubleWeight can be implemented by creating a plugin that provides an overridden DoubleOptimizer with an overridden DoubleOptimizer.delta, as sketched below.
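
    A minimal sketch of such a plugin, assuming the XxxApi-trait and abstract-type-member convention that DeepLearning.scala plugins follow (FixedLearningRate and learningRate are hypothetical names):

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      // A hypothetical plugin that scales every weight update by a fixed factor:
      trait FixedLearningRate extends DoubleWeights {
        val learningRate: Double
        trait DoubleOptimizerApi extends super.DoubleOptimizerApi { this: DoubleOptimizer =>
          override def delta: Double = super.delta * learningRate
        }
        override type DoubleOptimizer <: DoubleOptimizerApi with Optimizer
      }
      // Usage sketch: Factory fills in the abstract learningRate as a named parameter:
      // Factory[DoubleTraining with ImplicitsSingleton with Operators
      //   with CumulativeDoubleLayers with FixedLearningRate]
      //   .newInstance(learningRate = 0.01)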

  8. trait FloatLayers extends Layers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Float.

    Author:

    杨博 (Yang Bo)

    Note

    By default, the computation in a FloatLayer will be re-evaluated for each use if the FloatLayer is used by multiple other operations. This behavior is very inefficient when there are diamond dependencies in a neural network; it's wise to use CumulativeFloatLayers instead of this FloatLayers in such a network.

  9. trait FloatLiterals extends AnyRef

    A plugin that enables scala.Float in neural networks.

  10. trait FloatTraining extends Training

    A DeepLearning.scala plugin that enables the train method for neural networks whose loss is a scala.Float.

    Author:

    杨博 (Yang Bo)

  11. trait FloatWeights extends Weights

    A plugin to create scala.Float weights.

    Author:

    杨博 (Yang Bo)

    Note

    A custom optimization algorithm for updating a FloatWeight can be implemented by creating a plugin that provides an overridden FloatOptimizer with an overridden FloatOptimizer.delta, analogous to the DoubleOptimizer sketch above.

  12. trait ImplicitsSingleton extends AnyRef

    A plugin that creates the instance of implicits.

    Any fields and methods in Implicits added by other plugins will be mixed in and present in implicits.
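
    A small sketch of the idea (the plugin list is illustrative):

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with DoubleLayers with DoubleWeights].newInstance()
      // implicits is a single object aggregating the Implicits members
      // contributed by every mixed-in plugin; importing it enables
      // the differentiable operators and the train method:
      import hyperparameters.implicits._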

  13. trait Layers extends AnyRef

    A plugin that enables Layer in neural networks.

  14. trait Logging extends Layers with Weights

    A plugin that logs uncaught exceptions raised from Layer and Weight.

    Author:

    杨博 (Yang Bo)

  15. trait Names extends Layers with Weights

    A plugin that automatically names Layers and Weights.

    Author:

    杨博 (Yang Bo)

  16. trait Operators extends AnyRef

    A plugin that contains definitions of polymorphic functions and methods.

    The implementations of polymorphic functions and methods can be found in FloatLayers.Implicits, DoubleLayers.Implicits and INDArrayLayers.Implicits.

    Author:

    杨博 (Yang Bo)

    See also

    Shapeless's documentation for the underlying mechanism of polymorphic functions.

  17. trait Training extends AnyRef

    A DeepLearning.scala plugin that enables methods defined in DeepLearning.Ops for neural networks.

    Author:

    杨博 (Yang Bo)

  18. trait Weights extends AnyRef

    A plugin that enables Weight in neural networks.

    Author:

    杨博 (Yang Bo)

Value Members

  1. object Logging
  2. object Operators
