package plugins
Type Members
-
trait
Builtins
extends ImplicitsSingleton with Layers with Weights with Logging with Names with Operators with FloatTraining with FloatLiterals with FloatWeights with FloatLayers with CumulativeFloatLayers with DoubleTraining with DoubleLiterals with DoubleWeights with DoubleLayers with CumulativeDoubleLayers
A plugin that enables all other DeepLearning.scala built-in plugins.
Author:
杨博 (Yang Bo)
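For instance, a minimal sketch (assuming the com.thoughtworks.feature.Factory pattern used by the examples further down this page) that enables every built-in plugin at once:

```scala
import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory

// Instantiate a hyperparameter object with every built-in plugin mixed in.
val hyperparameters = Factory[Builtins].newInstance()

// Import the implicit conversions and type classes provided by the plugins.
import hyperparameters.implicits._

// Both Float and Double weights are now available.
val weight = hyperparameters.DoubleWeight(1.0)
```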
-
trait
CumulativeDoubleLayers
extends DoubleLayers
A plugin that provides differentiable operators on neural networks whose Data and Delta is scala.Double.
Author:
杨博 (Yang Bo)
Given a DoubleWeight,

import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
import hyperparameters.implicits._
val weight1 = hyperparameters.DoubleWeight(10)

then the training result should be applied on it:

weight1.train.map { result =>
  result should be(10.0)
  weight1.data should be < 10.0
}

Given two DoubleWeights,

import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
import hyperparameters.implicits._
val weight1 = hyperparameters.DoubleWeight(10)
val weight2 = hyperparameters.DoubleWeight(300)

when adding them together,

val weight1PlusWeight2 = weight1 + weight2

then the training result should be applied on both weights:

weight1PlusWeight2.train.map { result =>
  result should be(310.0)
  weight2.data should be < 300.0
  weight1.data should be < 10.0
}
- Note
Unlike DoubleLayers, a DoubleLayer in this CumulativeDoubleLayers will share the Tapes created in the forward pass among all dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.
-
trait
CumulativeFloatLayers
extends FloatLayers
A plugin that provides differentiable operators on neural networks whose Data and Delta is scala.Float.
Author:
杨博 (Yang Bo)
Given a FloatWeight,

import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with CumulativeFloatLayers with FloatWeights].newInstance()
import hyperparameters.implicits._
val weight1 = hyperparameters.FloatWeight(10)

then the training result should be applied on it:

weight1.train.map { result =>
  result should be(10.0f)
  weight1.data should be < 10.0f
}

Given two FloatWeights,

import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with CumulativeFloatLayers with FloatWeights].newInstance()
import hyperparameters.implicits._
val weight1 = hyperparameters.FloatWeight(10)
val weight2 = hyperparameters.FloatWeight(300)

when adding them together,

val weight1PlusWeight2 = weight1 + weight2

then the training result should be applied on both weights:

weight1PlusWeight2.train.map { result =>
  result should be(310.0f)
  weight2.data should be < 300.0f
  weight1.data should be < 10.0f
}
- Note
Unlike FloatLayers, a FloatLayer in this CumulativeFloatLayers will share the Tapes created in the forward pass among all dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.
-
trait
DoubleLayers
extends Layers
A plugin that provides differentiable operators on neural networks whose Data and Delta is scala.Double.
Author:
杨博 (Yang Bo)
- Note
By default, the computation in a DoubleLayer will be re-evaluated again and again if the
DoubleLayer
is used by multiple other operations. This behavior is very inefficient if there are diamond dependencies in a neural network. It's wise to use CumulativeDoubleLayers instead of this DoubleLayers
in such a neural network.
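For example (a sketch reusing the Factory setup from the examples above), a diamond dependency arises whenever one layer feeds two downstream operations:

```scala
import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory

val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
import hyperparameters.implicits._

val weight = hyperparameters.DoubleWeight(3.0)

// `shared` feeds into `diamond` twice, forming a diamond dependency.
// Under plain DoubleLayers the forward pass of `shared` would be
// evaluated once per use; with CumulativeDoubleLayers its Tape is
// created once and shared among both uses.
val shared = weight * weight
val diamond = shared + shared
```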
-
trait
DoubleLiterals
extends AnyRef
A plugin that enables scala.Double in neural networks.
-
trait
DoubleTraining
extends Training
A DeepLearning.scala plugin that enables train method for neural networks whose loss is a scala.Double.
Author:
杨博 (Yang Bo)
-
trait
DoubleWeights
extends Weights
A plugin to create scala.Double weights.
Author:
杨博 (Yang Bo)
- Note
A custom optimization algorithm for updating a DoubleWeight can be implemented by creating a plugin that provides an overridden DoubleOptimizer with an overridden DoubleOptimizer.delta.
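A hypothetical sketch of such a plugin (the exact member names inside the optimizer API are assumptions following the pattern described above; consult the DoubleWeights source for the real signatures):

```scala
import com.thoughtworks.deeplearning.plugins._

// Hypothetical plugin: scale every update by a fixed learning rate
// by overriding DoubleOptimizer.delta.
trait FixedLearningRateDouble extends DoubleWeights {
  def learningRate: Double

  trait DoubleOptimizerApi extends super.DoubleOptimizerApi { this: DoubleOptimizer =>
    // super.delta is assumed to be the raw gradient for this weight.
    override def delta: Double = super.delta * learningRate
  }
  override type DoubleOptimizer <: DoubleOptimizerApi with Optimizer
}
```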
-
trait
FloatLayers
extends Layers
A plugin that provides differentiable operators on neural networks whose Data and Delta is scala.Float.
Author:
杨博 (Yang Bo)
- Note
By default, the computation in a FloatLayer will re-evaluate again and again if the
FloatLayer
is used by multiple other operations. This behavior is very inefficient if there is are diamond dependencies in a neural network. It's wise to use CumulativeFloatLayers instead of thisFloatLayers
in such neural network.
-
trait
FloatLiterals
extends AnyRef
A plugin that enables scala.Float in neural networks.
-
trait
FloatTraining
extends Training
A DeepLearning.scala plugin that enables train method for neural networks whose loss is a scala.Float.
Author:
杨博 (Yang Bo)
-
trait
FloatWeights
extends Weights
A plugin to create scala.Float weights.
Author:
杨博 (Yang Bo)
- Note
A custom optimization algorithm for updating a FloatWeight can be implemented by creating a plugin that provides an overridden FloatOptimizer with an overridden FloatOptimizer.delta.
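As a second hypothetical sketch (member names are assumptions following the same pattern as the DoubleWeights note above), a momentum update can be layered on in the same way:

```scala
import com.thoughtworks.deeplearning.plugins._

// Hypothetical plugin: momentum update for FloatWeight, overriding
// FloatOptimizer.delta. The velocity accumulates across updates.
trait MomentumFloat extends FloatWeights {
  def momentum: Float
  def learningRate: Float

  trait FloatOptimizerApi extends super.FloatOptimizerApi { this: FloatOptimizer =>
    private var velocity = 0.0f
    // super.delta is assumed to be the raw gradient for this weight.
    override def delta: Float = {
      velocity = momentum * velocity + learningRate * super.delta
      velocity
    }
  }
  override type FloatOptimizer <: FloatOptimizerApi with Optimizer
}
```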
-
trait
ImplicitsSingleton
extends AnyRef
A plugin that creates the instance of implicits.
-
trait
Layers
extends AnyRef
A plugin that enables Layer in neural networks.
-
trait
Logging
extends Layers with Weights
A plugin that logs uncaught exceptions raised from Layer and Weight.
-
trait
Names
extends Layers with Weights
A plugin that assigns names to Layers and Weights.
-
trait
Operators
extends AnyRef
A plugin that contains definitions of polymorphic functions and methods.
The implementations of polymorphic functions and methods can be found in FloatLayers.Implicits, DoubleLayers.Implicits and INDArrayLayers.Implicits.
Author:
杨博 (Yang Bo)
- See also
Shapeless's documentation for the underlying mechanism of polymorphic functions.
-
trait
Training
extends AnyRef
A DeepLearning.scala plugin that enables methods defined in DeepLearning.Ops for neural networks.
Author:
杨博 (Yang Bo)
-
trait
Weights
extends AnyRef
A plugin that enables Weight in neural networks.
Author:
杨博 (Yang Bo)