Modifier and Type | Method and Description |
---|---|
Layer | Layer.clone() Clone the layer |
Layer | Layer.transpose() Return a transposed copy of the weights/bias (i.e., reverse the number of inputs and outputs on the weights) |
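For context, a minimal sketch of using clone() and transpose() on a layer pulled from an initialized MultiLayerNetwork (network construction omitted; numParams() comes from the Model interface, and transpose() is assumed to be supported by the layer type in question):

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class LayerCopyExample {
    // 'net' is assumed to be an already-initialized MultiLayerNetwork
    static void copyAndTranspose(MultiLayerNetwork net) {
        Layer first = net.getLayer(0);

        // clone() returns an independent copy of the layer
        Layer copy = first.clone();

        // transpose() returns a copy with nIn/nOut swapped on the weights
        Layer flipped = first.transpose();

        System.out.println("Copy params: " + copy.numParams()
                + ", transposed params: " + flipped.numParams());
    }
}
```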
Modifier and Type | Method and Description |
---|---|
void | Layer.merge(Layer layer, int batchSize) Parameter averaging |
void | Updater.setStateViewArray(Layer layer, org.nd4j.linalg.api.ndarray.INDArray viewArray, boolean initialize) Set the internal (historical) state view array for this updater |
int | Updater.stateSizeForLayer(Layer layer) Calculate and return the state size for this updater (for the given layer) |
void | Updater.update(Layer layer, Gradient gradient, int iteration, int miniBatchSize) Updater: updates the model |
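A sketch of the Updater contract for a single layer; UpdaterCreator.getUpdater(...) is assumed as the factory matching this version of the API:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.api.Updater;
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.updater.UpdaterCreator;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class UpdaterStateExample {
    static void applyUpdate(Layer layer, Gradient gradient, int iteration, int miniBatchSize) {
        // Assumed factory for a single-layer updater in this version
        Updater updater = UpdaterCreator.getUpdater(layer);

        // Allocate the updater's historical state (momentum/AdaGrad history, etc.)
        // as one flat view array, sized via stateSizeForLayer
        int stateSize = updater.stateSizeForLayer(layer);
        if (stateSize > 0) {
            INDArray view = Nd4j.zeros(1, stateSize);
            updater.setStateViewArray(layer, view, true);
        }

        // Apply the learning rule to the gradient for this iteration/batch
        updater.update(layer, gradient, iteration, miniBatchSize);
    }
}
```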
Modifier and Type | Interface and Description |
---|---|
interface | IOutputLayer Interface for output layers (those that calculate gradients with respect to a labels array) |
interface | RecurrentLayer Interface for recurrent layers |
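A short sketch of how these sub-interfaces are typically used for type checks; the package locations (org.deeplearning4j.nn.api.layers) are assumed for this version:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.api.layers.IOutputLayer;
import org.deeplearning4j.nn.api.layers.RecurrentLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class LayerKindExample {
    // Classify each layer of a network by the sub-interface it implements
    static void describeLayers(MultiLayerNetwork net) {
        for (Layer l : net.getLayers()) {
            String name = l.getClass().getSimpleName();
            if (l instanceof IOutputLayer) {
                System.out.println(name + ": output layer (computes gradients vs. labels)");
            } else if (l instanceof RecurrentLayer) {
                System.out.println(name + ": recurrent layer");
            } else {
                System.out.println(name + ": feed-forward layer");
            }
        }
    }
}
```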
Modifier and Type | Method and Description |
---|---|
Layer | SubsamplingLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | OutputLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | BatchNormalization.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | GravesLSTM.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | EmbeddingLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | ConvolutionLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | AutoEncoder.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
abstract Layer | Layer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | GravesBidirectionalLSTM.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | RBM.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | LocalResponseNormalization.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | RnnOutputLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | DenseLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | ActivationLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
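instantiate(...) is normally invoked internally by MultiLayerNetwork.init() and ComputationGraph.init(); the sketch below calls it directly on a single configuration. The flat parameter-view length (nIn*nOut weights plus nOut biases) is an assumption about the dense layer's parameter layout:

```java
import java.util.Collections;
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.optimize.api.IterationListener;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class InstantiateExample {
    static Layer buildDense() {
        // Single-layer configuration; instantiate() is normally called
        // for you inside MultiLayerNetwork.init()
        NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
                .layer(new DenseLayer.Builder().nIn(4).nOut(3).build())
                .build();

        // One flat view holding all parameters: 4*3 weights + 3 biases (assumed layout)
        INDArray paramsView = Nd4j.zeros(1, 4 * 3 + 3);

        return conf.getLayer().instantiate(conf,
                Collections.<IterationListener>emptyList(), // no listeners
                0,          // layer index within the network
                paramsView, // flat parameter view for this layer
                true);      // initialize parameters in the view
    }
}
```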
Modifier and Type | Field and Description |
---|---|
protected Layer[] | ComputationGraph.layers A list of layers |
Modifier and Type | Method and Description |
---|---|
Layer | ComputationGraph.getLayer(int idx) Get a layer by index, in the range 0 to getNumLayers()-1. NOTE: this differs from the internal GraphVertex index for the layer |
Layer | ComputationGraph.getLayer(String name) Get a given layer by name |
Layer[] | ComputationGraph.getLayers() Get all layers in the ComputationGraph |
Layer | ComputationGraph.getOutputLayer(int outputLayerIdx) Get the specified output layer, by index |
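A sketch of these lookup methods on a small two-layer graph; whether getLayer(0) returns the same layer as getLayer("dense") depends on layer ordering, which is assumed here:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class GraphLayerLookup {
    public static void main(String[] args) {
        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("in")
                .addLayer("dense", new DenseLayer.Builder().nIn(10).nOut(5).build(), "in")
                .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation("softmax").nIn(5).nOut(3).build(), "dense")
                .setOutputs("out")
                .build();

        ComputationGraph graph = new ComputationGraph(conf);
        graph.init();

        Layer byName = graph.getLayer("dense");  // lookup by layer name
        Layer byIndex = graph.getLayer(0);       // lookup by layer number, not vertex index
        Layer output = graph.getOutputLayer(0);  // first (here: only) output layer
        System.out.println(byName == byIndex);   // same layer if "dense" is layer 0 (assumed)
    }
}
```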
Modifier and Type | Method and Description |
---|---|
Layer | GraphVertex.getLayer() Get the Layer (if any) |
Modifier and Type | Method and Description |
---|---|
Layer | LayerVertex.getLayer() |
Layer | SubsetVertex.getLayer() |
Layer | PreprocessorVertex.getLayer() |
Layer | MergeVertex.getLayer() |
Layer | ElementWiseVertex.getLayer() |
Layer | InputVertex.getLayer() |
Constructor and Description |
---|
LayerVertex(ComputationGraph graph, String name, int vertexIndex, Layer layer, InputPreProcessor layerPreProcessor, boolean outputVertex) Create a layer vertex |
LayerVertex(ComputationGraph graph, String name, int vertexIndex, VertexIndices[] inputVertices, VertexIndices[] outputVertices, Layer layer, InputPreProcessor layerPreProcessor, boolean outputVertex) |
Modifier and Type | Method and Description |
---|---|
Layer | LastTimeStepVertex.getLayer() |
Layer | DuplicateToTimeSeriesVertex.getLayer() |
Modifier and Type | Class and Description |
---|---|
class | ActivationLayer Activation layer. Used to apply an activation function to the input, and the corresponding derivative to epsilon during backprop |
class | BaseLayer<LayerConfT extends Layer> A layer with a bias and activation function |
class | BaseOutputLayer<LayerConfT extends BaseOutputLayer> Output layer supporting different objective (loss) functions |
class | BasePretrainNetwork<LayerConfT extends BasePretrainNetwork> Base class for any neural network used as a layer in a deep network |
class | OutputLayer Output layer supporting different objective (loss) functions |
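A configuration sketch showing a standalone ActivationLayer between a linear DenseLayer and an OutputLayer, assuming a version where the no-argument list() builder and string activation names are available:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.ActivationLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class ActivationLayerConfig {
    static MultiLayerConfiguration build() {
        // A dense layer with identity activation, followed by a standalone
        // ActivationLayer applying ReLU, then an output layer
        return new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new DenseLayer.Builder().nIn(784).nOut(256)
                        .activation("identity").build())
                .layer(1, new ActivationLayer.Builder().activation("relu").build())
                .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .activation("softmax").nIn(256).nOut(10).build())
                .build();
    }
}
```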
Modifier and Type | Method and Description |
---|---|
Layer | BaseLayer.clone() |
Layer | BaseLayer.transpose() |
Layer | ActivationLayer.transpose() |
Modifier and Type | Method and Description |
---|---|
void | BaseLayer.merge(Layer l, int batchSize) Averages the parameters of the given layer (from a mini-batch) into this layer |
void | ActivationLayer.merge(Layer layer, int batchSize) |
Modifier and Type | Class and Description |
---|---|
class | ConvolutionLayer Convolution layer |
Modifier and Type | Method and Description |
---|---|
Layer | ConvolutionLayer.transpose() |
Modifier and Type | Method and Description |
---|---|
void | ConvolutionLayer.merge(Layer layer, int batchSize) |
Modifier and Type | Class and Description |
---|---|
class | SubsamplingLayer Subsampling layer |
Modifier and Type | Method and Description |
---|---|
Layer | SubsamplingLayer.transpose() |
Modifier and Type | Method and Description |
---|---|
void | SubsamplingLayer.merge(Layer layer, int batchSize) |
Modifier and Type | Class and Description |
---|---|
class | AutoEncoder Autoencoder |
Modifier and Type | Class and Description |
---|---|
class | DenseLayer |
Modifier and Type | Class and Description |
---|---|
class | EmbeddingLayer Embedding layer: feed-forward layer that expects single integers per example as input (class numbers, in range 0 to numClass-1) |
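A sketch of an EmbeddingLayer configuration; note that nIn is the number of classes (each integer input must lie in 0 to nIn-1) and nOut is the embedding dimension:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class EmbeddingConfig {
    static MultiLayerConfiguration build(int numClasses, int embeddingDim, int numLabels) {
        // The embedding layer maps each integer index (0 .. numClasses-1)
        // to a dense vector of size embeddingDim
        return new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new EmbeddingLayer.Builder()
                        .nIn(numClasses).nOut(embeddingDim).build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation("softmax").nIn(embeddingDim).nOut(numLabels).build())
                .build();
    }
}
```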
Modifier and Type | Class and Description |
---|---|
class | RBM Restricted Boltzmann Machine |
Modifier and Type | Method and Description |
---|---|
Layer | RBM.transpose() |
Modifier and Type | Class and Description |
---|---|
class | BatchNormalization Batch normalization layer |
class | LocalResponseNormalization Local response normalization ("brightness normalization"), which normalizes activations between layers; used in nets such as AlexNet |
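A configuration sketch for both normalization layers; the builder hyperparameters shown (decay for BatchNormalization; k, n, alpha, beta for LocalResponseNormalization) are assumed to exist in this version:

```java
import org.deeplearning4j.nn.conf.layers.BatchNormalization;
import org.deeplearning4j.nn.conf.layers.LocalResponseNormalization;

public class NormalizationConfig {
    static void build() {
        // Batch normalization with a custom decay for the running
        // mean/variance estimates used at test time
        BatchNormalization bn = new BatchNormalization.Builder()
                .decay(0.95)
                .build();

        // Local response normalization with AlexNet-style hyperparameters
        LocalResponseNormalization lrn = new LocalResponseNormalization.Builder()
                .k(2).n(5).alpha(1e-4).beta(0.75)
                .build();
    }
}
```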
Modifier and Type | Method and Description |
---|---|
Layer | BatchNormalization.clone() |
Layer | BatchNormalization.transpose() |
Layer | LocalResponseNormalization.transpose() |
Modifier and Type | Method and Description |
---|---|
void | BatchNormalization.merge(Layer layer, int batchSize) |
void | LocalResponseNormalization.merge(Layer layer, int batchSize) |
Modifier and Type | Class and Description |
---|---|
class | BaseRecurrentLayer<LayerConfT extends Layer> |
class | GravesBidirectionalLSTM Bidirectional LSTM layer implementation. RNN tutorial (read this first): http://deeplearning4j.org/usingrnns.html |
class | GravesLSTM LSTM layer implementation |
class | RnnOutputLayer Recurrent neural network output layer. Handles gradient calculation for various objective functions. Functionally the same as OutputLayer, but handles output and label reshaping automatically. Input and output activations are the same as for other RNN layers: 3 dimensions, with shapes [miniBatchSize,nIn,timeSeriesLength] and [miniBatchSize,nOut,timeSeriesLength] respectively |
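A sketch pairing GravesLSTM with RnnOutputLayer, which consume and produce the 3d [miniBatchSize, size, timeSeriesLength] activations described above:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class RnnConfig {
    static MultiLayerConfiguration build(int nIn, int hidden, int nOut) {
        // LSTM feeding an RnnOutputLayer; the output layer handles the
        // label/output reshaping for the time-series case automatically
        return new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new GravesLSTM.Builder()
                        .nIn(nIn).nOut(hidden).activation("tanh").build())
                .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation("softmax").nIn(hidden).nOut(nOut).build())
                .build();
    }
}
```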
Modifier and Type | Method and Description |
---|---|
Layer | GravesLSTM.transpose() |
Layer | GravesBidirectionalLSTM.transpose() |
Modifier and Type | Method and Description |
---|---|
static FwdPassReturn | LSTMHelpers.activateHelper(Layer layer, NeuralNetConfiguration conf, org.nd4j.linalg.api.ndarray.INDArray input, org.nd4j.linalg.api.ndarray.INDArray recurrentWeights, org.nd4j.linalg.api.ndarray.INDArray originalInputWeights, org.nd4j.linalg.api.ndarray.INDArray biases, boolean training, org.nd4j.linalg.api.ndarray.INDArray originalPrevOutputActivations, org.nd4j.linalg.api.ndarray.INDArray originalPrevMemCellState, boolean forBackprop, boolean forwards, String inputWeightKey) Returns a FwdPassReturn object with activations/INDArrays |
Modifier and Type | Class and Description |
---|---|
class | MultiLayerNetwork MultiLayerNetwork is a neural network with multiple layers in a stack, and usually an output layer |
Modifier and Type | Field and Description |
---|---|
protected Layer[] | MultiLayerNetwork.layers |
Modifier and Type | Field and Description |
---|---|
protected LinkedHashMap<String,Layer> | MultiLayerNetwork.layerMap |
Modifier and Type | Method and Description |
---|---|
Layer | MultiLayerNetwork.getLayer(int i) |
Layer | MultiLayerNetwork.getLayer(String name) |
Layer[] | MultiLayerNetwork.getLayers() |
Layer | MultiLayerNetwork.getOutputLayer() Get the output layer |
Layer | MultiLayerNetwork.transpose() |
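A sketch using the getters above to inspect an initialized network:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class InspectNetwork {
    // Walk an initialized network and print per-layer parameter counts
    static void inspect(MultiLayerNetwork net) {
        Layer[] layers = net.getLayers();
        for (int i = 0; i < layers.length; i++) {
            System.out.println("Layer " + i + " ("
                    + layers[i].getClass().getSimpleName() + "): "
                    + layers[i].numParams() + " parameters");
        }
        System.out.println("Output layer: "
                + net.getOutputLayer().getClass().getSimpleName());
    }
}
```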
Modifier and Type | Method and Description |
---|---|
void | MultiLayerNetwork.merge(Layer layer, int batchSize) Averages the parameters of the given layer (from a mini-batch) into this network |
void | MultiLayerNetwork.setLayers(Layer[] layers) |
Modifier and Type | Method and Description |
---|---|
void | LayerUpdater.applyLrDecayPolicy(LearningRatePolicy decay, Layer layer, int iteration, String variable) Update the learning rate based on the decay policy |
void | BaseUpdater.applyLrDecayPolicy(LearningRatePolicy decay, Layer layer, int iteration, String variable) Deprecated. Update the learning rate based on the decay policy |
void | LayerUpdater.applyMomentumDecayPolicy(Layer layer, int iteration, String variable) Update momentum if a schedule exists |
void | BaseUpdater.applyMomentumDecayPolicy(Layer layer, int iteration, String variable) Deprecated. Update momentum if a schedule exists |
org.nd4j.linalg.learning.GradientUpdater | AdaGradUpdater.init(String variable, Layer layer) Deprecated. |
org.nd4j.linalg.learning.GradientUpdater | SgdUpdater.init(String variable, Layer layer) Deprecated. |
org.nd4j.linalg.learning.GradientUpdater | NoOpUpdater.init(String variable, Layer layer) Deprecated. |
org.nd4j.linalg.learning.GradientUpdater | NesterovsUpdater.init(String variable, Layer layer) Deprecated. |
org.nd4j.linalg.learning.GradientUpdater | RmsPropUpdater.init(String variable, Layer layer) Deprecated. |
org.nd4j.linalg.learning.GradientUpdater | LayerUpdater.init(String variable, Layer layer) |
org.nd4j.linalg.learning.GradientUpdater | AdamUpdater.init(String variable, Layer layer) Deprecated. |
abstract org.nd4j.linalg.learning.GradientUpdater | BaseUpdater.init(String variable, Layer layer) Deprecated. |
org.nd4j.linalg.learning.GradientUpdater | AdaDeltaUpdater.init(String variable, Layer layer) Deprecated. |
void | LayerUpdater.postApply(Layer layer, org.nd4j.linalg.api.ndarray.INDArray gradient, String param, int miniBatchSize) Apply the regularization |
void | BaseUpdater.postApply(Layer layer, org.nd4j.linalg.api.ndarray.INDArray gradient, String param, int miniBatchSize) Deprecated. Apply the regularization |
void | LayerUpdater.preApply(Layer layer, Gradient gradient, int iteration) Apply gradient normalization: scale based on L2, clipping, etc. |
void | BaseUpdater.preApply(Layer layer, Gradient gradient, int iteration) Deprecated. Apply gradient normalization: scale based on L2, clipping, etc. |
void | LayerUpdater.setStateViewArray(Layer layer, org.nd4j.linalg.api.ndarray.INDArray viewArray, boolean initialize) |
void | MultiLayerUpdater.setStateViewArray(Layer layer, org.nd4j.linalg.api.ndarray.INDArray viewArray, boolean initialize) |
void | BaseUpdater.setStateViewArray(Layer layer, org.nd4j.linalg.api.ndarray.INDArray viewArray, boolean initialize) Deprecated. |
int | LayerUpdater.stateSizeForLayer(Layer layer) |
int | MultiLayerUpdater.stateSizeForLayer(Layer layer) |
int | BaseUpdater.stateSizeForLayer(Layer layer) Deprecated. |
void | LayerUpdater.update(Layer layer, Gradient gradient, int iteration, int miniBatchSize) |
void | MultiLayerUpdater.update(Layer layer, Gradient gradient, int iteration, int batchSize) |
void | BaseUpdater.update(Layer layer, Gradient gradient, int iteration, int miniBatchSize) Deprecated. |
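A sketch of moving updater state between two identically-configured networks via the flat state view; getStateViewArray() is assumed as the matching accessor on the Updater interface:

```java
import org.deeplearning4j.nn.api.Updater;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;

public class SharedUpdaterState {
    // Copy updater state (e.g. momentum history) from one network to an
    // identically-configured one, via the flat state view array
    static void copyUpdaterState(MultiLayerNetwork from, MultiLayerNetwork to) {
        Updater source = from.getUpdater();
        Updater target = to.getUpdater();

        INDArray state = source.getStateViewArray(); // assumed accessor for the flat state
        if (state != null) {
            // initialize=false: adopt the existing values rather than resetting them
            target.setStateViewArray(to, state.dup(), false);
        }
    }
}
```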
Modifier and Type | Method and Description |
---|---|
static Layer | NetSaverLoaderUtils.loadLayerParameters(Layer layer, String paramPath) Load existing parameters into the layer |
Modifier and Type | Method and Description |
---|---|
static org.nd4j.linalg.api.ndarray.INDArray | Dropout.applyDropConnect(Layer layer, String variable) Apply drop connect to the given variable |
static Layer | NetSaverLoaderUtils.loadLayerParameters(Layer layer, String paramPath) Load existing parameters into the layer |
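A sketch of restoring a single layer's parameters with NetSaverLoaderUtils; the package (org.deeplearning4j.util) and the saved-file format are assumptions based on how the saver utilities are typically paired:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.NetSaverLoaderUtils;

public class ReloadLayerParams {
    // Restore one layer's parameters from a file previously written by the
    // matching saver utility; paramPath points at that saved file
    static void reloadFirstLayer(MultiLayerNetwork net, String paramPath) {
        Layer first = net.getLayer(0);
        Layer restored = NetSaverLoaderUtils.loadLayerParameters(first, paramPath);
        System.out.println("Restored " + restored.numParams() + " parameters");
    }
}
```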