Modifier and Type | Method and Description |
---|---|
static boolean | GradientCheckUtil.checkGradientsPretrainLayer(Layer layer, double epsilon, double maxRelError, double minAbsoluteError, boolean print, boolean exitOnFirstError, org.nd4j.linalg.api.ndarray.INDArray input, int rngSeed) - Check backprop gradients for a pretrain layer. NOTE: gradient checking pretrain layers can be difficult... |
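For illustration, a minimal sketch of invoking this check on an already-instantiated pretrain layer; the method name `check` and the arguments `pretrainLayer` and `features` are placeholders, not part of this API:

```java
import org.deeplearning4j.gradientcheck.GradientCheckUtil;
import org.deeplearning4j.nn.api.Layer;
import org.nd4j.linalg.api.ndarray.INDArray;

public class PretrainGradientCheck {
    /** Runs a numerical gradient check on an already-instantiated pretrain layer. */
    public static boolean check(Layer pretrainLayer, INDArray features) {
        return GradientCheckUtil.checkGradientsPretrainLayer(
                pretrainLayer,
                1e-6,      // epsilon: finite-difference perturbation size
                1e-3,      // maxRelError: maximum acceptable relative error
                1e-8,      // minAbsoluteError: ignore differences below this
                true,      // print per-parameter results
                false,     // don't exit on the first failing parameter
                features,  // input used for the forward/backward pass
                12345);    // RNG seed, for reproducibility
    }
}
```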
Modifier and Type | Method and Description |
---|---|
Layer | Layer.clone() - Clone the layer |
Layer[] | NeuralNetworkPrototype.getLayers() |
Layer | Layer.transpose() - Return a transposed copy of the weights/bias (this means reverse the number of inputs and outputs on the weights) |
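A hedged sketch of these two copy operations on an existing layer instance (the `layer` argument is assumed to have been built elsewhere):

```java
import org.deeplearning4j.nn.api.Layer;

public class LayerCopies {
    /** Sketch: clone a layer, and build its transposed counterpart. */
    public static void demo(Layer layer) {
        Layer copy = layer.clone();        // independent copy of the layer
        Layer flipped = layer.transpose(); // weights with inputs/outputs reversed
    }
}
```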
Modifier and Type | Method and Description |
---|---|
void | Layer.merge(Layer layer, int batchSize) - Deprecated. As of 0.7.3 - Feb 2017. No longer used; merging (for parameter averaging) is done via alternative means |
void | Updater.setStateViewArray(Layer layer, org.nd4j.linalg.api.ndarray.INDArray viewArray, boolean initialize) - Set the internal (historical) state view array for this updater |
void | Updater.update(Layer layer, Gradient gradient, int iteration, int miniBatchSize) - Updater: updates the model |
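A minimal sketch of an updater step, using only the signatures listed above; all arguments are assumed to have been computed by a surrounding training loop:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.api.Updater;
import org.deeplearning4j.nn.gradient.Gradient;

public class UpdaterStep {
    /** Sketch: apply an updater to a layer's computed gradient. */
    public static void applyUpdate(Updater updater, Layer layer, Gradient gradient,
                                   int iteration, int miniBatchSize) {
        // Modifies the gradient in place (learning rate, momentum, clipping, ...)
        updater.update(layer, gradient, iteration, miniBatchSize);
    }
}
```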
Modifier and Type | Interface and Description |
---|---|
interface | IOutputLayer - Interface for output layers (those that calculate gradients with respect to a labels array) |
interface | RecurrentLayer - Interface for recurrent layers |
Modifier and Type | Method and Description |
---|---|
Layer | AutoEncoder.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | SubsamplingLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | OutputLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
abstract Layer | Layer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | LSTM.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | RBM.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | EmbeddingLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | BatchNormalization.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | RnnOutputLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | GravesBidirectionalLSTM.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | GravesLSTM.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | GlobalPoolingLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | ActivationLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | LocalResponseNormalization.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | ConvolutionLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | DropoutLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | CenterLossOutputLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | Subsampling1DLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | LossLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | Convolution1DLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | ZeroPaddingLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Layer | DenseLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
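These overrides are normally called for you by MultiLayerNetwork.init(); a hedged sketch of calling one directly for a DenseLayer configuration, assuming the 0.9.x-era ParamInitializer.numParams(NeuralNetConfiguration) signature:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ManualInstantiate {
    /** Sketch: manually instantiating a dense layer from its configuration. */
    public static Layer build() {
        NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
                .layer(new DenseLayer.Builder().nIn(10).nOut(5).build())
                .build();
        // Allocate the flat parameter view array the layer will use.
        int numParams = conf.getLayer().initializer().numParams(conf);
        INDArray paramsView = Nd4j.create(1, numParams);
        // iterationListeners = null, layerIndex = 0, initializeParams = true
        return conf.getLayer().instantiate(conf, null, 0, paramsView, true);
    }
}
```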
Modifier and Type | Method and Description |
---|---|
Layer | FrozenLayer.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Modifier and Type | Method and Description |
---|---|
Layer | VariationalAutoencoder.instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
Modifier and Type | Field and Description |
---|---|
protected Layer[] | ComputationGraph.layers - A list of layers |
Modifier and Type | Method and Description |
---|---|
Layer | ComputationGraph.getLayer(int idx) - Get the layer by index, in range 0 to getNumLayers()-1. NOTE: this index is different from the internal GraphVertex index for the layer |
Layer | ComputationGraph.getLayer(String name) - Get a given layer by name |
Layer[] | ComputationGraph.getLayers() - Get all layers in the ComputationGraph |
Layer | ComputationGraph.getOutputLayer(int outputLayerIdx) - Get the specified output layer, by index |
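A sketch of the lookup methods above on an initialized graph; the layer name "dense1" is hypothetical:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.graph.ComputationGraph;

public class GraphLayerLookup {
    /** Sketch: the different ways of retrieving layers from an initialized graph. */
    public static void inspect(ComputationGraph graph) {
        Layer first  = graph.getLayer(0);        // by index, 0 to getNumLayers()-1
        Layer byName = graph.getLayer("dense1"); // by configured layer name
        Layer[] all  = graph.getLayers();        // every layer in the graph
        Layer out    = graph.getOutputLayer(0);  // first output layer
    }
}
```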
Modifier and Type | Method and Description |
---|---|
Layer | GraphVertex.getLayer() - Get the Layer (if any) |
Modifier and Type | Method and Description |
---|---|
Layer | ScaleVertex.getLayer() |
Layer | PreprocessorVertex.getLayer() |
Layer | PoolHelperVertex.getLayer() |
Layer | LayerVertex.getLayer() |
Layer | MergeVertex.getLayer() |
Layer | InputVertex.getLayer() |
Layer | L2NormalizeVertex.getLayer() |
Layer | SubsetVertex.getLayer() |
Layer | ShiftVertex.getLayer() |
Layer | ElementWiseVertex.getLayer() |
Layer | UnstackVertex.getLayer() |
Layer | StackVertex.getLayer() |
Layer | L2Vertex.getLayer() |
Layer | ReshapeVertex.getLayer() |
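Only LayerVertex wraps a layer, so getLayer() yields null for the other vertex types. A sketch of filtering on that; note that getVertices() and getVertexName() are assumed from the wider ComputationGraph/GraphVertex API and are not listed on this page:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.graph.vertex.GraphVertex;

public class VertexLayers {
    /** Sketch: print the names of vertices that actually wrap a layer. */
    public static void printLayerVertices(ComputationGraph graph) {
        for (GraphVertex vertex : graph.getVertices()) {
            Layer layer = vertex.getLayer(); // null for Merge/Subset/Stack/... vertices
            if (layer != null) {
                System.out.println(vertex.getVertexName());
            }
        }
    }
}
```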
Constructor and Description |
---|
LayerVertex(ComputationGraph graph, String name, int vertexIndex, Layer layer, InputPreProcessor layerPreProcessor, boolean outputVertex) - Create a layer vertex |
LayerVertex(ComputationGraph graph, String name, int vertexIndex, VertexIndices[] inputVertices, VertexIndices[] outputVertices, Layer layer, InputPreProcessor layerPreProcessor, boolean outputVertex) |
Modifier and Type | Method and Description |
---|---|
Layer | DuplicateToTimeSeriesVertex.getLayer() |
Layer | LastTimeStepVertex.getLayer() |
Modifier and Type | Class and Description |
---|---|
class | AbstractLayer<LayerConfT extends Layer> - A layer with input and output, but no parameters or gradients |
class | ActivationLayer - Activation layer: applies an activation function to the input, and the corresponding derivative to epsilon during backprop |
class | BaseLayer<LayerConfT extends BaseLayer> - A layer with parameters |
class | BaseOutputLayer<LayerConfT extends BaseOutputLayer> - Output layer with different objective co-occurrences for different objectives |
class | BasePretrainNetwork<LayerConfT extends BasePretrainNetwork> - Baseline class for any neural network used as a layer in a deep network |
class | DropoutLayer - Layer that applies dropout to its input |
class | FrozenLayer - For purposes of transfer learning: a frozen layer wraps another DL4J layer within it |
class | LossLayer - A flexible output "layer" that applies a loss function to its input without MLP logic |
class | OutputLayer - Output layer with different objective co-occurrences for different objectives |
Modifier and Type | Method and Description |
---|---|
Layer | FrozenLayer.clone() |
Layer | ActivationLayer.clone() |
abstract Layer | AbstractLayer.clone() |
Layer | BaseLayer.clone() |
Layer | FrozenLayer.getInsideLayer() |
Layer | FrozenLayer.transpose() |
Layer | ActivationLayer.transpose() |
Layer | DropoutLayer.transpose() |
Layer | AbstractLayer.transpose() |
Layer | LossLayer.transpose() |
Layer | BaseLayer.transpose() |
Modifier and Type | Method and Description |
---|---|
void | FrozenLayer.merge(Layer layer, int batchSize) |
void | ActivationLayer.merge(Layer layer, int batchSize) |
void | DropoutLayer.merge(Layer layer, int batchSize) |
void | AbstractLayer.merge(Layer l, int batchSize) - Averages the given layer's parameters from a mini-batch into this layer |
void | LossLayer.merge(Layer layer, int batchSize) |
void | BaseLayer.merge(Layer l, int batchSize) - Averages the given layer's parameters from a mini-batch into this layer |
Constructor and Description |
---|
FrozenLayer(Layer insideLayer) |
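A sketch of the transfer-learning wrapper in action, using only the constructor and getInsideLayer() listed above; `innerLayer` is assumed to exist:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.layers.FrozenLayer;

public class Freeze {
    /** Sketch: wrap an existing layer so its parameters stay fixed during training. */
    public static Layer freeze(Layer innerLayer) {
        FrozenLayer frozen = new FrozenLayer(innerLayer);
        Layer unwrapped = frozen.getInsideLayer(); // the original layer is retained
        return frozen;
    }
}
```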
Modifier and Type | Class and Description |
---|---|
class | Convolution1DLayer - 1D (temporal) convolutional layer |
class | ConvolutionLayer - Convolution layer |
class | ZeroPaddingLayer - Zero padding layer for convolutional neural networks |
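A hedged configuration sketch for a convolution layer; this uses the identically named configuration class org.deeplearning4j.nn.conf.layers.ConvolutionLayer (not the implementation class listed above), and the sizes are illustrative:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class ConvConfig {
    /** Sketch: a 5x5 convolution with 20 filters over single-channel input. */
    public static ConvolutionLayer build() {
        return new ConvolutionLayer.Builder(5, 5) // 5x5 kernel
                .nIn(1)       // input depth (e.g. grayscale images)
                .nOut(20)     // number of filters
                .stride(1, 1)
                .build();
    }
}
```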
Modifier and Type | Method and Description |
---|---|
Layer | ZeroPaddingLayer.clone() |
Layer | ConvolutionLayer.transpose() |
Modifier and Type | Method and Description |
---|---|
void | ConvolutionLayer.merge(Layer layer, int batchSize) |
Modifier and Type | Class and Description |
---|---|
class | Subsampling1DLayer - 1D (temporal) subsampling layer |
class | SubsamplingLayer - Subsampling layer |
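A hedged configuration sketch for a subsampling (pooling) layer, again via the identically named configuration class org.deeplearning4j.nn.conf.layers.SubsamplingLayer; values are illustrative:

```java
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;

public class PoolConfig {
    /** Sketch: non-overlapping 2x2 max pooling. */
    public static SubsamplingLayer build() {
        return new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                .kernelSize(2, 2) // pool over 2x2 windows
                .stride(2, 2)     // non-overlapping
                .build();
    }
}
```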
Modifier and Type | Method and Description |
---|---|
Layer | SubsamplingLayer.clone() |
Layer | SubsamplingLayer.transpose() |
Modifier and Type | Method and Description |
---|---|
void | SubsamplingLayer.merge(Layer layer, int batchSize) |
Modifier and Type | Class and Description |
---|---|
class | AutoEncoder - Autoencoder |
Modifier and Type | Class and Description |
---|---|
class | DenseLayer |
Modifier and Type | Class and Description |
---|---|
class | EmbeddingLayer - Embedding layer: a feed-forward layer that expects single integers per example as input (class numbers, in range 0 to numClass-1) |
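A hedged configuration sketch via org.deeplearning4j.nn.conf.layers.EmbeddingLayer (the configuration counterpart of the class above); sizes are illustrative:

```java
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;

public class EmbeddingConfig {
    /** Sketch: map 1000 class indices to 64-dimensional vectors. */
    public static EmbeddingLayer build() {
        return new EmbeddingLayer.Builder()
                .nIn(1000) // number of classes: valid inputs are ints in [0, 999]
                .nOut(64)  // embedding dimension
                .build();
    }
}
```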
Modifier and Type | Class and Description |
---|---|
class | RBM - Restricted Boltzmann Machine |
Modifier and Type | Method and Description |
---|---|
Layer | RBM.transpose() - Deprecated. |
Modifier and Type | Class and Description |
---|---|
class | BatchNormalization - Batch normalization layer |
class | LocalResponseNormalization - Local response normalization: normalizes activations across adjacent channels ("brightness normalization"), used in nets like AlexNet |
Modifier and Type | Method and Description |
---|---|
Layer | BatchNormalization.clone() |
Layer | LocalResponseNormalization.clone() |
Layer | BatchNormalization.transpose() |
Layer | LocalResponseNormalization.transpose() |
Modifier and Type | Method and Description |
---|---|
void | BatchNormalization.merge(Layer layer, int batchSize) |
void | LocalResponseNormalization.merge(Layer layer, int batchSize) |
Modifier and Type | Class and Description |
---|---|
class | GlobalPoolingLayer - Global pooling layer: used to do pooling over time for RNNs, and 2d pooling for CNNs. Supports the following PoolingType values: SUM, AVG, MAX, PNORM. Can also handle mask arrays when dealing with variable-length inputs |
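A hedged configuration sketch via org.deeplearning4j.nn.conf.layers.GlobalPoolingLayer and PoolingType (the configuration counterparts of the class above):

```java
import org.deeplearning4j.nn.conf.layers.GlobalPoolingLayer;
import org.deeplearning4j.nn.conf.layers.PoolingType;

public class GlobalPoolConfig {
    /** Sketch: global max pooling over time (RNN) or space (CNN). */
    public static GlobalPoolingLayer build() {
        return new GlobalPoolingLayer.Builder()
                .poolingType(PoolingType.MAX) // SUM, AVG, MAX or PNORM
                .build();
    }
}
```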
Modifier and Type | Method and Description |
---|---|
Layer | GlobalPoolingLayer.clone() |
Modifier and Type | Class and Description |
---|---|
class | BaseRecurrentLayer<LayerConfT extends BaseLayer> |
class | GravesBidirectionalLSTM - Bidirectional LSTM layer implementation. RNN tutorial (read this first): http://deeplearning4j.org/usingrnns.html |
class | GravesLSTM - LSTM layer implementation |
class | LSTM - LSTM layer implementation |
class | RnnOutputLayer - Recurrent neural network output layer. Handles calculation of gradients etc. for various objective functions. Functionally the same as OutputLayer, but handles output and label reshaping automatically. Input and output activations are the same as for other RNN layers: 3 dimensions, with shape [miniBatchSize,nIn,timeSeriesLength] and [miniBatchSize,nOut,timeSeriesLength] respectively |
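A hedged sketch combining the recurrent classes above, via their identically named configuration classes in org.deeplearning4j.nn.conf.layers; layer sizes and the loss function are illustrative:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class RnnConfig {
    /** Sketch: a GravesLSTM feeding an RnnOutputLayer over 3D time-series input. */
    public static MultiLayerConfiguration build() {
        return new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new GravesLSTM.Builder().nIn(10).nOut(50).build())
                .layer(1, new RnnOutputLayer.Builder(LossFunction.MCXENT)
                        .activation(Activation.SOFTMAX)
                        .nIn(50).nOut(5).build())
                .build();
    }
}
```

Input to such a network is 3D with shape [miniBatchSize, nIn, timeSeriesLength], as noted in the RnnOutputLayer description above.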
Modifier and Type | Method and Description |
---|---|
Layer | LSTM.transpose() |
Layer | GravesBidirectionalLSTM.transpose() |
Layer | GravesLSTM.transpose() |
Modifier and Type | Method and Description |
---|---|
FwdPassReturn | LSTMHelper.activate(Layer layer, NeuralNetConfiguration conf, org.nd4j.linalg.activations.IActivation gateActivationFn, org.nd4j.linalg.api.ndarray.INDArray input, org.nd4j.linalg.api.ndarray.INDArray recurrentWeights, org.nd4j.linalg.api.ndarray.INDArray inputWeights, org.nd4j.linalg.api.ndarray.INDArray biases, boolean training, org.nd4j.linalg.api.ndarray.INDArray prevOutputActivations, org.nd4j.linalg.api.ndarray.INDArray prevMemCellState, boolean forBackprop, boolean forwards, String inputWeightKey, org.nd4j.linalg.api.ndarray.INDArray maskArray, boolean hasPeepholeConnections) |
Modifier and Type | Class and Description |
---|---|
class | CenterLossOutputLayer - Center loss is similar to triplet loss, except that it enforces intra-class consistency and doesn't require a feed-forward of multiple examples |
Modifier and Type | Class and Description |
---|---|
class | VariationalAutoencoder - Variational autoencoder layer |
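A hedged configuration sketch via org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder (the configuration counterpart of the implementation class above); sizes are illustrative:

```java
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;

public class VaeConfig {
    /** Sketch: a VAE layer with a 32-dimensional latent space. */
    public static VariationalAutoencoder build() {
        return new VariationalAutoencoder.Builder()
                .nIn(784)                    // e.g. flattened 28x28 input
                .nOut(32)                    // size of the latent space
                .encoderLayerSizes(256, 128) // encoder hidden layer sizes
                .decoderLayerSizes(128, 256) // decoder hidden layer sizes
                .build();
    }
}
```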
Modifier and Type | Method and Description |
---|---|
Layer | VariationalAutoencoder.clone() |
Layer | VariationalAutoencoder.transpose() |
Modifier and Type | Method and Description |
---|---|
void | VariationalAutoencoder.merge(Layer layer, int batchSize) |
Modifier and Type | Class and Description |
---|---|
class | MultiLayerNetwork - MultiLayerNetwork is a neural network with multiple layers in a stack, and usually an output layer |
Modifier and Type | Field and Description |
---|---|
protected Layer[] | MultiLayerNetwork.layers |
protected LinkedHashMap<String,Layer> | MultiLayerNetwork.layerMap |
Modifier and Type | Method and Description |
---|---|
Layer | MultiLayerNetwork.getLayer(int i) |
Layer | MultiLayerNetwork.getLayer(String name) |
Layer[] | MultiLayerNetwork.getLayers() |
Layer | MultiLayerNetwork.getOutputLayer() - Get the output layer |
Layer | MultiLayerNetwork.transpose() |
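A sketch of the accessors above on an initialized network; the argument is assumed to have been built and initialized elsewhere:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class NetworkLayerLookup {
    /** Sketch: retrieving layers from an initialized MultiLayerNetwork. */
    public static void inspect(MultiLayerNetwork net) {
        Layer first  = net.getLayer(0);      // by position in the stack
        Layer[] all  = net.getLayers();      // the full stack
        Layer output = net.getOutputLayer(); // usually the last layer
    }
}
```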
Modifier and Type | Method and Description |
---|---|
void | MultiLayerNetwork.merge(Layer layer, int batchSize) - Deprecated. Not supported and not used |
void | MultiLayerNetwork.setLayers(Layer[] layers) |
Modifier and Type | Field and Description |
---|---|
protected Map<String,Layer> | BaseMultiLayerUpdater.layersByName |
Modifier and Type | Method and Description |
---|---|
protected abstract Layer[] | BaseMultiLayerUpdater.getOrderedLayers() |
protected Layer[] | MultiLayerUpdater.getOrderedLayers() |
protected Layer[] | LayerUpdater.getOrderedLayers() |
Modifier and Type | Method and Description |
---|---|
static boolean | UpdaterUtils.lrSchedulesEqual(Layer layer1, String param1, Layer layer2, String param2) |
void | UpdaterBlock.postApply(Layer layer, String paramName, org.nd4j.linalg.api.ndarray.INDArray gradientView, org.nd4j.linalg.api.ndarray.INDArray paramsView) - Apply L1 and L2 regularization, if necessary |
void | BaseMultiLayerUpdater.preApply(Layer layer, Gradient gradient, int iteration) - Pre-apply: apply gradient normalization/clipping |
void | BaseMultiLayerUpdater.setStateViewArray(Layer layer, org.nd4j.linalg.api.ndarray.INDArray viewArray, boolean initialize) |
void | BaseMultiLayerUpdater.update(Layer layer, Gradient gradient, int iteration, int batchSize) |
static boolean | UpdaterUtils.updaterConfigurationsEquals(Layer layer1, String param1, Layer layer2, String param2) |
Constructor and Description |
---|
LayerUpdater(Layer layer) |
LayerUpdater(Layer layer, org.nd4j.linalg.api.ndarray.INDArray updaterState) |
Modifier and Type | Field and Description |
---|---|
protected Layer[] | ComputationGraphUpdater.orderedLayers |
Modifier and Type | Method and Description |
---|---|
protected Layer[] | ComputationGraphUpdater.getOrderedLayers() |
Modifier and Type | Method and Description |
---|---|
static org.nd4j.linalg.api.ndarray.INDArray | Dropout.applyDropConnect(Layer layer, String variable) - Apply drop connect to the given variable |
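A hedged sketch of applying DropConnect to a layer's weight matrix, assuming the utility lives at org.deeplearning4j.util.Dropout (as in the 0.9.x source tree) and using the standard weight parameter key "W":

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.params.DefaultParamInitializer;
import org.deeplearning4j.util.Dropout;
import org.nd4j.linalg.api.ndarray.INDArray;

public class DropConnect {
    /** Sketch: apply DropConnect to a layer's weight matrix (parameter key "W"). */
    public static INDArray dropWeights(Layer layer) {
        return Dropout.applyDropConnect(layer, DefaultParamInitializer.WEIGHT_KEY);
    }
}
```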