Modifier and Type | Interface and Description |
---|---|
interface | Layer: Interface for a layer of a neural network. |
Modifier and Type | Method and Description |
---|---|
void | Updater.setStateViewArray(Trainable layer, INDArray viewArray, boolean initialize): Set the internal (historical) state view array for this updater. |
void | Updater.update(Trainable layer, Gradient gradient, int iteration, int epoch, int miniBatchSize, LayerWorkspaceMgr workspaceMgr): Updates the model (see the sketch after this table). |
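For intuition, the sketch below restates what an updater ultimately does: apply a (possibly clipped and regularized) gradient to a flattened parameter view, in place. This is plain ND4J arithmetic for vanilla SGD, not the actual DL4J Updater implementation; the learning rate and array shapes are hypothetical.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SgdUpdateSketch {
    public static void main(String[] args) {
        double lr = 0.01;                        // hypothetical learning rate
        INDArray paramsView = Nd4j.rand(1, 4);   // flattened parameter view
        INDArray gradientView = Nd4j.rand(1, 4); // matching gradient view

        // The core of an update step: params <- params - lr * gradient,
        // applied in place on the parameter view.
        paramsView.subi(gradientView.mul(lr));

        System.out.println(paramsView);
    }
}
```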
Modifier and Type | Interface and Description |
---|---|
interface | IOutputLayer: Interface for output layers (those that calculate gradients with respect to a labels array). |
interface | RecurrentLayer: Interface for recurrent neural network layers. |
Modifier and Type | Interface and Description |
---|---|
interface | GraphVertex: A GraphVertex is a vertex in the computation graph. |
Modifier and Type | Class and Description |
---|---|
class | BaseGraphVertex: Defines a set of common functionality for GraphVertex instances. |
class | BaseWrapperVertex: A base class for wrapper vertices, i.e., those vertices that have another vertex inside. |
Modifier and Type | Class and Description |
---|---|
class | ElementWiseVertex: Combines the activations of two or more layers in an element-wise manner, for example by addition, subtraction, multiplication, or by selecting the maximum (see the sketch after this table). |
class | FrozenVertex: Used for transfer learning; a frozen vertex wraps another DL4J GraphVertex within it. |
class | InputVertex: Simply defines the location (and connection structure) of inputs to the ComputationGraph. |
class | L2NormalizeVertex: Performs L2 normalization on a single input. |
class | L2Vertex: Calculates the L2 least squares error of two inputs. |
class | LayerVertex: A GraphVertex with a neural network Layer (and, optionally, an InputPreProcessor) in it. |
class | MergeVertex: Combines the activations of two or more layers/GraphVertex instances by means of concatenation. Exactly how this is done depends on the input type. 2d (feed-forward) inputs: [numExamples, layerSize1] + [numExamples, layerSize2] -> [numExamples, layerSize1 + layerSize2]. 3d (time series) inputs: [numExamples, layerSize1, timeSeriesLength] + [numExamples, layerSize2, timeSeriesLength] -> [numExamples, layerSize1 + layerSize2, timeSeriesLength]. 4d (convolutional) inputs: [numExamples, depth1, width, height] + [numExamples, depth2, width, height] -> [numExamples, depth1 + depth2, width, height]. |
class | PoolHelperVertex: A custom layer for removing the first column and row from an input. |
class | PreprocessorVertex: A simple adaptor class that allows an InputPreProcessor to be used in a ComputationGraph GraphVertex, without it being associated with a layer. |
class | ReshapeVertex: Adds the ability to reshape and flatten a tensor in the computation graph. |
class | ScaleVertex: Used to scale the activations of a single layer; for example, ResNet activations can be scaled in repeating blocks to keep variance under control. |
class | ShiftVertex: Used to shift the activations of a single layer; one could use it to add a bias or as part of some other calculation. |
class | StackVertex: Allows stacking of inputs so that they may be forwarded through a network. |
class | SubsetVertex: Selects a subset of the activations of another GraphVertex, for example a subset of the activations of a layer. The subset is specified by an interval of the original activations. |
class | UnstackVertex: Allows unstacking of inputs so that they may be forwarded through a network. |
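A minimal sketch of wiring ElementWiseVertex and MergeVertex into a graph, assuming the standard DL4J graph-builder configuration API (the corresponding configuration classes live under org.deeplearning4j.nn.conf.graph). All layer names and sizes are hypothetical:

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.graph.ElementWiseVertex;
import org.deeplearning4j.nn.conf.graph.MergeVertex;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class VertexSketch {
    public static void main(String[] args) {
        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("in1", "in2")
                .addLayer("d1", new DenseLayer.Builder().nIn(10).nOut(8).build(), "in1")
                .addLayer("d2", new DenseLayer.Builder().nIn(10).nOut(8).build(), "in2")
                // Element-wise addition: inputs must have identical shapes.
                .addVertex("sum", new ElementWiseVertex(ElementWiseVertex.Op.Add), "d1", "d2")
                // Concatenation: [batch,8] + [batch,8] -> [batch,16].
                .addVertex("merged", new MergeVertex(), "sum", "d2")
                .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .nIn(16).nOut(3).activation(Activation.IDENTITY).build(), "merged")
                .setOutputs("out")
                .build();

        ComputationGraph net = new ComputationGraph(conf);
        net.init();
    }
}
```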
Modifier and Type | Class and Description |
---|---|
class | DuplicateToTimeSeriesVertex: A vertex that goes from 2d activations to 3d time series activations by means of duplication. |
class | LastTimeStepVertex: Used in the context of recurrent neural network activations to go from 3d (time series) activations to 2d activations, by extracting the last time step of activations for each example. This can be used, for example, in sequence-to-sequence architectures, and potentially for sequence classification (see the sketch after this table). |
class | ReverseTimeSeriesVertex: Used in recurrent neural networks to reverse the order of a time series. |
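A minimal sketch of LastTimeStepVertex in a sequence-classification graph, again assuming the standard graph-builder API; the configuration counterpart lives under org.deeplearning4j.nn.conf.graph.rnn, and its constructor names the input whose mask determines the last step per example. Sizes and names are hypothetical:

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.graph.rnn.LastTimeStepVertex;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class LastStepSketch {
    public static void main(String[] args) {
        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("in") // 3d input: [miniBatchSize, nIn, timeSeriesLength]
                .addLayer("lstm", new LSTM.Builder().nIn(5).nOut(8)
                        .activation(Activation.TANH).build(), "in")
                // Extract the last time step: 3d -> 2d [miniBatchSize, 8].
                // The constructor argument names the input whose mask
                // determines the last step for each example.
                .addVertex("last", new LastTimeStepVertex("in"), "lstm")
                .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(8).nOut(3).activation(Activation.SOFTMAX).build(), "last")
                .setOutputs("out")
                .build();

        ComputationGraph net = new ComputationGraph(conf);
        net.init();
    }
}
```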
Modifier and Type | Class and Description |
---|---|
class | AbstractLayer<LayerConfT extends Layer>: A layer with input and output, but no parameters or gradients. |
class | ActivationLayer: Used to apply an activation function to the input, and the corresponding derivative to epsilon (see the sketch after this table). |
class | BaseLayer<LayerConfT extends BaseLayer>: A layer with parameters. |
class | BaseOutputLayer<LayerConfT extends BaseOutputLayer>: Base class for output layers with configurable objective (loss) functions. |
class | BasePretrainNetwork<LayerConfT extends BasePretrainNetwork>: Baseline class for any neural network used as a layer in a deep network. |
class | DropoutLayer: A layer that applies dropout to its input activations. |
class | FrozenLayer: Used for transfer learning; a frozen layer wraps another DL4J layer within it. |
class | FrozenLayerWithBackprop: Freezes the parameters of the layer it wraps, but allows backpropagation to continue. |
class | LossLayer: A flexible output "layer" that performs a loss function on an input without MLP logic. |
class | OutputLayer: Output layer with configurable objective (loss) functions. |
class | RepeatVector: RepeatVector layer. |
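A few hedged construction examples for layers from this table, assuming the usual builder-based configuration classes under org.deeplearning4j.nn.conf.layers and org.deeplearning4j.nn.conf.layers.misc; the dropout probability and layer sizes are hypothetical:

```java
import org.deeplearning4j.nn.conf.layers.ActivationLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.DropoutLayer;
import org.deeplearning4j.nn.conf.layers.misc.FrozenLayer;
import org.nd4j.linalg.activations.Activation;

public class FeedForwardLayerSketch {
    public static void main(String[] args) {
        // Standalone activation applied to the previous layer's output.
        ActivationLayer relu = new ActivationLayer.Builder()
                .activation(Activation.RELU).build();

        // Dropout as its own layer, rather than as a layer property.
        DropoutLayer drop = new DropoutLayer.Builder(0.5).build();

        // Transfer learning: the wrapped layer's parameters stay fixed.
        FrozenLayer frozen = new FrozenLayer(
                new DenseLayer.Builder().nIn(10).nOut(10).build());
    }
}
```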
Modifier and Type | Class and Description |
---|---|
class | CnnLossLayer: Convolutional neural network loss layer. Handles calculation of gradients etc. for various objective functions. Note: CnnLossLayer does not have any parameters. |
class | Convolution1DLayer: 1D (temporal) convolutional layer. |
class | Convolution3DLayer: 3D convolution layer implementation. |
class | ConvolutionLayer: Convolution layer (see the sketch after the subsampling table below). |
class | Cropping1DLayer: Zero cropping layer for 1D convolutional neural networks. |
class | Cropping2DLayer: Zero cropping layer for convolutional neural networks. |
class | Cropping3DLayer: Cropping layer for 3D convolutional neural networks. |
class | Deconvolution2DLayer: 2D deconvolution layer implementation. |
class | DepthwiseConvolution2DLayer: 2D depth-wise convolution layer implementation. |
class | SeparableConvolution2DLayer: 2D separable convolution layer implementation. Separable convolutions split a regular convolution operation into two simpler operations, which are usually computationally more efficient. |
class | SpaceToBatch: Space-to-batch utility layer for convolutional input types. |
class | SpaceToDepth: Space-to-channels utility layer for convolutional input types. |
class | ZeroPadding1DLayer: Zero padding 1D layer for convolutional neural networks. |
class | ZeroPadding3DLayer: Zero padding 3D layer for convolutional neural networks. |
class | ZeroPaddingLayer: Zero padding layer for convolutional neural networks. |
Modifier and Type | Class and Description |
---|---|
class | Subsampling1DLayer: 1D (temporal) subsampling layer. |
class | Subsampling3DLayer: 3D subsampling layer, used for downsampling the output of a 3D convolution. |
class | SubsamplingLayer: Subsampling (pooling) layer (see the sketch after this table). |
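A minimal sketch pairing ConvolutionLayer with SubsamplingLayer, assuming the standard builder API; kernel sizes, strides, and channel counts are hypothetical:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.nd4j.linalg.activations.Activation;

public class ConvSketch {
    public static void main(String[] args) {
        // 3x3 convolution: 1 input channel -> 16 feature maps.
        ConvolutionLayer conv = new ConvolutionLayer.Builder(3, 3)
                .nIn(1).nOut(16).stride(1, 1)
                .activation(Activation.RELU).build();

        // 2x2 max pooling with stride 2 halves width and height.
        SubsamplingLayer pool = new SubsamplingLayer.Builder(
                SubsamplingLayer.PoolingType.MAX)
                .kernelSize(2, 2).stride(2, 2).build();
    }
}
```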
Modifier and Type | Class and Description |
---|---|
class | Upsampling1D: 1D upsampling layer. |
class | Upsampling2D: 2D upsampling layer. |
class | Upsampling3D: 3D upsampling layer. |
Modifier and Type | Class and Description |
---|---|
class | PReLU: Parametrized Rectified Linear Unit (PReLU): f(x) = alpha * x for x < 0; f(x) = x for x >= 0. Here alpha has the same shape as x and is a learned parameter (see the sketch after this table). |
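The PReLU definition above is easy to restate directly; the sketch below is plain Java, not the DL4J layer itself (which learns alpha per element):

```java
public class PReluSketch {
    // f(x) = alpha * x for x < 0; f(x) = x for x >= 0
    static double prelu(double alpha, double x) {
        return x >= 0 ? x : alpha * x;
    }

    public static void main(String[] args) {
        System.out.println(prelu(0.1, -2.0)); // -0.2
        System.out.println(prelu(0.1, 3.0));  // 3.0
    }
}
```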
Modifier and Type | Class and Description |
---|---|
class | AutoEncoder: Autoencoder. |
Modifier and Type | Class and Description |
---|---|
class | DenseLayer |
Modifier and Type | Class and Description |
---|---|
class | ElementWiseMultiplicationLayer: Elementwise multiplication layer with weights. Implements out = activationFn(input .* w + b), where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. Note that the input and output sizes of this layer are the same (see the sketch after this table). |
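The formula for ElementWiseMultiplicationLayer can be restated in ND4J directly; the sketch below assumes tanh as the activation function, with random stand-ins for the learned w and b:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

public class ElementWiseMulSketch {
    public static void main(String[] args) {
        INDArray input = Nd4j.rand(4, 3); // [miniBatch, nOut]
        INDArray w = Nd4j.rand(1, 3);     // learnable weight vector, length nOut
        INDArray b = Nd4j.rand(1, 3);     // bias vector, length nOut

        // out = activationFn(input .* w + b), with tanh as the activation.
        INDArray out = Transforms.tanh(input.mulRowVector(w).addRowVector(b));
        System.out.println(out.shapeInfoToString());
    }
}
```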
Modifier and Type | Class and Description |
---|---|
class | EmbeddingLayer: Feed-forward layer that expects a single integer per example as input (a class number, in the range 0 to numClasses-1); see the sketch after this table. |
class | EmbeddingSequenceLayer: Embedding layer for sequences; a feed-forward layer that expects a fixed-length number (inputLength) of integers/indices per example as input, in the range 0 to numClasses-1. |
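A minimal EmbeddingLayer construction sketch, assuming the standard builder API; the vocabulary size and embedding dimension are hypothetical:

```java
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;

public class EmbeddingSketch {
    public static void main(String[] args) {
        int numClasses = 1000;  // hypothetical number of distinct input indices
        int embeddingDim = 64;  // hypothetical embedding vector length

        // Maps each integer index in [0, numClasses-1] to a dense vector.
        EmbeddingLayer embedding = new EmbeddingLayer.Builder()
                .nIn(numClasses)
                .nOut(embeddingDim)
                .build();
    }
}
```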
Modifier and Type | Class and Description |
---|---|
class | BatchNormalization: Batch normalization layer. |
class | LocalResponseNormalization: Local response normalization, a deep neural network approach that normalizes activations between layers ("brightness normalization"), used in networks such as AlexNet. |
Modifier and Type | Class and Description |
---|---|
class | Yolo2OutputLayer: Output (loss) layer for the YOLOv2 object detection model, based on the papers "YOLO9000: Better, Faster, Stronger" (Redmon & Farhadi, 2016, https://arxiv.org/abs/1612.08242) and "You Only Look Once: Unified, Real-Time Object Detection" (Redmon et al.). |
Modifier and Type | Class and Description |
---|---|
class | OCNNOutputLayer: Layer implementation for OCNNOutputLayer; see the corresponding configuration class for details. |
Modifier and Type | Class and Description |
---|---|
class | GlobalPoolingLayer: Global pooling layer, used for pooling over time for RNNs and 2d pooling for CNNs. Supports the following PoolingTypes: SUM, AVG, MAX, PNORM. The global pooling layer can also handle mask arrays when dealing with variable-length inputs. Mask arrays are assumed to be 2d and are fed forward through the network during training or post-training forward passes. For time series (RNNs, 1d CNNs), mask arrays have shape [miniBatchSize, maxTimeSeriesLength] and contain values 0 or 1 only; for 2d CNNs, masks have shape [miniBatchSize, 1, height, 1], [miniBatchSize, 1, 1, width], or [miniBatchSize, 1, height, width]. See the sketch after this table. |
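A minimal GlobalPoolingLayer sketch, assuming the standard builder API; with RNN-style 3d input [miniBatchSize, size, timeSeriesLength], the output collapses the time dimension to give [miniBatchSize, size]:

```java
import org.deeplearning4j.nn.conf.layers.GlobalPoolingLayer;
import org.deeplearning4j.nn.conf.layers.PoolingType;

public class GlobalPoolingSketch {
    public static void main(String[] args) {
        // Max pooling over the time (or spatial) dimension(s).
        GlobalPoolingLayer pool = new GlobalPoolingLayer.Builder(PoolingType.MAX)
                .build();
    }
}
```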
Modifier and Type | Class and Description |
---|---|
class | BaseRecurrentLayer<LayerConfT extends BaseLayer> |
class | BidirectionalLayer: Bidirectional is a "wrapper" layer: it wraps any uni-directional RNN layer to make it bidirectional. Note that multiple modes are supported; these specify how the activations from the forward and backward RNNs should be combined. |
class | GravesBidirectionalLSTM: Bidirectional LSTM layer implementation. Read the RNN tutorial first: http://deeplearning4j.org/usingrnns.html |
class | GravesLSTM: Deprecated. Will eventually be removed; use LSTM instead, which has similar prediction accuracy but supports CuDNN for faster network training on CUDA (Nvidia) GPUs. |
class | LastTimeStepLayer: LastTimeStep is a "wrapper" layer: it wraps any RNN layer, extracts the last time step during the forward pass, and returns it as a row vector (per example). |
class | LSTM: LSTM layer implementation (see the sketch after this table). |
class | MaskZeroLayer: Masks timesteps with 0 activation. |
class | RnnLossLayer: Recurrent neural network loss layer. Handles calculation of gradients etc. for various objective functions. Note: unlike RnnOutputLayer, RnnLossLayer does not have any parameters, i.e., there is no time-distributed dense component here. |
class | RnnOutputLayer: Recurrent neural network output layer. Handles calculation of gradients etc. for various objective functions. Functionally the same as OutputLayer, but handles output and label reshaping automatically. Input and output activations are the same as for other RNN layers: 3 dimensions, with shapes [miniBatchSize, nIn, timeSeriesLength] and [miniBatchSize, nOut, timeSeriesLength] respectively. |
class | SimpleRnn: Simple RNN, a.k.a. "vanilla" RNN: the simplest type of recurrent neural network layer. |
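A minimal LSTM-plus-RnnOutputLayer stack, assuming the standard list-builder API; sizes are hypothetical. Input and output are 3d, shaped [miniBatchSize, nIn, timeSeriesLength] and [miniBatchSize, nOut, timeSeriesLength] as noted in the table:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class RnnSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new LSTM.Builder().nIn(10).nOut(32)
                        .activation(Activation.TANH).build())
                // Handles output and label reshaping for the loss automatically.
                .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(32).nOut(4).activation(Activation.SOFTMAX).build())
                .build();
    }
}
```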
Modifier and Type | Class and Description |
---|---|
class | SameDiffGraphVertex: Implementation of a SameDiff graph vertex. |
class | SameDiffLayer |
class | SameDiffOutputLayer |
Modifier and Type | Class and Description |
---|---|
class | CenterLossOutputLayer: Center loss is similar to triplet loss, except that it enforces intra-class consistency and doesn't require the feed-forward of multiple examples. |
Modifier and Type | Class and Description |
---|---|
class | MaskLayer: Applies the mask array to the forward-pass activations, and backward-pass gradients, passing through this layer. |
Modifier and Type | Class and Description |
---|---|
class | VariationalAutoencoder: Variational autoencoder layer. |
Modifier and Type | Class and Description |
---|---|
class | BaseWrapperLayer: Abstract wrapper layer. |
Modifier and Type | Class and Description |
---|---|
class | MultiLayerNetwork: A neural network with multiple layers in a stack, and usually an output layer (see the sketch after this table). |
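A minimal MultiLayerNetwork sketch, assuming the standard configuration API; layer sizes are hypothetical:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class StackedNetSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new DenseLayer.Builder().nIn(784).nOut(100)
                        .activation(Activation.RELU).build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(100).nOut(10).activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        // net.fit(trainIter); // train with a DataSetIterator (not shown)
    }
}
```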
Modifier and Type | Field and Description |
---|---|
protected Map<String,Trainable> | BaseMultiLayerUpdater.layersByName |
Modifier and Type | Method and Description |
---|---|
protected Trainable[] | LayerUpdater.getOrderedLayers() |
protected abstract Trainable[] | BaseMultiLayerUpdater.getOrderedLayers() |
protected Trainable[] | MultiLayerUpdater.getOrderedLayers() |
Modifier and Type | Method and Description |
---|---|
void | UpdaterBlock.postApply(Trainable layer, String paramName, INDArray gradientView, INDArray paramsView): Apply L1 and L2 regularization, if necessary. |
void | BaseMultiLayerUpdater.preApply(Trainable layer, Gradient gradient, int iteration): Pre-apply: apply gradient normalization/clipping. |
void | BaseMultiLayerUpdater.setStateViewArray(Trainable layer, INDArray viewArray, boolean initialize) |
void | BaseMultiLayerUpdater.update(Trainable layer, Gradient gradient, int iteration, int epoch, int batchSize, LayerWorkspaceMgr workspaceMgr) |
static boolean | UpdaterUtils.updaterConfigurationsEquals(Trainable layer1, String param1, Trainable layer2, String param2) |
Modifier and Type | Field and Description |
---|---|
protected Trainable[] | ComputationGraphUpdater.orderedLayers |
Modifier and Type | Method and Description |
---|---|
protected Trainable[] | ComputationGraphUpdater.getOrderedLayers() |