Modifier and Type | Method and Description |
---|---|
TrainingConfig | Trainable.getConfig() |

Modifier and Type | Class and Description |
---|---|
class | AbstractLSTM: LSTM recurrent net, based on Graves, Supervised Sequence Labelling with Recurrent Neural Networks (http://www.cs.toronto.edu/~graves/phd.pdf). |
class | ActivationLayer |
class | AutoEncoder: Autoencoder. |
class | BaseLayer: A neural network layer. |
class | BaseOutputLayer |
class | BasePretrainNetwork |
class | BaseRecurrentLayer |
class | BaseUpsamplingLayer: Upsampling base layer. |
class | BatchNormalization: Batch normalization configuration. |
class | CenterLossOutputLayer: Center loss is similar to triplet loss, except that it enforces intra-class consistency and does not require feeding forward multiple examples. |
class | CnnLossLayer: Convolutional neural network loss layer; handles calculation of gradients etc. for various objective functions. Note: CnnLossLayer does not have any parameters. |
class | Convolution1D: 1D convolution layer. |
class | Convolution1DLayer: 1D (temporal) convolutional layer. |
class | Convolution2D: 2D convolution layer. |
class | Convolution3D: 3D convolution layer configuration. |
class | ConvolutionLayer |
class | Deconvolution2D: 2D deconvolution layer configuration; deconvolutions are also known as transpose convolutions or fractionally strided convolutions. |
class | DenseLayer: Dense layer, a fully connected feed-forward layer trainable by backprop (a configuration sketch follows this table). |
class | DepthwiseConvolution2D: 2D depthwise convolution layer configuration. |
class | DropoutLayer |
class | EmbeddingLayer: Feed-forward layer that expects a single integer per example as input (a class number, in the range 0 to numClass - 1). |
class | EmbeddingSequenceLayer: Embedding layer for sequences; a feed-forward layer that expects a fixed-length number (inputLength) of integers/indices per example as input, each in the range 0 to numClasses - 1. |
class | FeedForwardLayer: Created by jeffreytang on 7/21/15. |
class | GlobalPoolingLayer: Global pooling layer, used to do pooling over time for RNNs and 2D pooling for CNNs. Supports the following PoolingTypes: SUM, AVG, MAX, PNORM. The global pooling layer can also handle mask arrays when dealing with variable-length inputs. |
class | GravesBidirectionalLSTM: Deprecated; use Bidirectional instead. With the Bidirectional layer wrapper you can make any recurrent layer bidirectional, in particular GravesLSTM. Note that this layer adds the output of both directions, which translates into "ADD" mode in Bidirectional. Usage: .layer(new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build())) (see the migration sketch after this table). |
class | GravesLSTM: Deprecated; will eventually be removed. Use LSTM instead, which has similar prediction accuracy but supports CuDNN for faster network training on CUDA (NVIDIA) GPUs. |
class | Layer: A neural network layer. |
class | LocallyConnected1D |
class | LocallyConnected2D |
class | LocalResponseNormalization: Created by nyghtowl on 10/29/15. |
class | LossLayer: A flexible output "layer" that performs a loss function on an input without MLP logic. |
class | LSTM: LSTM recurrent net without peephole connections. |
class | NoParamLayer |
class | OutputLayer: Output layer with different objective co-occurrences for different objectives. |
class | Pooling1D: 1D pooling layer. |
class | Pooling2D: 2D pooling layer. |
class | PReLULayer: Parametrized Rectified Linear Unit (PReLU): f(x) = alpha * x for x < 0, f(x) = x for x >= 0, where alpha has the same shape as x and is a learned parameter. |
class | RnnLossLayer: Recurrent neural network loss layer; handles calculation of gradients etc. for various objective functions. Note: unlike RnnOutputLayer, RnnLossLayer does not have any parameters, i.e., there is no time-distributed dense component. |
class | RnnOutputLayer |
class | SeparableConvolution2D: 2D separable convolution layer configuration. |
class | SpaceToBatchLayer: Space-to-batch utility layer configuration for convolutional input types. |
class | SpaceToDepthLayer: Space-to-channels utility layer configuration for convolutional input types. |
class | Subsampling1DLayer: 1D (temporal) subsampling layer. |
class | Subsampling3DLayer: 3D subsampling/pooling layer for convolutional neural networks. |
class | SubsamplingLayer: Subsampling layer, also referred to as pooling in convolutional neural nets. Supports the following pooling types: MAX, AVG, SUM, PNORM, NONE. |
class | Upsampling1D: Upsampling 1D layer. |
class | Upsampling2D: Upsampling 2D layer. |
class | Upsampling3D: Upsampling 3D layer. |
class | ZeroPadding1DLayer: Zero padding 1D layer for convolutional neural networks. |
class | ZeroPadding3DLayer: Zero padding 3D layer for convolutional neural networks. |
class | ZeroPaddingLayer: Zero padding layer for convolutional neural networks. |

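As a minimal sketch of the builder pattern these layer configurations share, here is a DenseLayer feeding an OutputLayer inside a MultiLayerConfiguration; the sizes, seed, activation, and loss function are illustrative assumptions, not library defaults:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class DenseNetSketch {
    public static void main(String[] args) {
        // Fully connected hidden layer into a softmax output layer;
        // the 784 -> 128 -> 10 sizes are placeholders.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(784).nOut(128)
                        .activation(Activation.RELU)
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(128).nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
    }
}
```
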
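The deprecated GravesBidirectionalLSTM entry gives its replacement usage directly; below is a minimal migration sketch wrapping the non-deprecated LSTM instead (the helper method and layer sizes are illustrative assumptions):

```java
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;

public class BidirectionalSketch {
    // ADD mode sums the forward and backward activations, matching the
    // behaviour of the deprecated GravesBidirectionalLSTM described above.
    static Bidirectional bidirectionalLstm(int nIn, int nOut) {
        return new Bidirectional(
                Bidirectional.Mode.ADD,
                new LSTM.Builder().nIn(nIn).nOut(nOut).build());
    }
}
```
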
Modifier and Type | Class and Description |
---|---|
class | Cropping1D: Cropping layer for convolutional (1d) neural networks. |
class | Cropping2D: Cropping layer for convolutional (2d) neural networks. |
class | Cropping3D: Cropping layer for convolutional (3d) neural networks. |

Modifier and Type | Class and Description |
---|---|
class | ElementWiseMultiplicationLayer: Elementwise multiplication layer with weights. Implements out = activationFn(input .* w + b), where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. Note that the input and output sizes of this layer are the same. |
class | FrozenLayer: Created by Alex on 10/07/2017. |
class | FrozenLayerWithBackprop: Freezes the parameters of the layer it wraps, but allows backpropagation to continue through it (see the sketch after this table). |
class | RepeatVector: RepeatVector layer configuration. |

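A minimal sketch of FrozenLayerWithBackprop, assuming its constructor takes the wrapped layer configuration; the DenseLayer, its sizes, and the helper method are illustrative:

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.misc.FrozenLayerWithBackprop;

public class FrozenSketch {
    // The wrapped layer's parameters stay fixed during training, but
    // gradients still flow through it to trainable layers below it.
    static FrozenLayerWithBackprop frozenDense(int nIn, int nOut) {
        return new FrozenLayerWithBackprop(
                new DenseLayer.Builder().nIn(nIn).nOut(nOut).build());
    }
}
```
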
Modifier and Type | Class and Description |
---|---|
class | Yolo2OutputLayer: Output (loss) layer for the YOLOv2 object detection model, based on the papers YOLO9000: Better, Faster, Stronger (Redmon & Farhadi, 2016, https://arxiv.org/abs/1612.08242) and You Only Look Once: Unified, Real-Time Object Detection (Redmon et al.). A configuration sketch follows this table. |

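A minimal configuration sketch for Yolo2OutputLayer, assuming the Builder's boundingBoxPriors setter; the prior values and helper method are placeholders, not tuned anchors:

```java
import org.deeplearning4j.nn.conf.layers.objdetect.Yolo2OutputLayer;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class YoloSketch {
    // Each row of the priors array is one anchor box as (width, height)
    // in grid-cell units; these two boxes are placeholders only.
    static Yolo2OutputLayer yoloOutput() {
        INDArray priors = Nd4j.create(new double[][]{{1.5, 1.5}, {3.0, 3.0}});
        return new Yolo2OutputLayer.Builder()
                .boundingBoxPriors(priors)
                .build();
    }
}
```
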
Modifier and Type | Class and Description |
---|---|
class | Bidirectional: A "wrapper" layer that wraps any unidirectional RNN layer to make it bidirectional. Multiple modes are supported; these specify how the activations from the forward and backward RNNs should be combined. |
class | LastTimeStep: A "wrapper" layer that wraps any RNN layer, extracts the last time step during the forward pass, and returns it as a row vector (per example). See the sketch after this table. |
class | SimpleRnn: Simple RNN, aka "vanilla" RNN, the simplest type of recurrent neural network layer. |

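A minimal sketch of the LastTimeStep wrapper; the wrapped LSTM, its sizes, and the helper method are illustrative assumptions:

```java
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.recurrent.LastTimeStep;

public class LastTimeStepSketch {
    // Wraps an LSTM so only the final time step's activations are emitted,
    // turning a sequence output into one row vector per example.
    static LastTimeStep lastStepOf(int nIn, int nOut) {
        return new LastTimeStep(new LSTM.Builder().nIn(nIn).nOut(nOut).build());
    }
}
```
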
Modifier and Type | Class and Description |
---|---|
class | AbstractSameDiffLayer |
class | SameDiffLambdaLayer (see the sketch after this table) |
class | SameDiffLambdaVertex |
class | SameDiffLayer: A base layer used for implementing Deeplearning4j layers using SameDiff. |
class | SameDiffOutputLayer: A base layer used for implementing Deeplearning4j output layers using SameDiff. |
class | SameDiffVertex: A SameDiff-based GraphVertex. |

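A rough sketch of a parameterless custom layer built on SameDiffLambdaLayer. It assumes defineLayer(SameDiff, SDVariable) is the only method that must be overridden (some versions may also require getOutputType), and the doubling op is purely illustrative:

```java
import org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaLayer;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;

// A custom layer with no trainable parameters, defined via SameDiff ops.
public class TimesTwoLayer extends SameDiffLambdaLayer {
    @Override
    public SDVariable defineLayer(SameDiff sd, SDVariable layerInput) {
        // Forward pass: scale every activation by 2; the backward pass is
        // derived automatically by SameDiff.
        return layerInput.mul(2.0);
    }
}
```
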
Modifier and Type | Class and Description |
---|---|
class | MaskLayer: Applies the mask array to the forward-pass activations and backward-pass gradients passing through this layer. |
class | MaskZeroLayer |

Modifier and Type | Class and Description |
---|---|
class | VariationalAutoencoder: Variational autoencoder layer (a configuration sketch follows this table). |

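A minimal configuration sketch for the VariationalAutoencoder layer; the encoder/decoder sizes, latent size, and activation are illustrative assumptions, not recommended values:

```java
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.nd4j.linalg.activations.Activation;

public class VaeSketch {
    static VariationalAutoencoder vae() {
        return new VariationalAutoencoder.Builder()
                .nIn(784)                        // input size (placeholder)
                .nOut(32)                        // latent space size (placeholder)
                .encoderLayerSizes(256, 128)     // encoder hidden layer sizes
                .decoderLayerSizes(128, 256)     // decoder hidden layer sizes
                .activation(Activation.LEAKYRELU)
                .build();
    }
}
```
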
Modifier and Type | Class and Description |
---|---|
class | BaseWrapperLayer: Base wrapper layer; the idea is to pass all methods through to the underlying layer and selectively override them as required. |

Modifier and Type | Class and Description |
---|---|
class | DummyConfig: A 'dummy' training configuration for use in frozen layers. |

Modifier and Type | Class and Description |
---|---|
class | OCNNOutputLayer: An implementation of one-class neural networks from https://arxiv.org/pdf/1802.06360.pdf. The one-class neural network approach extends the standard output layer (a single set of weights, an activation function, and a bias) to two sets of weights: a learnable "r" parameter that is held static, plus one traditional set of weights. |

Modifier and Type | Method and Description |
---|---|
TrainingConfig | BaseWrapperVertex.getConfig() |
TrainingConfig | BaseGraphVertex.getConfig() |
TrainingConfig | FrozenVertex.getConfig() |
TrainingConfig | LayerVertex.getConfig() |
TrainingConfig | FrozenLayer.getConfig() |
TrainingConfig | AbstractLayer.getConfig() |
TrainingConfig | BidirectionalLayer.getConfig() |
TrainingConfig | SameDiffGraphVertex.getConfig() |
TrainingConfig | VariationalAutoencoder.getConfig() |
TrainingConfig | BaseWrapperLayer.getConfig() |
TrainingConfig | MultiLayerNetwork.getConfig() |

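Every getConfig() listed above returns the component's TrainingConfig. A minimal sketch of reading it from an initialized network (the inspect helper is hypothetical, and the package path of TrainingConfig is assumed):

```java
import org.deeplearning4j.nn.api.TrainingConfig;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class ConfigSketch {
    // Both the network as a whole and each individual layer expose their
    // training configuration via Trainable.getConfig(), per the tables above.
    static void inspect(MultiLayerNetwork net) {
        TrainingConfig netConfig = net.getConfig();
        TrainingConfig firstLayerConfig = net.getLayer(0).getConfig();
        System.out.println(netConfig + " / " + firstLayerConfig);
    }
}
```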