Modifier and Type | Class and Description |
---|---|
class | AbstractLSTM: LSTM recurrent net, based on Graves, Supervised Sequence Labelling with Recurrent Neural Networks: http://www.cs.toronto.edu/~graves/phd.pdf |
class | AutoEncoder: Autoencoder layer. |
class | BaseOutputLayer |
class | BasePretrainNetwork |
class | BaseRecurrentLayer |
class | BatchNormalization: Batch normalization layer. See Ioffe and Szegedy, 2015, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift: https://arxiv.org/abs/1502.03167 |
class | CenterLossOutputLayer: Center loss is similar to triplet loss, except that it enforces intra-class consistency and does not require the feed-forward of multiple examples. |
class | Cnn3DLossLayer: 3D convolutional neural network loss layer. Handles calculation of gradients etc. for various loss (objective) functions. NOTE: Cnn3DLossLayer does not have any parameters. |
class | CnnLossLayer: Convolutional neural network loss layer. Handles calculation of gradients etc. for various loss (objective) functions. NOTE: CnnLossLayer does not have any parameters. |
class | Convolution1D: 1D convolution layer. |
class | Convolution1DLayer: 1D (temporal) convolutional layer. |
class | Convolution2D: 2D convolution layer. |
class | Convolution3D: 3D convolution layer configuration. |
class | ConvolutionLayer: 2D convolution layer (for example, spatial convolution over images). |
class | Deconvolution2D: 2D deconvolution layer configuration. Deconvolutions are also known as transpose convolutions or fractionally strided convolutions. |
class | DenseLayer: a standard fully connected feed-forward layer. |
class | DepthwiseConvolution2D: 2D depth-wise convolution layer configuration. |
class | DropoutLayer: Dropout layer. |
class | EmbeddingLayer: a feed-forward layer that expects a single integer per example as input (a class index in the range 0 to numClasses-1). |
class | EmbeddingSequenceLayer: embedding layer for sequences; a feed-forward layer that expects a fixed number (inputLength) of integers/indices per example as input, each in the range 0 to numClasses-1. |
class | FeedForwardLayer: base class for feed-forward layer configurations. |
class | GravesBidirectionalLSTM: Deprecated. Use Bidirectional instead. With the Bidirectional layer wrapper you can make any recurrent layer bidirectional, in particular GravesLSTM. Note that this layer adds the output of both directions, which translates into "ADD" mode in Bidirectional. Usage: `.layer(new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build()))` (a sketch of this usage appears after this table). |
class | GravesLSTM: Deprecated. Will eventually be removed. Use LSTM instead, which has similar prediction accuracy but supports CuDNN for faster network training on CUDA (Nvidia) GPUs. |
class | LossLayer: a flexible output layer that applies a loss function to an input without MLP logic. LossLayer is similar to OutputLayer in that both compute the loss of the network output versus the labels. |
class | LSTM: LSTM recurrent neural network layer without peephole connections. |
class | OutputLayer: output layer used for training via backpropagation, based on labels and a specified loss function. |
class | PReLULayer: Parametrized Rectified Linear Unit (PReLU) layer. |
class | RnnLossLayer: recurrent neural network loss layer. Handles calculation of gradients etc. for various objective (loss) functions. Note: unlike RnnOutputLayer, RnnLossLayer does not have any parameters, i.e., there is no time-distributed dense component. |
class | RnnOutputLayer: a version of OutputLayer for recurrent neural networks. |
class | SeparableConvolution2D: 2D separable convolution layer configuration. |
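For orientation, the following is a minimal sketch of how several of the layer configurations above compose into a network, following the Bidirectional usage shown in the GravesBidirectionalLSTM entry. The vocabulary size, layer sizes, updater settings, and class count are illustrative assumptions, not values taken from this page.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.EmbeddingSequenceLayer;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class SequenceClassifierSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .updater(new Adam(1e-3))                     // assumed optimizer settings
                .list()
                // Maps integer token indices (0..nIn-1) to dense vectors
                .layer(new EmbeddingSequenceLayer.Builder()
                        .nIn(5000)                           // assumed vocabulary size
                        .nOut(128)                           // assumed embedding dimension
                        .build())
                // Bidirectional wrapper replaces the deprecated GravesBidirectionalLSTM;
                // Mode.ADD sums the two directions, so the wrapped layer's nOut (64)
                // is also the wrapper's output size
                .layer(new Bidirectional(Bidirectional.Mode.ADD,
                        new LSTM.Builder().nIn(128).nOut(64)
                                .activation(Activation.TANH).build()))
                // RnnOutputLayer: the OutputLayer variant for sequence output
                .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(64).nOut(3)                     // assumed 3 output classes
                        .activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println("Parameters: " + net.numParams());
    }
}
```
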
Modifier and Type | Method and Description |
---|---|
BaseLayer | BaseLayer.clone() |
Modifier and Type | Class and Description |
---|---|
class | ElementWiseMultiplicationLayer: element-wise multiplication layer with weights; implements `out = activationFn(input .* w + b)`, where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. Note that the input and output sizes of this layer are the same. |
class | RepeatVector: RepeatVector layer configuration. |
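To make the ElementWiseMultiplicationLayer formula concrete, here is a hedged ND4J sketch of the forward pass `out = activationFn(input .* w + b)`; the minibatch size, nOut, and the choice of ReLU as activationFn are illustrative assumptions.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

public class ElementWiseForwardSketch {
    public static void main(String[] args) {
        int miniBatch = 4, nOut = 10;                    // assumed sizes
        INDArray input = Nd4j.rand(miniBatch, nOut);     // input and output sizes match
        INDArray w = Nd4j.rand(1, nOut);                 // learnable weight vector of length nOut
        INDArray b = Nd4j.rand(1, nOut);                 // bias vector

        // out = activationFn(input .* w + b), with ReLU as an example activation
        INDArray out = Transforms.relu(input.mulRowVector(w).addRowVector(b));
        System.out.println(out.shapeInfoToString());
    }
}
```
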
Modifier and Type | Class and Description |
---|---|
class | SimpleRnn: Simple RNN, a.k.a. "vanilla" RNN, the simplest type of recurrent neural network layer. |
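As a contrast to the gated LSTM layers listed earlier, a SimpleRnn is configured the same way as any other recurrent layer; this is a minimal sketch, with nIn/nOut as assumed sizes.

```java
import org.deeplearning4j.nn.conf.layers.recurrent.SimpleRnn;
import org.nd4j.linalg.activations.Activation;

public class SimpleRnnSketch {
    public static void main(String[] args) {
        // "Vanilla" RNN layer: single input and recurrent weight matrices, no gating.
        // nIn/nOut are illustrative assumptions.
        SimpleRnn rnn = new SimpleRnn.Builder()
                .nIn(32)
                .nOut(64)
                .activation(Activation.TANH)
                .build();
        System.out.println(rnn);
    }
}
```
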
Modifier and Type | Class and Description |
---|---|
class | VariationalAutoencoder: Variational autoencoder layer. |
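A hedged sketch of one way this layer might be configured: the input/latent sizes, encoder/decoder sizes, activation, and the choice of a Bernoulli reconstruction distribution (suited to binary or binarized input data) are assumptions for illustration, not prescribed by this page.

```java
import org.deeplearning4j.nn.conf.layers.variational.BernoulliReconstructionDistribution;
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.nd4j.linalg.activations.Activation;

public class VaeLayerSketch {
    public static void main(String[] args) {
        VariationalAutoencoder vae = new VariationalAutoencoder.Builder()
                .nIn(784)                        // assumed input size (e.g. 28x28 pixels)
                .nOut(32)                        // assumed latent space size
                .encoderLayerSizes(256, 256)     // assumed encoder hidden layer sizes
                .decoderLayerSizes(256, 256)     // assumed decoder hidden layer sizes
                .activation(Activation.LEAKYRELU)
                // Bernoulli reconstruction models each output as a probability in [0,1]
                .reconstructionDistribution(new BernoulliReconstructionDistribution(
                        Activation.SIGMOID.getActivationFunction()))
                .build();
        System.out.println(vae);
    }
}
```
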
Modifier and Type | Class and Description |
---|---|
class | OCNNOutputLayer: An implementation of one-class neural networks from https://arxiv.org/pdf/1802.06360.pdf. The one-class approach extends the standard output layer (a single set of weights, an activation function, and a bias) to two sets of weights (one traditional set plus an additional weight matrix), together with a learnable "r" parameter that is held static. |
Modifier and Type | Method and Description |
---|---|
protected void | BaseNetConfigDeserializer.handleL1L2BackwardCompatibility(BaseLayer baseLayer, org.nd4j.shade.jackson.databind.node.ObjectNode on) |
protected void | BaseNetConfigDeserializer.handleUpdaterBackwardCompatibility(BaseLayer layer, org.nd4j.shade.jackson.databind.node.ObjectNode on) |
protected void | BaseNetConfigDeserializer.handleWeightInitBackwardCompatibility(BaseLayer baseLayer, org.nd4j.shade.jackson.databind.node.ObjectNode on) |
Modifier and Type | Class and Description |
---|---|
class | BaseLayer<LayerConfT extends BaseLayer>: A layer with parameters. |
Modifier and Type | Class and Description |
---|---|
class | BaseRecurrentLayer<LayerConfT extends BaseLayer> |