Class | Description
---|---
Layer | A neural network layer.
Class | Description
---|---
ConvolutionLayer.AlgoMode | The "PREFER_FASTEST" mode will pick the fastest algorithm for the specified parameters from the ConvolutionLayer.FwdAlgo, ConvolutionLayer.BwdFilterAlgo, and ConvolutionLayer.BwdDataAlgo lists, but these algorithms may be very memory intensive; if unexpected errors occur when using cuDNN, try the "NO_WORKSPACE" mode (see the sketch after this table).
Layer | A neural network layer.
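The cuDNN algorithm mode described above is set through the convolution layer's builder. A minimal sketch in Java; the kernel size and nIn/nOut values are illustrative assumptions, not taken from this page:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

// Configure a convolution layer that avoids large cuDNN workspaces.
// If PREFER_FASTEST leads to memory-related cuDNN errors, NO_WORKSPACE
// trades some speed for a much smaller memory footprint.
ConvolutionLayer conv = new ConvolutionLayer.Builder(5, 5)   // 5x5 kernel (illustrative)
        .nIn(1)                                              // input depth (illustrative)
        .nOut(20)                                            // number of filters (illustrative)
        .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE)
        .build();
```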
Class | Description
---|---
AbstractLSTM | LSTM recurrent net, based on Graves: Supervised Sequence Labelling with Recurrent Neural Networks, http://www.cs.toronto.edu/~graves/phd.pdf
AbstractLSTM.Builder | 
ActivationLayer | 
ActivationLayer.Builder | 
AutoEncoder | Autoencoder.
AutoEncoder.Builder | 
BaseLayer | A neural network layer.
BaseLayer.Builder | 
BaseOutputLayer | 
BaseOutputLayer.Builder | 
BasePretrainNetwork | 
BasePretrainNetwork.Builder | 
BaseRecurrentLayer | 
BaseRecurrentLayer.Builder | 
BaseUpsamplingLayer | Upsampling base layer.
BaseUpsamplingLayer.UpsamplingBuilder | 
BatchNormalization | Batch normalization configuration.
BatchNormalization.Builder | 
CenterLossOutputLayer | Center loss is similar to triplet loss, except that it enforces intraclass consistency and doesn't require feed-forward of multiple examples.
CenterLossOutputLayer.Builder | 
CnnLossLayer | Convolutional Neural Network Loss Layer. Handles calculation of gradients etc. for various objective functions. NOTE: CnnLossLayer does not have any parameters.
CnnLossLayer.Builder | 
Convolution1DLayer | 1D (temporal) convolutional layer.
Convolution1DLayer.Builder | 
Convolution3D | 3D convolution layer configuration.
Convolution3D.Builder | 
Convolution3D.DataFormat | 
ConvolutionLayer | 
ConvolutionLayer.AlgoMode | The "PREFER_FASTEST" mode will pick the fastest algorithm for the specified parameters from the ConvolutionLayer.FwdAlgo, ConvolutionLayer.BwdFilterAlgo, and ConvolutionLayer.BwdDataAlgo lists, but these algorithms may be very memory intensive; if unexpected errors occur when using cuDNN, try the "NO_WORKSPACE" mode.
ConvolutionLayer.BaseConvBuilder | 
ConvolutionLayer.Builder | 
ConvolutionLayer.BwdDataAlgo | The backward data algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
ConvolutionLayer.BwdFilterAlgo | The backward filter algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
ConvolutionLayer.FwdAlgo | The forward algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
Deconvolution2D | 2D deconvolution layer configuration. Deconvolutions are also known as transpose convolutions or fractionally strided convolutions.
Deconvolution2D.Builder | 
DenseLayer | Dense layer: fully connected feed-forward layer trainable by backprop (see the configuration sketch after this table).
DenseLayer.Builder | 
DepthwiseConvolution2D | 2D depth-wise convolution layer configuration.
DepthwiseConvolution2D.Builder | 
DropoutLayer | 
EmbeddingLayer | Embedding layer: feed-forward layer that expects single integers per example (class numbers, in range 0 to numClass-1) as input.
EmbeddingLayer.Builder | 
EmbeddingSequenceLayer | Embedding layer for sequences: feed-forward layer that expects a fixed-length number (inputLength) of integers/indices per example as input, in the range 0 to numClasses - 1.
EmbeddingSequenceLayer.Builder | 
FeedForwardLayer | Base class for feed-forward layer configurations.
FeedForwardLayer.Builder | 
GlobalPoolingLayer | Global pooling layer, used for pooling over time for RNNs and 2D pooling for CNNs. Supports the following PoolingTypes: SUM, AVG, MAX, PNORM. Can also handle mask arrays when dealing with variable-length inputs.
GlobalPoolingLayer.Builder | 
GravesBidirectionalLSTM | Deprecated. Use Bidirectional instead. With the Bidirectional layer wrapper you can make any recurrent layer bidirectional, in particular GravesLSTM. Note that this layer adds the output of both directions, which translates into "ADD" mode in Bidirectional. Usage: .layer(new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build()))
GravesBidirectionalLSTM.Builder | Deprecated.
GravesLSTM | LSTM recurrent net, based on Graves: Supervised Sequence Labelling with Recurrent Neural Networks, http://www.cs.toronto.edu/~graves/phd.pdf
Layer | A neural network layer.
Layer.Builder | 
LocalResponseNormalization | Local response normalization layer configuration.
LocalResponseNormalization.Builder | 
LossLayer | LossLayer is a flexible output "layer" that applies a loss function to an input without MLP logic.
LossLayer.Builder | 
LSTM | LSTM recurrent net without peephole connections.
NoParamLayer | 
OutputLayer | Output layer for training against labels; applies a loss function to the network output.
OutputLayer.Builder | 
PoolingType | Pooling type enumeration: MAX, AVG, SUM, PNORM.
RnnLossLayer | Recurrent Neural Network Loss Layer. Handles calculation of gradients etc. for various objective functions. NOTE: Unlike RnnOutputLayer, this RnnLossLayer does not have any parameters, i.e., there is no time-distributed dense component here.
RnnLossLayer.Builder | 
RnnOutputLayer | 
SeparableConvolution2D | 2D separable convolution layer configuration.
SeparableConvolution2D.Builder | 
SpaceToBatchLayer | Space to batch utility layer configuration for convolutional input types.
SpaceToBatchLayer.Builder | 
SpaceToDepthLayer | Space to channels utility layer configuration for convolutional input types.
SpaceToDepthLayer.Builder | 
SpaceToDepthLayer.DataFormat | 
Subsampling1DLayer | 1D (temporal) subsampling layer.
Subsampling1DLayer.Builder | 
Subsampling3DLayer | 3D subsampling / pooling layer for convolutional neural networks.
Subsampling3DLayer.BaseSubsamplingBuilder | 
Subsampling3DLayer.Builder | 
Subsampling3DLayer.PoolingType | 
SubsamplingLayer | Subsampling layer, also referred to as pooling in convolutional neural nets. Supports the following pooling types: MAX, AVG, SUM, PNORM, NONE.
SubsamplingLayer.BaseSubsamplingBuilder | 
SubsamplingLayer.Builder | 
SubsamplingLayer.PoolingType | 
Upsampling1D | Upsampling 1D layer.
Upsampling1D.Builder | 
Upsampling2D | Upsampling 2D layer.
Upsampling2D.Builder | 
Upsampling3D | Upsampling 3D layer.
Upsampling3D.Builder | 
ZeroPadding1DLayer | Zero padding 1D layer for convolutional neural networks.
ZeroPadding3DLayer | Zero padding 3D layer for convolutional neural networks.
ZeroPaddingLayer | Zero padding layer for convolutional neural networks.
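Most of the classes listed above are configured through their nested Builder and assembled into a MultiLayerConfiguration. The following is a minimal sketch only; the input shape (28x28, single channel), layer sizes, and number of output classes are assumptions chosen for illustration:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// A small CNN: convolution -> max pooling -> dense -> softmax output.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new ConvolutionLayer.Builder(5, 5)
                .nIn(1)                              // single input channel (assumption)
                .nOut(20)
                .activation(Activation.RELU)
                .build())
        .layer(1, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                .kernelSize(2, 2)
                .stride(2, 2)
                .build())
        .layer(2, new DenseLayer.Builder()
                .nOut(100)
                .activation(Activation.RELU)
                .build())
        .layer(3, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nOut(10)                            // 10 output classes (assumption)
                .activation(Activation.SOFTMAX)
                .build())
        .setInputType(InputType.convolutionalFlat(28, 28, 1))
        .build();
```

Because setInputType is supplied, the nIn values of the downstream layers are inferred from the preceding layers' outputs, so only the first convolution's input depth is stated explicitly here.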
Class | Description
---|---
Layer | A neural network layer.
Layer.Builder | 
NoParamLayer | 
Class | Description
---|---
BaseLayer | A neural network layer.
BaseLayer.Builder | 
FeedForwardLayer | Base class for feed-forward layer configurations.
FeedForwardLayer.Builder | 
Layer | A neural network layer.
Layer.Builder | 
Class | Description
---|---
Layer | A neural network layer.
Layer.Builder | 
Class | Description
---|---
BaseLayer | A neural network layer.
BaseLayer.Builder | 
BaseRecurrentLayer | 
BaseRecurrentLayer.Builder | 
FeedForwardLayer | Base class for feed-forward layer configurations.
FeedForwardLayer.Builder | 
Layer | A neural network layer.
Layer.Builder | 
Class | Description
---|---
Layer | A neural network layer.
Layer.Builder | 
Class | Description
---|---
Layer | A neural network layer.
Class | Description
---|---
BaseLayer | A neural network layer.
BaseLayer.Builder | 
BasePretrainNetwork | 
BasePretrainNetwork.Builder | 
FeedForwardLayer | Base class for feed-forward layer configurations.
FeedForwardLayer.Builder | 
Layer | A neural network layer.
Layer.Builder | 
Class | Description
---|---
Layer | A neural network layer.
Class | Description
---|---
BaseLayer | A neural network layer.
BaseLayer.Builder | 
BaseOutputLayer | 
BaseOutputLayer.Builder | 
FeedForwardLayer | Base class for feed-forward layer configurations.
FeedForwardLayer.Builder | 
Layer | A neural network layer.
Layer.Builder | 
Class | Description
---|---
BaseLayer | A neural network layer.
Layer | A neural network layer.
Class | Description
---|---
Layer | A neural network layer.
Class | Description
---|---
BaseOutputLayer | 
BasePretrainNetwork | 
Layer | A neural network layer.
Class | Description
---|---
ConvolutionLayer.AlgoMode | The "PREFER_FASTEST" mode will pick the fastest algorithm for the specified parameters from the ConvolutionLayer.FwdAlgo, ConvolutionLayer.BwdFilterAlgo, and ConvolutionLayer.BwdDataAlgo lists, but these algorithms may be very memory intensive; if unexpected errors occur when using cuDNN, try the "NO_WORKSPACE" mode.
ConvolutionLayer.BwdDataAlgo | The backward data algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
ConvolutionLayer.BwdFilterAlgo | The backward filter algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
ConvolutionLayer.FwdAlgo | The forward algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
Class | Description
---|---
PoolingType | Pooling type enumeration: MAX, AVG, SUM, PNORM (see the sketch after this table).
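PoolingType is consumed by the pooling layers listed elsewhere on this page, such as GlobalPoolingLayer and SubsamplingLayer. A minimal sketch, with kernel and stride values chosen purely for illustration:

```java
import org.deeplearning4j.nn.conf.layers.GlobalPoolingLayer;
import org.deeplearning4j.nn.conf.layers.PoolingType;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;

// Global pooling: pools over time for RNN input, or over both spatial
// dimensions for CNN input, reducing each example to a fixed-size vector.
GlobalPoolingLayer globalMax = new GlobalPoolingLayer.Builder(PoolingType.MAX)
        .build();

// Local 2x2 max pooling, as typically used between convolution layers.
SubsamplingLayer pool = new SubsamplingLayer.Builder(PoolingType.MAX)
        .kernelSize(2, 2)
        .stride(2, 2)
        .build();
```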
Class | Description
---|---
Layer | A neural network layer.
Class | Description
---|---
AbstractLSTM | LSTM recurrent net, based on Graves: Supervised Sequence Labelling with Recurrent Neural Networks, http://www.cs.toronto.edu/~graves/phd.pdf
FeedForwardLayer | Base class for feed-forward layer configurations.
GravesBidirectionalLSTM | Deprecated. Use Bidirectional instead. With the Bidirectional layer wrapper you can make any recurrent layer bidirectional, in particular GravesLSTM. Note that this layer adds the output of both directions, which translates into "ADD" mode in Bidirectional. Usage: .layer(new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build())) (a fuller sketch follows this table).
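The deprecation note above already shows the Bidirectional wrapper in use; the sketch below spells it out with the non-deprecated LSTM layer. The nIn/nOut sizes are illustrative assumptions:

```java
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;

// Wrap any recurrent layer to make it bidirectional. Mode.ADD sums the
// forward and backward outputs, matching the behaviour of the deprecated
// GravesBidirectionalLSTM.
Bidirectional biLstm = new Bidirectional(Bidirectional.Mode.ADD,
        new LSTM.Builder()
                .nIn(100)    // input size (assumption)
                .nOut(200)   // hidden size (assumption)
                .build());
```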
Class | Description
---|---
Layer | A neural network layer.
Class | Description
---|---
ConvolutionLayer.AlgoMode | The "PREFER_FASTEST" mode will pick the fastest algorithm for the specified parameters from the ConvolutionLayer.FwdAlgo, ConvolutionLayer.BwdFilterAlgo, and ConvolutionLayer.BwdDataAlgo lists, but these algorithms may be very memory intensive; if unexpected errors occur when using cuDNN, try the "NO_WORKSPACE" mode.
Layer | A neural network layer.
Class | Description
---|---
PoolingType | Pooling type enumeration: MAX, AVG, SUM, PNORM.