Modifier and Type | Class | Description |
---|---|---|
class | BaseOutputLayer&lt;LayerConfT extends BaseOutputLayer&gt; | Base output layer supporting a range of objective (loss) functions. |
class | BasePretrainNetwork&lt;LayerConfT extends BasePretrainNetwork&gt; | Base class for any neural network used as a layer in a deep network. |
class | DropoutLayer | Layer that applies dropout to its input activations. |
class | LossLayer | A flexible output "layer" that applies a loss function to its input, without any MLP logic (see the configuration sketch below). |
class | OutputLayer | Output layer supporting a range of objective (loss) functions. |
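The practical difference between OutputLayer and LossLayer is that the former bundles a dense nIn x nOut transform with the loss, while the latter has no parameters at all. Below is a minimal configuration sketch using the corresponding builder classes from org.deeplearning4j.nn.conf.layers; builder details can vary between DL4J versions, so treat the exact calls as illustrative rather than definitive:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.LossLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class OutputVsLossLayerSketch {
    public static void main(String[] args) {
        // OutputLayer: dense (nIn x nOut) transform plus loss in a single layer
        MultiLayerConfiguration withOutputLayer = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new DenseLayer.Builder().nIn(784).nOut(128)
                        .activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(128).nOut(10).activation(Activation.SOFTMAX).build())
                .build();

        // LossLayer: no parameters of its own, so the preceding layer must
        // already produce one activation per class
        MultiLayerConfiguration withLossLayer = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new DenseLayer.Builder().nIn(784).nOut(10)
                        .activation(Activation.IDENTITY).build())
                .layer(new LossLayer.Builder()
                        .lossFunction(LossFunction.NEGATIVELOGLIKELIHOOD)
                        .activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(withLossLayer);
        net.init();
        // All parameters come from the DenseLayer; the LossLayer adds none
        System.out.println("Total parameters: " + net.numParams());
    }
}
```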
Modifier and Type | Class | Description |
---|---|---|
class | CnnLossLayer | Convolutional neural network loss layer. Handles calculation of gradients etc. for various objective functions. NOTE: CnnLossLayer does not have any parameters. |
class | Convolution1DLayer | 1D (temporal) convolutional layer. |
class | Convolution3DLayer | 3D convolution layer implementation. |
class | ConvolutionLayer | Standard 2D convolution layer implementation. |
class | Deconvolution2DLayer | 2D deconvolution layer implementation. |
class | DepthwiseConvolution2DLayer | 2D depthwise convolution layer implementation. |
class | SeparableConvolution2DLayer | 2D separable convolution layer implementation. Separable convolutions split a regular convolution operation into two simpler operations, which are usually computationally more efficient (see the sketch below). |
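The efficiency claim is easy to quantify with parameter counts: a separable convolution replaces the full k x k x cIn filter per output channel with one k x k depthwise filter per input channel plus a 1 x 1 pointwise convolution that mixes channels. A back-of-the-envelope sketch in plain Java (bias terms ignored; the sizes are made-up examples, not taken from any DL4J layer):

```java
public class SeparableConvParams {
    public static void main(String[] args) {
        int k = 3, cIn = 64, cOut = 128;  // kernel size, input/output channels

        // Standard 2D convolution: one k x k filter per (input, output) channel pair
        long standard = (long) k * k * cIn * cOut;

        // Separable convolution: a k x k depthwise filter per input channel,
        // followed by a 1 x 1 pointwise convolution mixing channels
        long separable = (long) k * k * cIn + (long) cIn * cOut;

        System.out.println("standard:  " + standard);   // 73728
        System.out.println("separable: " + separable);  // 8768
    }
}
```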
Modifier and Type | Class | Description |
---|---|---|
class | PReLU | Parametrized Rectified Linear Unit (PReLU): f(x) = alpha * x for x &lt; 0, f(x) = x for x &gt;= 0. alpha has the same shape as x and is a learned parameter (see the sketch below). |
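Since the table gives the PReLU function in closed form, a scalar sketch makes it concrete. Note that in the actual layer alpha is a learned parameter with the same shape as the input, not the fixed constant used here:

```java
public class PReluSketch {
    // PReLU: f(x) = alpha * x for x < 0, f(x) = x for x >= 0
    static double prelu(double x, double alpha) {
        return x < 0 ? alpha * x : x;
    }

    public static void main(String[] args) {
        double alpha = 0.25;  // fixed here purely for illustration
        System.out.println(prelu(-2.0, alpha)); // -0.5
        System.out.println(prelu(3.0, alpha));  // 3.0
    }
}
```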
Modifier and Type | Class | Description |
---|---|---|
class | AutoEncoder | Autoencoder. |
Modifier and Type | Class | Description |
---|---|---|
class | DenseLayer | Fully connected (dense) layer. |
Modifier and Type | Class | Description |
---|---|---|
class | ElementWiseMultiplicationLayer | Elementwise multiplication layer with weights: implements out = activationFn(input .* w + b), where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. The input and output sizes of this layer are the same (see the sketch below). |
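A minimal sketch of that forward pass using plain ND4J operations, with tanh standing in for activationFn; the weight and bias values are invented for illustration (in the real layer they are learned):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

public class ElementWiseSketch {
    public static void main(String[] args) {
        INDArray input = Nd4j.create(new double[]{1.0, -2.0, 0.5});
        INDArray w = Nd4j.create(new double[]{0.1, 0.2, 0.3});   // learnable in the real layer
        INDArray b = Nd4j.create(new double[]{0.0, 0.1, -0.1});  // learnable in the real layer

        // out = activationFn(input .* w + b), using tanh as the activation
        INDArray out = Transforms.tanh(input.mul(w).add(b));
        System.out.println(out);  // same shape as the input
    }
}
```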
Modifier and Type | Class | Description |
---|---|---|
class | EmbeddingLayer | Embedding layer: a feed-forward layer that expects a single integer per example as input (a class number, in the range 0 to numClasses - 1). See the configuration sketch below. |
class | EmbeddingSequenceLayer | Embedding layer for sequences: a feed-forward layer that expects a fixed-length sequence (inputLength) of integers/indices per example as input, each in the range 0 to numClasses - 1. |
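As a usage sketch, an embedding layer is typically the first layer of the network and is mathematically equivalent to a DenseLayer fed one-hot vectors, but takes the integer index directly, which is much cheaper for large vocabularies. Builder details may differ slightly between DL4J versions:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class EmbeddingSketch {
    public static void main(String[] args) {
        int numClasses = 1000;   // input indices must lie in [0, numClasses - 1]
        int embeddingDim = 64;

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // One weight row per class; the integer input selects a row,
                // avoiding an explicit one-hot vector
                .layer(new EmbeddingLayer.Builder().nIn(numClasses).nOut(embeddingDim)
                        .activation(Activation.IDENTITY).build())
                .layer(new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(embeddingDim).nOut(10).activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        // Most parameters sit in the numClasses x embeddingDim embedding table
        System.out.println("Total parameters: " + net.numParams());
    }
}
```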
Modifier and Type | Class | Description |
---|---|---|
class | BatchNormalization | Batch normalization layer (see the placement sketch below). |
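A common placement sketch: batch normalization between a layer's linear transform and its nonlinearity (one conventional pattern, not the only valid one). The builder calls are illustrative and may vary by DL4J version:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.ActivationLayer;
import org.deeplearning4j.nn.conf.layers.BatchNormalization;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class BatchNormSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // Linear transform only; the nonlinearity is applied after BN
                .layer(new DenseLayer.Builder().nIn(784).nOut(256)
                        .activation(Activation.IDENTITY).build())
                // Normalizes activations per mini-batch, then applies a learned
                // scale (gamma) and shift (beta)
                .layer(new BatchNormalization.Builder().nIn(256).nOut(256).build())
                .layer(new ActivationLayer.Builder().activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(256).nOut(10).activation(Activation.SOFTMAX).build())
                .build();

        System.out.println("Layers configured: " + conf.getConfs().size()); // 4
    }
}
```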
Modifier and Type | Class | Description |
---|---|---|
class | OCNNOutputLayer | Layer implementation for the one-class neural network (OCNN) output layer; see the OCNNOutputLayer configuration class for details. |
Modifier and Type | Class | Description |
---|---|---|
class | BaseRecurrentLayer&lt;LayerConfT extends BaseLayer&gt; | Base class for recurrent layers. |
class | GravesBidirectionalLSTM | Bidirectional LSTM layer implementation. RNN tutorial (read this first): http://deeplearning4j.org/usingrnns.html |
class | GravesLSTM | Deprecated; will eventually be removed. Use LSTM instead, which has similar prediction accuracy but supports CuDNN for faster network training on CUDA (Nvidia) GPUs. |
class | LSTM | LSTM layer implementation. |
class | RnnLossLayer | Recurrent neural network loss layer. Handles calculation of gradients etc. for various objective functions. NOTE: Unlike RnnOutputLayer, this RnnLossLayer does not have any parameters, i.e., there is no time-distributed dense component here. |
class | RnnOutputLayer | Recurrent neural network output layer. Handles calculation of gradients etc. for various objective functions. Functionally the same as OutputLayer, but handles output and label reshaping automatically. Input and output activations are the same as for other RNN layers: 3 dimensions, with shapes [miniBatchSize, nIn, timeSeriesLength] and [miniBatchSize, nOut, timeSeriesLength] respectively (see the shape example below). |
class | SimpleRnn | Simple RNN, a.k.a. "vanilla" RNN: the simplest type of recurrent neural network layer. |
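To see the 3-dimensional activation shapes in practice, here is a small end-to-end sketch with an LSTM feeding an RnnOutputLayer; the layer sizes and the MCXENT loss are arbitrary choices for illustration:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class RnnShapesSketch {
    public static void main(String[] args) {
        int nIn = 8, nHidden = 16, nOut = 4;

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new LSTM.Builder().nIn(nIn).nOut(nHidden)
                        .activation(Activation.TANH).build())
                .layer(new RnnOutputLayer.Builder(LossFunction.MCXENT)
                        .nIn(nHidden).nOut(nOut).activation(Activation.SOFTMAX).build())
                .build();
        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();

        // RNN activations are 3d: [miniBatchSize, nIn, timeSeriesLength]
        INDArray input = Nd4j.rand(new int[]{32, nIn, 20});
        INDArray output = net.output(input);
        // Output keeps the time structure: [miniBatchSize, nOut, timeSeriesLength]
        System.out.println(java.util.Arrays.toString(output.shape())); // [32, 4, 20]
    }
}
```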
Modifier and Type | Method | Description |
---|---|---|
static FwdPassReturn | LSTMHelpers.activateHelper(BaseLayer layer, NeuralNetConfiguration conf, IActivation gateActivationFn, INDArray input, INDArray recurrentWeights, INDArray originalInputWeights, INDArray biases, boolean training, INDArray originalPrevOutputActivations, INDArray originalPrevMemCellState, boolean forBackprop, boolean forwards, String inputWeightKey, INDArray maskArray, boolean hasPeepholeConnections, LSTMHelper helper, CacheMode cacheMode, LayerWorkspaceMgr workspaceMgr) | Returns a FwdPassReturn object with activations/INDArrays. |
Modifier and Type | Class | Description |
---|---|---|
class | CenterLossOutputLayer | Center loss is similar to triplet loss except that it enforces intraclass consistency and doesn't require feed-forward of multiple examples (see the penalty sketch below). |
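For reference, the usual center-loss formulation adds a penalty term pulling each example's embedding toward a running per-class center. A plain-Java sketch of that penalty; the exact weighting and center-update rule inside CenterLossOutputLayer may differ:

```java
public class CenterLossSketch {
    // Center loss adds lambda/2 * ||f(x) - c_y||^2 to the classification loss,
    // pulling each example's embedding f(x) toward its class center c_y
    static double centerLossPenalty(double[] embedding, double[] classCenter, double lambda) {
        double sq = 0.0;
        for (int i = 0; i < embedding.length; i++) {
            double d = embedding[i] - classCenter[i];
            sq += d * d;
        }
        return 0.5 * lambda * sq;
    }

    public static void main(String[] args) {
        double[] fx = {1.0, 2.0};
        double[] cy = {0.5, 1.5};  // running center for the example's class
        System.out.println(centerLossPenalty(fx, cy, 1e-3)); // 2.5e-4
    }
}
```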