| Modifier and Type | Method and Description |
|---|---|
| List<String> | ParamInitializer.biasKeys(Layer layer) - Bias parameter keys, given the layer configuration |
| boolean | ParamInitializer.isBiasParam(Layer layer, String key) - Is the specified parameter a bias? |
| boolean | ParamInitializer.isWeightParam(Layer layer, String key) - Is the specified parameter a weight? |
| int | ParamInitializer.numParams(Layer layer) |
| List<String> | ParamInitializer.paramKeys(Layer layer) - Get a list of all parameter keys, given the layer configuration |
| List<String> | ParamInitializer.weightKeys(Layer layer) - Weight parameter keys, given the layer configuration |
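For orientation, a minimal sketch of how these query methods are typically used against a layer configuration. It assumes the DefaultParamInitializer singleton and DL4J's conventional "W"/"b" key names; the layer sizes are illustrative only:

```java
import java.util.List;

import org.deeplearning4j.nn.api.ParamInitializer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.Layer;
import org.deeplearning4j.nn.params.DefaultParamInitializer;

public class ParamKeyInspection {
    public static void main(String[] args) {
        // A plain fully connected layer configuration: one weight matrix, one bias vector
        Layer config = new DenseLayer.Builder().nIn(10).nOut(5).build();

        ParamInitializer init = DefaultParamInitializer.getInstance();
        List<String> all = init.paramKeys(config);       // expected: [W, b]
        List<String> weights = init.weightKeys(config);  // expected: [W]
        List<String> biases = init.biasKeys(config);     // expected: [b]

        System.out.println(all + " " + weights + " " + biases);
        System.out.println(init.isWeightParam(config, "W")); // true
        System.out.println(init.isBiasParam(config, "b"));   // true
    }
}
```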
| Modifier and Type | Field and Description |
|---|---|
| protected Layer | NeuralNetConfiguration.layer |
| protected Layer | NeuralNetConfiguration.Builder.layer |
| Modifier and Type | Method and Description |
|---|---|
| ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.addLayer(String layerName, Layer layer, InputPreProcessor preProcessor, String... layerInputs) - Add a layer and an InputPreProcessor, with the specified name and specified inputs |
| ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.addLayer(String layerName, Layer layer, String... layerInputs) - Add a layer, with no InputPreProcessor, with the specified name and specified inputs |
| NeuralNetConfiguration.ListBuilder | NeuralNetConfiguration.ListBuilder.layer(int ind, Layer layer) |
| NeuralNetConfiguration.ListBuilder | NeuralNetConfiguration.ListBuilder.layer(Layer layer) |
| NeuralNetConfiguration.Builder | NeuralNetConfiguration.Builder.layer(Layer layer) - Layer class |
| ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.layer(String layerName, Layer layer, InputPreProcessor preProcessor, String... layerInputs) - Add a layer and an InputPreProcessor, with the specified name and specified inputs |
| ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.layer(String layerName, Layer layer, String... layerInputs) - Add a layer, with no InputPreProcessor, with the specified name and specified inputs |
| NeuralNetConfiguration.ListBuilder | NeuralNetConfiguration.Builder.list(Layer... layers) - Create a ListBuilder (for creating a MultiLayerConfiguration) with the specified layers; usage: see the sketch after this table |
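A hedged sketch of both builder paths, using standard DL4J layer classes (DenseLayer, OutputLayer); layer names and sizes are placeholders, not prescriptive values:

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;

public class BuilderSketch {
    public static void main(String[] args) {
        // list(Layer...): stack layers in order to get a MultiLayerConfiguration
        MultiLayerConfiguration mlc = new NeuralNetConfiguration.Builder()
                .list(new DenseLayer.Builder().nIn(784).nOut(100).build(),
                      new OutputLayer.Builder().nIn(100).nOut(10).build())
                .build();

        // addLayer(name, layer, inputs...): wire named layers into a graph
        ComputationGraphConfiguration cgc = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("in")
                .addLayer("hidden", new DenseLayer.Builder().nIn(784).nOut(100).build(), "in")
                .addLayer("out", new OutputLayer.Builder().nIn(100).nOut(10).build(), "hidden")
                .setOutputs("out")
                .build();
    }
}
```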
| Modifier and Type | Class and Description |
|---|---|
| class | AbstractLSTM - LSTM recurrent net, based on Graves: Supervised Sequence Labelling with Recurrent Neural Networks, http://www.cs.toronto.edu/~graves/phd.pdf |
| class | ActivationLayer |
| class | AutoEncoder - Autoencoder |
| class | BaseLayer - A neural network layer |
| class | BaseOutputLayer |
| class | BasePretrainNetwork |
| class | BaseRecurrentLayer |
| class | BaseUpsamplingLayer - Upsampling base layer |
| class | BatchNormalization - Batch normalization configuration |
| class | CenterLossOutputLayer - Center loss is similar to triplet loss, except that it enforces intraclass consistency and doesn't require feed forward of multiple examples |
| class | CnnLossLayer - Convolutional Neural Network Loss Layer; handles calculation of gradients etc. for various objective functions. Note: CnnLossLayer does not have any parameters |
| class | Convolution1D - 1D convolution layer |
| class | Convolution1DLayer - 1D (temporal) convolutional layer |
| class | Convolution2D - 2D convolution layer |
| class | ConvolutionLayer |
| class | Deconvolution2D - 2D deconvolution layer configuration; deconvolutions are also known as transpose convolutions or fractionally strided convolutions |
| class | DenseLayer - Dense layer: fully connected feed-forward layer trainable by backprop |
| class | DropoutLayer |
| class | EmbeddingLayer - Embedding layer: a feed-forward layer that expects single integers per example as input (class numbers, in the range 0 to numClass-1) |
| class | FeedForwardLayer |
| class | GlobalPoolingLayer - Global pooling layer: used to do pooling over time for RNNs, and 2D pooling for CNNs. Supports the following PoolingTypes: SUM, AVG, MAX, PNORM. Global pooling layers can also handle mask arrays when dealing with variable-length inputs |
| class | GravesBidirectionalLSTM - Deprecated: use Bidirectional instead. With the Bidirectional layer wrapper you can make any recurrent layer bidirectional, in particular GravesLSTM. Note that this layer adds the output of both directions, which translates into "ADD" mode in Bidirectional. Usage: .layer(new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build())) |
| class | GravesLSTM - LSTM recurrent net, based on Graves: Supervised Sequence Labelling with Recurrent Neural Networks, http://www.cs.toronto.edu/~graves/phd.pdf |
| class | LocalResponseNormalization |
| class | LossLayer - LossLayer is a flexible output "layer" that applies a loss function to an input, without MLP logic |
| class | LSTM - LSTM recurrent net without peephole connections |
| class | NoParamLayer |
| class | OutputLayer - Output layer with different objective co-occurrences for different objectives |
| class | Pooling1D - 1D pooling layer |
| class | Pooling2D - 2D pooling layer |
| class | RnnLossLayer - Recurrent Neural Network Loss Layer; handles calculation of gradients etc. for various objective functions. Note: unlike RnnOutputLayer, RnnLossLayer does not have any parameters - i.e., there is no time-distributed dense component |
| class | RnnOutputLayer |
| class | SeparableConvolution2D - 2D separable convolution layer configuration |
| class | SpaceToBatchLayer - Space-to-batch utility layer configuration for convolutional input types |
| class | SpaceToDepthLayer - Space-to-depth utility layer configuration for convolutional input types |
| class | Subsampling1DLayer - 1D (temporal) subsampling layer |
| class | SubsamplingLayer - Subsampling layer, also referred to as pooling in convolutional neural nets; supports the following pooling types: MAX, AVG, SUM, PNORM, NONE |
| class | Upsampling1D - 1D upsampling layer |
| class | Upsampling2D - 2D upsampling layer |
| class | ZeroPadding1DLayer - Zero padding 1D layer for convolutional neural networks |
| class | ZeroPaddingLayer - Zero padding layer for convolutional neural networks |
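As a quick orientation, a hedged sketch of how a few of the configuration classes above are typically instantiated via their fluent builders; the kernel, stride and size values are illustrative placeholders:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.Layer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;

public class LayerConfigSketch {
    public static void main(String[] args) {
        // Each subclass above is a configuration object, built with a Builder
        Layer conv = new ConvolutionLayer.Builder(5, 5)   // 5x5 kernel
                .nIn(1).nOut(20).stride(1, 1).build();
        Layer pool = new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                .kernelSize(2, 2).stride(2, 2).build();
        Layer dense = new DenseLayer.Builder().nIn(320).nOut(50).build();
    }
}
```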
| Modifier and Type | Method and Description |
|---|---|
| abstract <E extends Layer> E | Layer.Builder.build() |
| Modifier and Type | Method and Description |
|---|---|
| Layer | Layer.clone() |
| Modifier and Type | Method and Description |
|---|---|
| static void | LayerValidation.generalValidation(String layerName, Layer layer, IDropout iDropout, double l2, double l2Bias, double l1, double l1Bias, Distribution dist, List<LayerConstraint> allParamConstraints, List<LayerConstraint> weightConstraints, List<LayerConstraint> biasConstraints) |
| static void | LayerValidation.generalValidation(String layerName, Layer layer, IDropout iDropOut, Double l2, Double l2Bias, Double l1, Double l1Bias, Distribution dist, List<LayerConstraint> allParamConstraints, List<LayerConstraint> weightConstraints, List<LayerConstraint> biasConstraints) |
| Modifier and Type | Class and Description |
|---|---|
| class | Cropping2D - Cropping layer for convolutional (2D) neural networks |
| Modifier and Type | Class and Description |
|---|---|
| class | ElementWiseMultiplicationLayer - Elementwise multiplication layer with weights: implements out = activationFn(input .* w + b), where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. Note that the input and output sizes of this layer are the same |
| class | FrozenLayer |
| Modifier and Type | Field and Description |
|---|---|
| protected Layer | FrozenLayer.layer |
| Modifier and Type | Method and Description |
|---|---|
| Layer | FrozenLayer.clone() |
| Modifier and Type | Method and Description |
|---|---|
| FrozenLayer.Builder | FrozenLayer.Builder.layer(Layer layer) |
| Constructor and Description |
|---|
| FrozenLayer(Layer layer) |
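A hedged sketch of wrapping a layer configuration so its parameters are left unchanged during training; the inner DenseLayer and its sizes are illustrative:

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.Layer;
import org.deeplearning4j.nn.conf.layers.misc.FrozenLayer;

public class FrozenLayerSketch {
    public static void main(String[] args) {
        Layer inner = new DenseLayer.Builder().nIn(100).nOut(50).build();

        // Constructor form, as listed above
        Layer frozen = new FrozenLayer(inner);

        // Equivalent builder form: FrozenLayer.Builder.layer(Layer)
        Layer frozen2 = new FrozenLayer.Builder().layer(inner).build();
    }
}
```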
| Modifier and Type | Class and Description |
|---|---|
| class | Yolo2OutputLayer - Output (loss) layer for the YOLOv2 object detection model, based on the papers "YOLO9000: Better, Faster, Stronger" (Redmon & Farhadi, 2016, https://arxiv.org/abs/1612.08242) and "You Only Look Once: Unified, Real-Time Object Detection" (Redmon et al.) |
| Modifier and Type | Class and Description |
|---|---|
| class | Bidirectional - Bidirectional is a "wrapper" layer: it wraps any uni-directional RNN layer to make it bidirectional. Multiple modes are supported; these specify how the activations from the forward and backward RNNs should be combined |
| class | LastTimeStep - LastTimeStep is a "wrapper" layer: it wraps any RNN layer, extracts the last time step during the forward pass, and returns it as a row vector (per example) |
| class | SimpleRnn - Simple RNN, a.k.a. "vanilla" RNN: the simplest type of recurrent neural network layer |
| Modifier and Type | Method and Description |
|---|---|
| Layer | LastTimeStep.getUnderlying() |
| Modifier and Type | Method and Description |
|---|---|
| Bidirectional.Builder | Bidirectional.Builder.rnnLayer(Layer layer) |
| Constructor and Description |
|---|
| Bidirectional(Bidirectional.Mode mode, Layer layer) - Create a Bidirectional wrapper for the specified layer |
| Bidirectional(Layer layer) - Create a Bidirectional wrapper, with the default Mode (CONCAT), for the specified layer |
| LastTimeStep(Layer underlying) |
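A hedged sketch of both wrappers around an LSTM configuration (sizes are illustrative). This is also the replacement pattern the deprecated GravesBidirectionalLSTM entry above points to:

```java
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.Layer;
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;
import org.deeplearning4j.nn.conf.layers.recurrent.LastTimeStep;

public class RnnWrapperSketch {
    public static void main(String[] args) {
        // ADD mode: forward and backward activations are summed
        Layer bi = new Bidirectional(Bidirectional.Mode.ADD,
                new LSTM.Builder().nIn(10).nOut(20).build());

        // Default mode (CONCAT): forward and backward activations are concatenated
        Layer biDefault = new Bidirectional(new LSTM.Builder().nIn(10).nOut(20).build());

        // Keep only the final time step of the wrapped RNN's output
        Layer last = new LastTimeStep(new LSTM.Builder().nIn(10).nOut(20).build());
    }
}
```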
| Modifier and Type | Class and Description |
|---|---|
| class | AbstractSameDiffLayer |
| class | BaseSameDiffLayer - A base layer for implementing Deeplearning4j layers using SameDiff |
| Modifier and Type | Class and Description |
|---|---|
| class | MaskLayer - MaskLayer applies the mask array to the forward-pass activations and backward-pass gradients passing through this layer |
| class | MaskZeroLayer |
| Constructor and Description |
|---|
| MaskZeroLayer(Layer underlying) |
| Modifier and Type | Class and Description |
|---|---|
| class | VariationalAutoencoder - Variational autoencoder layer |
| Modifier and Type | Class and Description |
|---|---|
| class | BaseWrapperLayer - Base wrapper layer: the idea is to pass all methods through to the underlying layer, and selectively override them as required |
| Modifier and Type | Field and Description |
|---|---|
| protected Layer | BaseWrapperLayer.underlying |
| Constructor and Description |
|---|
| BaseWrapperLayer(Layer underlying) |
| Modifier and Type | Method and Description |
|---|---|
| protected boolean | BaseNetConfigDeserializer.requiresDropoutFromLegacy(Layer[] layers) |
| protected boolean | BaseNetConfigDeserializer.requiresIUpdaterFromLegacy(Layer[] layers) |
| Modifier and Type | Class and Description |
|---|---|
| class | AbstractLayer<LayerConfT extends Layer> - A layer with input and output, no parameters or gradients |
| Modifier and Type | Method and Description |
|---|---|
| List<String> | BidirectionalParamInitializer.biasKeys(Layer layer) |
| List<String> | SeparableConvolutionParamInitializer.biasKeys(Layer layer) |
| List<String> | WrapperLayerParamInitializer.biasKeys(Layer layer) |
| List<String> | DefaultParamInitializer.biasKeys(Layer layer) |
| List<String> | LSTMParamInitializer.biasKeys(Layer layer) |
| List<String> | SimpleRnnParamInitializer.biasKeys(Layer layer) |
| List<String> | GravesLSTMParamInitializer.biasKeys(Layer layer) |
| List<String> | VariationalAutoencoderParamInitializer.biasKeys(Layer layer) |
| List<String> | EmptyParamInitializer.biasKeys(Layer layer) |
| List<String> | SameDiffParamInitializer.biasKeys(Layer layer) |
| List<String> | BatchNormalizationParamInitializer.biasKeys(Layer layer) |
| List<String> | FrozenLayerParamInitializer.biasKeys(Layer layer) |
| List<String> | GravesBidirectionalLSTMParamInitializer.biasKeys(Layer layer) |
| List<String> | ConvolutionParamInitializer.biasKeys(Layer layer) |
| protected boolean | DefaultParamInitializer.hasBias(Layer layer) |
| boolean | BidirectionalParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | SeparableConvolutionParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | WrapperLayerParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | DefaultParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | LSTMParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | SimpleRnnParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | GravesLSTMParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | VariationalAutoencoderParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | EmptyParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | SameDiffParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | BatchNormalizationParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | FrozenLayerParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | GravesBidirectionalLSTMParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | ConvolutionParamInitializer.isBiasParam(Layer layer, String key) |
| boolean | BidirectionalParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | SeparableConvolutionParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | WrapperLayerParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | DefaultParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | LSTMParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | SimpleRnnParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | GravesLSTMParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | VariationalAutoencoderParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | EmptyParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | SameDiffParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | BatchNormalizationParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | FrozenLayerParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | GravesBidirectionalLSTMParamInitializer.isWeightParam(Layer layer, String key) |
| boolean | ConvolutionParamInitializer.isWeightParam(Layer layer, String key) |
| int | BidirectionalParamInitializer.numParams(Layer layer) |
| int | ElementWiseParamInitializer.numParams(Layer layer) |
| int | SeparableConvolutionParamInitializer.numParams(Layer l) |
| int | WrapperLayerParamInitializer.numParams(Layer layer) |
| int | DefaultParamInitializer.numParams(Layer l) |
| int | LSTMParamInitializer.numParams(Layer l) |
| int | SimpleRnnParamInitializer.numParams(Layer layer) |
| int | GravesLSTMParamInitializer.numParams(Layer l) |
| int | EmptyParamInitializer.numParams(Layer layer) |
| int | SameDiffParamInitializer.numParams(Layer layer) |
| int | BatchNormalizationParamInitializer.numParams(Layer l) |
| int | FrozenLayerParamInitializer.numParams(Layer layer) |
| int | GravesBidirectionalLSTMParamInitializer.numParams(Layer l) |
| int | ConvolutionParamInitializer.numParams(Layer l) |
| List<String> | BidirectionalParamInitializer.paramKeys(Layer layer) |
| List<String> | SeparableConvolutionParamInitializer.paramKeys(Layer layer) |
| List<String> | WrapperLayerParamInitializer.paramKeys(Layer layer) |
| List<String> | DefaultParamInitializer.paramKeys(Layer layer) |
| List<String> | LSTMParamInitializer.paramKeys(Layer layer) |
| List<String> | SimpleRnnParamInitializer.paramKeys(Layer layer) |
| List<String> | GravesLSTMParamInitializer.paramKeys(Layer layer) |
| List<String> | VariationalAutoencoderParamInitializer.paramKeys(Layer l) |
| List<String> | EmptyParamInitializer.paramKeys(Layer layer) |
| List<String> | SameDiffParamInitializer.paramKeys(Layer layer) |
| List<String> | BatchNormalizationParamInitializer.paramKeys(Layer layer) |
| List<String> | FrozenLayerParamInitializer.paramKeys(Layer layer) |
| List<String> | GravesBidirectionalLSTMParamInitializer.paramKeys(Layer layer) |
| List<String> | ConvolutionParamInitializer.paramKeys(Layer layer) |
| List<String> | BidirectionalParamInitializer.weightKeys(Layer layer) |
| List<String> | SeparableConvolutionParamInitializer.weightKeys(Layer layer) |
| List<String> | WrapperLayerParamInitializer.weightKeys(Layer layer) |
| List<String> | DefaultParamInitializer.weightKeys(Layer layer) |
| List<String> | LSTMParamInitializer.weightKeys(Layer layer) |
| List<String> | SimpleRnnParamInitializer.weightKeys(Layer layer) |
| List<String> | GravesLSTMParamInitializer.weightKeys(Layer layer) |
| List<String> | VariationalAutoencoderParamInitializer.weightKeys(Layer layer) |
| List<String> | EmptyParamInitializer.weightKeys(Layer layer) |
| List<String> | SameDiffParamInitializer.weightKeys(Layer layer) |
| List<String> | BatchNormalizationParamInitializer.weightKeys(Layer layer) |
| List<String> | FrozenLayerParamInitializer.weightKeys(Layer layer) |
| List<String> | GravesBidirectionalLSTMParamInitializer.weightKeys(Layer layer) |
| List<String> | ConvolutionParamInitializer.weightKeys(Layer layer) |
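Each implementation reports the keys for its own layer type. A hedged sketch, assuming the LSTMParamInitializer singleton and DL4J's conventional LSTM key names ("W" for input weights, "RW" for recurrent weights, "b" for biases):

```java
import java.util.List;

import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.Layer;
import org.deeplearning4j.nn.params.LSTMParamInitializer;

public class LstmKeySketch {
    public static void main(String[] args) {
        Layer lstm = new LSTM.Builder().nIn(10).nOut(20).build();

        LSTMParamInitializer init = LSTMParamInitializer.getInstance();
        List<String> keys = init.paramKeys(lstm);     // expected: [W, RW, b]
        List<String> weights = init.weightKeys(lstm); // expected: [W, RW]
        List<String> biases = init.biasKeys(lstm);    // expected: [b]
        System.out.println(keys + " " + weights + " " + biases);
    }
}
```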
| Modifier and Type | Method and Description |
|---|---|
| TransferLearning.Builder | TransferLearning.Builder.addLayer(Layer layer) - Add layers to the net; required if layers are removed |
| TransferLearning.GraphBuilder | TransferLearning.GraphBuilder.addLayer(String layerName, Layer layer, InputPreProcessor preProcessor, String... layerInputs) - Add a layer with a specified preprocessor |
| TransferLearning.GraphBuilder | TransferLearning.GraphBuilder.addLayer(String layerName, Layer layer, String... layerInputs) - Add a layer of the specified configuration to the computation graph |
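A hedged sketch of the typical transfer-learning flow these methods support. The pretrained networks, vertex names ("oldOut", "features"), layer sizes and updater settings below are all placeholders:

```java
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.transferlearning.FineTuneConfiguration;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.nd4j.linalg.learning.config.Adam;

public class TransferSketch {

    // MultiLayerNetwork path: drop the old output layer, then add a new one.
    // addLayer(Layer) is required here because a layer was removed.
    static MultiLayerNetwork retarget(MultiLayerNetwork pretrained) {
        FineTuneConfiguration ftc = new FineTuneConfiguration.Builder()
                .updater(new Adam(1e-4))
                .build();
        return new TransferLearning.Builder(pretrained)
                .fineTuneConfiguration(ftc)
                .removeOutputLayer()
                .addLayer(new OutputLayer.Builder().nIn(100).nOut(5).build())
                .build();
    }

    // ComputationGraph path: replace a named output vertex with a new layer.
    static ComputationGraph retarget(ComputationGraph pretrained) {
        FineTuneConfiguration ftc = new FineTuneConfiguration.Builder()
                .updater(new Adam(1e-4))
                .build();
        return new TransferLearning.GraphBuilder(pretrained)
                .fineTuneConfiguration(ftc)
                .removeVertexKeepConnections("oldOut")
                .addLayer("oldOut", new OutputLayer.Builder().nIn(100).nOut(5).build(), "features")
                .build();
    }
}
```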