Modifier and Type | Method and Description |
---|---|
List<String> | ParamInitializer.biasKeys(Layer layer) - Bias parameter keys given the layer configuration |
boolean | ParamInitializer.isBiasParam(Layer layer, String key) - Is the specified parameter a bias? |
boolean | ParamInitializer.isWeightParam(Layer layer, String key) - Is the specified parameter a weight? |
long | ParamInitializer.numParams(Layer layer) |
List<String> | ParamInitializer.paramKeys(Layer layer) - Get a list of all parameter keys given the layer configuration |
List<String> | ParamInitializer.weightKeys(Layer layer) - Weight parameter keys given the layer configuration |
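The key-listing methods above make it possible to inspect which parameters a layer configuration will create before any network is built. A minimal sketch, assuming the DefaultParamInitializer singleton accessor getInstance() and a plain DenseLayer configuration (sizes are illustrative only):

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.params.DefaultParamInitializer;

import java.util.List;

public class ParamKeyInspection {
    public static void main(String[] args) {
        // A fully connected layer configuration, not yet part of a network
        DenseLayer dense = new DenseLayer.Builder().nIn(100).nOut(50).build();

        DefaultParamInitializer init = DefaultParamInitializer.getInstance();

        List<String> all = init.paramKeys(dense);      // typically ["W", "b"]
        List<String> weights = init.weightKeys(dense); // typically ["W"]
        List<String> biases = init.biasKeys(dense);    // typically ["b"]
        long n = init.numParams(dense);                // 100*50 weights + 50 biases = 5050

        System.out.println(all + " " + weights + " " + biases + " " + n);
    }
}
```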
Modifier and Type | Field and Description |
---|---|
protected Layer | NeuralNetConfiguration.layer |
protected Layer | NeuralNetConfiguration.Builder.layer |
Modifier and Type | Method and Description |
---|---|
ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.addLayer(String layerName, Layer layer, InputPreProcessor preProcessor, String... layerInputs) - Add a layer and an InputPreProcessor, with the specified name and specified inputs. |
ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.addLayer(String layerName, Layer layer, String... layerInputs) - Add a layer, with no InputPreProcessor, with the specified name and specified inputs. |
NeuralNetConfiguration.ListBuilder | NeuralNetConfiguration.ListBuilder.layer(int ind, Layer layer) |
ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.layer(int layerName, Layer layer, String... layerInputs) - Add a layer, with no InputPreProcessor, with the specified name and specified inputs. |
NeuralNetConfiguration.ListBuilder | NeuralNetConfiguration.ListBuilder.layer(Layer layer) |
NeuralNetConfiguration.Builder | NeuralNetConfiguration.Builder.layer(Layer layer) - Layer class. |
ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.layer(String layerName, Layer layer, InputPreProcessor preProcessor, String... layerInputs) - Add a layer and an InputPreProcessor, with the specified name and specified inputs. |
ComputationGraphConfiguration.GraphBuilder | ComputationGraphConfiguration.GraphBuilder.layer(String layerName, Layer layer, String... layerInputs) - Add a layer, with no InputPreProcessor, with the specified name and specified inputs. |
NeuralNetConfiguration.ListBuilder | NeuralNetConfiguration.Builder.list(Layer... layers) - Create a ListBuilder (for creating a MultiLayerConfiguration) with the specified layers. See the usage sketch below. |
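A sketch of the two common build patterns referenced above: a MultiLayerConfiguration via the ListBuilder, and a ComputationGraphConfiguration via the GraphBuilder. The layer sizes and vertex names are illustrative only.

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class BuilderUsage {
    public static void main(String[] args) {
        // MultiLayerConfiguration: layers are added by index via the ListBuilder
        MultiLayerConfiguration mlc = new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new DenseLayer.Builder().nIn(784).nOut(100).activation(Activation.RELU).build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(100).nOut(10).activation(Activation.SOFTMAX).build())
                .build();

        // ComputationGraphConfiguration: layers are added by name with explicit inputs
        ComputationGraphConfiguration cgc = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("in")
                .addLayer("dense", new DenseLayer.Builder().nIn(784).nOut(100).build(), "in")
                .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(100).nOut(10).activation(Activation.SOFTMAX).build(), "dense")
                .setOutputs("out")
                .build();
    }
}
```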
Modifier and Type | Class and Description |
---|---|
class | AbstractLSTM - LSTM recurrent net, based on Graves: Supervised Sequence Labelling with Recurrent Neural Networks, http://www.cs.toronto.edu/~graves/phd.pdf |
class | ActivationLayer - Activation layer is a simple layer that applies the specified activation function to the input activations. |
class | AutoEncoder - Autoencoder layer. |
class | BaseLayer - A neural network layer. |
class | BaseOutputLayer |
class | BasePretrainNetwork |
class | BaseRecurrentLayer |
class | BaseUpsamplingLayer - Upsampling base layer. |
class | BatchNormalization - Batch normalization layer. See: Ioffe and Szegedy, 2015, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, https://arxiv.org/abs/1502.03167 |
class | CenterLossOutputLayer - Center loss is similar to triplet loss except that it enforces intraclass consistency and doesn't require feed forward of multiple examples. |
class | Cnn3DLossLayer - 3D Convolutional Neural Network Loss Layer. Handles calculation of gradients etc. for various loss (objective) functions. NOTE: Cnn3DLossLayer does not have any parameters. |
class | CnnLossLayer - Convolutional Neural Network Loss Layer. Handles calculation of gradients etc. for various loss (objective) functions. NOTE: CnnLossLayer does not have any parameters. |
class | Convolution1D - 1D convolution layer. |
class | Convolution1DLayer - 1D (temporal) convolutional layer. |
class | Convolution2D - 2D convolution layer. |
class | Convolution3D - 3D convolution layer configuration. |
class | ConvolutionLayer - 2D convolution layer (for example, spatial convolution over images). |
class | Deconvolution2D - 2D deconvolution layer configuration. Deconvolutions are also known as transpose convolutions or fractionally strided convolutions. |
class | DenseLayer - Dense layer: a standard fully connected feed-forward layer. |
class | DepthwiseConvolution2D - 2D depth-wise convolution layer configuration. |
class | DropoutLayer - Dropout layer. |
class | EmbeddingLayer - Embedding layer: a feed-forward layer that expects single integers per example (class numbers, in range 0 to numClass-1) as input. |
class | EmbeddingSequenceLayer - Embedding layer for sequences: a feed-forward layer that expects a fixed-length number (inputLength) of integers/indices per example as input, ranging from 0 to numClasses-1. |
class | FeedForwardLayer - Base class for feed-forward layer configurations. |
class | GlobalPoolingLayer - Global pooling layer - used to do pooling over time for RNNs, and 2D pooling for CNNs. Supports the following PoolingTypes: SUM, AVG, MAX, PNORM. Global pooling layer can also handle mask arrays when dealing with variable-length inputs. |
class | GravesBidirectionalLSTM - Deprecated. Use Bidirectional instead. With the Bidirectional layer wrapper you can make any recurrent layer bidirectional, in particular GravesLSTM. Note that this layer adds the output of both directions, which translates into "ADD" mode in Bidirectional. Usage: .layer(new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build())) |
class | GravesLSTM - Deprecated. Will eventually be removed. Use LSTM instead, which has similar prediction accuracy but supports CuDNN for faster network training on CUDA (Nvidia) GPUs. |
class | LocallyConnected1D - SameDiff version of a 1D locally connected layer. |
class | LocallyConnected2D - SameDiff version of a 2D locally connected layer. |
class | LocalResponseNormalization - Local response normalization layer. See section 3.3 of http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf |
class | LossLayer - LossLayer is a flexible output layer that performs a loss function on an input without MLP logic. LossLayer is similar to OutputLayer in that both perform loss calculations for network outputs vs. labels. |
class | LSTM - LSTM recurrent neural network layer without peephole connections. |
class | NoParamLayer |
class | OutputLayer - Output layer used for training via backpropagation based on labels and a specified loss function. |
class | Pooling1D - 1D Pooling (subsampling) layer. |
class | Pooling2D - 2D Pooling (subsampling) layer. |
class | PReLULayer - Parametrized Rectified Linear Unit (PReLU). |
class | RnnLossLayer - Recurrent Neural Network Loss Layer. Handles calculation of gradients etc. for various objective (loss) functions. Note: Unlike RnnOutputLayer, this RnnLossLayer does not have any parameters - i.e., there is no time-distributed dense component here. |
class | RnnOutputLayer - A version of OutputLayer for recurrent neural networks. |
class | SeparableConvolution2D - 2D separable convolution layer configuration. |
class | SpaceToBatchLayer - Space-to-batch utility layer configuration for convolutional input types. |
class | SpaceToDepthLayer - Space-to-channels utility layer configuration for convolutional input types. |
class | Subsampling1DLayer - 1D (temporal) subsampling layer, also known as a pooling layer. Expects input of shape [minibatch, nIn, sequenceLength]. |
class | Subsampling3DLayer - 3D subsampling / pooling layer for convolutional neural networks. |
class | SubsamplingLayer - Subsampling layer, also referred to as pooling in convolutional neural nets. Supports the following pooling types: MAX, AVG, SUM, PNORM. |
class | Upsampling1D - Upsampling 1D layer. Repeats each step size times along the temporal/sequence axis (dimension 2). For input shape [minibatch, channels, sequenceLength], output has shape [minibatch, channels, size * sequenceLength]. |
class | Upsampling2D - Upsampling 2D layer. Repeats each value (or rather, set of depth values) in the height and width dimensions by size[0] and size[1] times respectively. If input has shape [minibatch, channels, height, width] then output has shape [minibatch, channels, height*size[0], width*size[1]]. |
class | Upsampling3D - Upsampling 3D layer. Repeats each value (all channel values for each x/y/z location) by size[0], size[1] and size[2]. If input has shape [minibatch, channels, depth, height, width] then output has shape [minibatch, channels, size[0] * depth, size[1] * height, size[2] * width]. |
class | ZeroPadding1DLayer - Zero padding 1D layer for convolutional neural networks. |
class | ZeroPadding3DLayer - Zero padding 3D layer for convolutional neural networks. |
class | ZeroPaddingLayer - Zero padding layer for convolutional neural networks (2D CNNs). |
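Most of the classes above are used as arguments to the builder methods listed earlier. A minimal sketch of a small convolutional configuration combining several of them; the kernel shapes, layer sizes, and input dimensions are illustrative only:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class SmallCnnConfig {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // 2D convolution over 28x28 single-channel images
                .layer(0, new ConvolutionLayer.Builder(5, 5).nIn(1).nOut(16)
                        .activation(Activation.RELU).build())
                // Max pooling (subsampling)
                .layer(1, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                        .kernelSize(2, 2).stride(2, 2).build())
                // Fully connected layer
                .layer(2, new DenseLayer.Builder().nOut(64).activation(Activation.RELU).build())
                // Softmax output for 10 classes
                .layer(3, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nOut(10).activation(Activation.SOFTMAX).build())
                .setInputType(InputType.convolutionalFlat(28, 28, 1))
                .build();
    }
}
```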
Modifier and Type | Method and Description |
---|---|
abstract <E extends Layer> | Layer.Builder.build() |
Modifier and Type | Method and Description |
---|---|
Layer | Layer.clone() |
Modifier and Type | Method and Description |
---|---|
static void | LayerValidation.generalValidation(String layerName, Layer layer, IDropout iDropout, double l2, double l2Bias, double l1, double l1Bias, Distribution dist, List<LayerConstraint> allParamConstraints, List<LayerConstraint> weightConstraints, List<LayerConstraint> biasConstraints) |
static void | LayerValidation.generalValidation(String layerName, Layer layer, IDropout iDropOut, Double l2, Double l2Bias, Double l1, Double l1Bias, Distribution dist, List<LayerConstraint> allParamConstraints, List<LayerConstraint> weightConstraints, List<LayerConstraint> biasConstraints) |
Modifier and Type | Class and Description |
---|---|
class | Cropping1D - Cropping layer for convolutional (1D) neural networks. |
class | Cropping2D - Cropping layer for convolutional (2D) neural networks. |
class | Cropping3D - Cropping layer for convolutional (3D) neural networks. |
Modifier and Type | Class and Description |
---|---|
class | ElementWiseMultiplicationLayer - Elementwise multiplication layer with weights: implements out = activationFn(input .* w + b), where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. Note that the input and output sizes of the element-wise layer are the same for this layer. |
class | FrozenLayer - FrozenLayer is used for the purposes of transfer learning. A frozen layer wraps another DL4J Layer within it. |
class | FrozenLayerWithBackprop - Frozen layer freezes parameters of the layer it wraps, but allows the backpropagation to continue. |
class | RepeatVector - RepeatVector layer configuration. |
Modifier and Type | Field and Description |
---|---|
protected Layer | FrozenLayer.layer |
Modifier and Type | Method and Description |
---|---|
Layer | FrozenLayer.clone() |
Layer | FrozenLayerWithBackprop.clone() |
Modifier and Type | Method and Description |
---|---|
FrozenLayer.Builder | FrozenLayer.Builder.layer(Layer layer) |
Constructor and Description |
---|
FrozenLayer(Layer layer) |
FrozenLayerWithBackprop(Layer layer) |
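Both constructors above wrap an existing layer configuration so that its parameters are not updated during training. A minimal sketch; the wrapped DenseLayer, its sizes, and the org.deeplearning4j.nn.conf.layers.misc import paths are assumptions:

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.misc.FrozenLayer;
import org.deeplearning4j.nn.conf.layers.misc.FrozenLayerWithBackprop;

public class FrozenLayerUsage {
    public static void main(String[] args) {
        DenseLayer dense = new DenseLayer.Builder().nIn(100).nOut(100).build();

        // Parameters of the wrapped layer are never updated
        FrozenLayer frozen = new FrozenLayer(dense);

        // Parameters are frozen, but gradients still flow back to earlier layers
        FrozenLayerWithBackprop frozenWithBackprop = new FrozenLayerWithBackprop(dense);
    }
}
```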
Modifier and Type | Class and Description |
---|---|
class | Yolo2OutputLayer - Output (loss) layer for the YOLOv2 object detection model, based on the papers: YOLO9000: Better, Faster, Stronger - Redmon & Farhadi (2016), https://arxiv.org/abs/1612.08242, and You Only Look Once: Unified, Real-Time Object Detection - Redmon et al. |
Modifier and Type | Class and Description |
---|---|
class | Bidirectional - Bidirectional is a "wrapper" layer: it wraps any uni-directional RNN layer to make it bidirectional. Note that multiple different modes are supported - these specify how the activations should be combined from the forward and backward RNN networks. |
class | LastTimeStep - LastTimeStep is a "wrapper" layer: it wraps any RNN (or CNN1D) layer, extracts the last time step during the forward pass, and returns it as a row vector (per example). |
class | SimpleRnn - Simple RNN, aka "vanilla" RNN: the simplest type of recurrent neural network layer. |
Modifier and Type | Method and Description |
---|---|
Layer | LastTimeStep.getUnderlying() |
Modifier and Type | Method and Description |
---|---|
Bidirectional.Builder | Bidirectional.Builder.rnnLayer(Layer layer) |
Constructor and Description |
---|
Bidirectional(Bidirectional.Mode mode, Layer layer) - Create a Bidirectional wrapper for the specified layer |
Bidirectional(Layer layer) - Create a Bidirectional wrapper, with the default Mode (CONCAT), for the specified layer |
LastTimeStep(Layer underlying) |
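A minimal sketch of how these wrappers are typically used inside a layer list; the LSTM sizes and the org.deeplearning4j.nn.conf.layers.recurrent import paths are illustrative assumptions:

```java
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;
import org.deeplearning4j.nn.conf.layers.recurrent.LastTimeStep;

public class RecurrentWrapperUsage {
    public static void main(String[] args) {
        // Wrap an LSTM so it runs in both directions; outputs are combined with the chosen mode
        Bidirectional biLstmAdd = new Bidirectional(Bidirectional.Mode.ADD,
                new LSTM.Builder().nIn(32).nOut(64).build());

        // Default mode (CONCAT)
        Bidirectional biLstmConcat = new Bidirectional(
                new LSTM.Builder().nIn(32).nOut(64).build());

        // Keep only the final time step of the wrapped RNN's output
        LastTimeStep lastStep = new LastTimeStep(new LSTM.Builder().nIn(32).nOut(64).build());
    }
}
```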
Modifier and Type | Class and Description |
---|---|
class | AbstractSameDiffLayer |
class | SameDiffLambdaLayer - SameDiffLambdaLayer is defined to be used as the base class for implementing lambda layers using SameDiff. Lambda layers are layers without parameters - and as a result, have a much simpler API - users need only extend SameDiffLambdaLayer and implement a single method. |
class | SameDiffLayer - A base layer used for implementing Deeplearning4j layers using SameDiff. |
class | SameDiffOutputLayer - A base layer used for implementing Deeplearning4j output layers using SameDiff. |
Modifier and Type | Class and Description |
---|---|
class | MaskLayer - MaskLayer applies the mask array to the forward pass activations, and backward pass gradients, passing through this layer. |
class | MaskZeroLayer - Wrapper which masks timesteps with activation equal to the specified masking value (0.0 default). |
Modifier and Type | Method and Description |
---|---|
MaskZeroLayer.Builder | MaskZeroLayer.Builder.setUnderlying(Layer underlying) |
Constructor and Description |
---|
MaskZeroLayer(Layer underlying, double maskingValue) |
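A minimal sketch of wrapping a recurrent layer so that time steps whose input equals the masking value are treated as padding; the sizes and the org.deeplearning4j.nn.conf.layers.util import path are illustrative assumptions:

```java
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.util.MaskZeroLayer;

public class MaskZeroUsage {
    public static void main(String[] args) {
        // Time steps whose input activations equal 0.0 are masked out for the wrapped LSTM
        MaskZeroLayer masked = new MaskZeroLayer(
                new LSTM.Builder().nIn(16).nOut(32).build(), 0.0);
    }
}
```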
Modifier and Type | Class and Description |
---|---|
class | VariationalAutoencoder - Variational Autoencoder layer. |
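A minimal sketch of a VariationalAutoencoder layer configuration; the encoder/decoder sizes, latent dimension, and the builder method names encoderLayerSizes/decoderLayerSizes are assumptions based on the standard DL4J builder:

```java
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.nd4j.linalg.activations.Activation;

public class VaeConfig {
    public static void main(String[] args) {
        VariationalAutoencoder vae = new VariationalAutoencoder.Builder()
                .nIn(784)                      // input size
                .nOut(32)                      // size of the latent space
                .encoderLayerSizes(256, 128)   // hidden layers of the encoder
                .decoderLayerSizes(128, 256)   // hidden layers of the decoder
                .activation(Activation.LEAKYRELU)
                .build();
    }
}
```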
Modifier and Type | Class and Description |
---|---|
class | BaseWrapperLayer - Base wrapper layer: the idea is to pass through all methods to the underlying layer, and selectively override them as required. |
Modifier and Type | Field and Description |
---|---|
protected Layer | BaseWrapperLayer.underlying |
Constructor and Description |
---|
BaseWrapperLayer(Layer underlying) |
Modifier and Type | Class and Description |
---|---|
class | OCNNOutputLayer - An implementation of one-class neural networks from https://arxiv.org/pdf/1802.06360.pdf. The one-class neural network approach extends the standard output layer (a single set of weights, an activation function, and a bias) to: two sets of weights, a learnable "r" parameter that is held static, and one traditional set of weights. |
Modifier and Type | Method and Description |
---|---|
Layer | FrozenLayerDeserializer.deserialize(org.nd4j.shade.jackson.core.JsonParser jp, org.nd4j.shade.jackson.databind.DeserializationContext deserializationContext) |
Modifier and Type | Method and Description |
---|---|
protected boolean | BaseNetConfigDeserializer.requiresDropoutFromLegacy(Layer[] layers) |
protected boolean | BaseNetConfigDeserializer.requiresIUpdaterFromLegacy(Layer[] layers) |
Modifier and Type | Method and Description |
---|---|
static void | LegacyLayerDeserializer.registerLegacyClassDefaultName(Class<? extends Layer> clazz) |
static void | LegacyLayerDeserializer.registerLegacyClassSpecifiedName(String name, Class<? extends Layer> clazz) |
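These registration hooks let custom Layer subclasses be recognised when deserializing configurations saved in the legacy JSON format. A minimal sketch; MyCustomLayer is a hypothetical user-defined Layer subclass:

```java
// MyCustomLayer is a hypothetical class assumed to extend Layer.
// Register it so legacy-format JSON configurations containing it can be deserialized:
LegacyLayerDeserializer.registerLegacyClassDefaultName(MyCustomLayer.class);

// Or register it under an explicit legacy name:
LegacyLayerDeserializer.registerLegacyClassSpecifiedName("MyCustomLayer", MyCustomLayer.class);
```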
Modifier and Type | Class and Description |
---|---|
class | AbstractLayer<LayerConfT extends Layer> - A layer with input and output, no parameters or gradients. |
Modifier and Type | Method and Description |
---|---|
List<String> | OCNNParamInitializer.biasKeys(Layer layer) |
boolean | OCNNParamInitializer.isBiasParam(Layer layer, String key) |
boolean | OCNNParamInitializer.isWeightParam(Layer layer, String key) |
long | OCNNParamInitializer.numParams(Layer layer) |
List<String> | OCNNParamInitializer.paramKeys(Layer layer) |
List<String> | OCNNParamInitializer.weightKeys(Layer layer) |
Modifier and Type | Class and Description |
---|---|
class | IdentityLayer - Identity layer: passes data through unaltered. |
Modifier and Type | Method and Description |
---|---|
List<String> | WrapperLayerParamInitializer.biasKeys(Layer layer) |
List<String> | GravesLSTMParamInitializer.biasKeys(Layer layer) |
List<String> | SimpleRnnParamInitializer.biasKeys(Layer layer) |
List<String> | SeparableConvolutionParamInitializer.biasKeys(Layer layer) |
List<String> | DepthwiseConvolutionParamInitializer.biasKeys(Layer layer) |
List<String> | PReLUParamInitializer.biasKeys(Layer layer) |
List<String> | EmptyParamInitializer.biasKeys(Layer layer) |
List<String> | FrozenLayerWithBackpropParamInitializer.biasKeys(Layer layer) |
List<String> | DefaultParamInitializer.biasKeys(Layer layer) |
List<String> | SameDiffParamInitializer.biasKeys(Layer layer) |
List<String> | VariationalAutoencoderParamInitializer.biasKeys(Layer layer) |
List<String> | BidirectionalParamInitializer.biasKeys(Layer layer) |
List<String> | GravesBidirectionalLSTMParamInitializer.biasKeys(Layer layer) |
List<String> | LSTMParamInitializer.biasKeys(Layer layer) |
List<String> | ConvolutionParamInitializer.biasKeys(Layer layer) |
List<String> | FrozenLayerParamInitializer.biasKeys(Layer layer) |
List<String> | BatchNormalizationParamInitializer.biasKeys(Layer layer) |
protected boolean | DefaultParamInitializer.hasBias(Layer layer) |
boolean | WrapperLayerParamInitializer.isBiasParam(Layer layer, String key) |
boolean | GravesLSTMParamInitializer.isBiasParam(Layer layer, String key) |
boolean | SimpleRnnParamInitializer.isBiasParam(Layer layer, String key) |
boolean | SeparableConvolutionParamInitializer.isBiasParam(Layer layer, String key) |
boolean | DepthwiseConvolutionParamInitializer.isBiasParam(Layer layer, String key) |
boolean | PReLUParamInitializer.isBiasParam(Layer layer, String key) |
boolean | EmptyParamInitializer.isBiasParam(Layer layer, String key) |
boolean | FrozenLayerWithBackpropParamInitializer.isBiasParam(Layer layer, String key) |
boolean | DefaultParamInitializer.isBiasParam(Layer layer, String key) |
boolean | SameDiffParamInitializer.isBiasParam(Layer layer, String key) |
boolean | VariationalAutoencoderParamInitializer.isBiasParam(Layer layer, String key) |
boolean | BidirectionalParamInitializer.isBiasParam(Layer layer, String key) |
boolean | GravesBidirectionalLSTMParamInitializer.isBiasParam(Layer layer, String key) |
boolean | LSTMParamInitializer.isBiasParam(Layer layer, String key) |
boolean | ConvolutionParamInitializer.isBiasParam(Layer layer, String key) |
boolean | FrozenLayerParamInitializer.isBiasParam(Layer layer, String key) |
boolean | BatchNormalizationParamInitializer.isBiasParam(Layer layer, String key) |
boolean | WrapperLayerParamInitializer.isWeightParam(Layer layer, String key) |
boolean | GravesLSTMParamInitializer.isWeightParam(Layer layer, String key) |
boolean | SimpleRnnParamInitializer.isWeightParam(Layer layer, String key) |
boolean | SeparableConvolutionParamInitializer.isWeightParam(Layer layer, String key) |
boolean | DepthwiseConvolutionParamInitializer.isWeightParam(Layer layer, String key) |
boolean | PReLUParamInitializer.isWeightParam(Layer layer, String key) |
boolean | EmptyParamInitializer.isWeightParam(Layer layer, String key) |
boolean | FrozenLayerWithBackpropParamInitializer.isWeightParam(Layer layer, String key) |
boolean | DefaultParamInitializer.isWeightParam(Layer layer, String key) |
boolean | SameDiffParamInitializer.isWeightParam(Layer layer, String key) |
boolean | VariationalAutoencoderParamInitializer.isWeightParam(Layer layer, String key) |
boolean | BidirectionalParamInitializer.isWeightParam(Layer layer, String key) |
boolean | GravesBidirectionalLSTMParamInitializer.isWeightParam(Layer layer, String key) |
boolean | LSTMParamInitializer.isWeightParam(Layer layer, String key) |
boolean | ConvolutionParamInitializer.isWeightParam(Layer layer, String key) |
boolean | FrozenLayerParamInitializer.isWeightParam(Layer layer, String key) |
boolean | BatchNormalizationParamInitializer.isWeightParam(Layer layer, String key) |
long | WrapperLayerParamInitializer.numParams(Layer layer) |
long | GravesLSTMParamInitializer.numParams(Layer l) |
long | SimpleRnnParamInitializer.numParams(Layer layer) |
long | ElementWiseParamInitializer.numParams(Layer layer) |
long | SeparableConvolutionParamInitializer.numParams(Layer l) |
long | DepthwiseConvolutionParamInitializer.numParams(Layer l) |
long | PReLUParamInitializer.numParams(Layer l) |
long | EmptyParamInitializer.numParams(Layer layer) |
long | FrozenLayerWithBackpropParamInitializer.numParams(Layer layer) |
long | DefaultParamInitializer.numParams(Layer l) |
long | Convolution3DParamInitializer.numParams(Layer l) |
long | SameDiffParamInitializer.numParams(Layer layer) |
long | BidirectionalParamInitializer.numParams(Layer layer) |
long | GravesBidirectionalLSTMParamInitializer.numParams(Layer l) |
long | LSTMParamInitializer.numParams(Layer l) |
long | ConvolutionParamInitializer.numParams(Layer l) |
long | FrozenLayerParamInitializer.numParams(Layer layer) |
long | BatchNormalizationParamInitializer.numParams(Layer l) |
List<String> | WrapperLayerParamInitializer.paramKeys(Layer layer) |
List<String> | GravesLSTMParamInitializer.paramKeys(Layer layer) |
List<String> | SimpleRnnParamInitializer.paramKeys(Layer layer) |
List<String> | SeparableConvolutionParamInitializer.paramKeys(Layer layer) |
List<String> | DepthwiseConvolutionParamInitializer.paramKeys(Layer layer) |
List<String> | PReLUParamInitializer.paramKeys(Layer layer) |
List<String> | EmptyParamInitializer.paramKeys(Layer layer) |
List<String> | FrozenLayerWithBackpropParamInitializer.paramKeys(Layer layer) |
List<String> | DefaultParamInitializer.paramKeys(Layer layer) |
List<String> | SameDiffParamInitializer.paramKeys(Layer layer) |
List<String> | VariationalAutoencoderParamInitializer.paramKeys(Layer l) |
List<String> | BidirectionalParamInitializer.paramKeys(Layer layer) |
List<String> | GravesBidirectionalLSTMParamInitializer.paramKeys(Layer layer) |
List<String> | LSTMParamInitializer.paramKeys(Layer layer) |
List<String> | ConvolutionParamInitializer.paramKeys(Layer layer) |
List<String> | FrozenLayerParamInitializer.paramKeys(Layer layer) |
List<String> | BatchNormalizationParamInitializer.paramKeys(Layer layer) |
List<String> | WrapperLayerParamInitializer.weightKeys(Layer layer) |
List<String> | GravesLSTMParamInitializer.weightKeys(Layer layer) |
List<String> | SimpleRnnParamInitializer.weightKeys(Layer layer) |
List<String> | SeparableConvolutionParamInitializer.weightKeys(Layer layer) |
List<String> | DepthwiseConvolutionParamInitializer.weightKeys(Layer layer) |
List<String> | PReLUParamInitializer.weightKeys(Layer layer) |
List<String> | EmptyParamInitializer.weightKeys(Layer layer) |
List<String> | FrozenLayerWithBackpropParamInitializer.weightKeys(Layer layer) |
List<String> | DefaultParamInitializer.weightKeys(Layer layer) |
List<String> | SameDiffParamInitializer.weightKeys(Layer layer) |
List<String> | VariationalAutoencoderParamInitializer.weightKeys(Layer layer) |
List<String> | BidirectionalParamInitializer.weightKeys(Layer layer) |
List<String> | GravesBidirectionalLSTMParamInitializer.weightKeys(Layer layer) |
List<String> | LSTMParamInitializer.weightKeys(Layer layer) |
List<String> | ConvolutionParamInitializer.weightKeys(Layer layer) |
List<String> | FrozenLayerParamInitializer.weightKeys(Layer layer) |
List<String> | BatchNormalizationParamInitializer.weightKeys(Layer layer) |
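Each concrete layer configuration resolves to one of the initializers above, so these per-implementation methods are usually reached indirectly. A minimal sketch, assuming the initializer() accessor on layer configurations; the LSTM sizes are illustrative:

```java
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.Layer;

import java.util.List;

public class LayerInitializerLookup {
    public static void main(String[] args) {
        Layer lstm = new LSTM.Builder().nIn(32).nOut(64).build();

        // The LSTM configuration resolves to LSTMParamInitializer internally
        List<String> paramKeys = lstm.initializer().paramKeys(lstm);
        long numParams = lstm.initializer().numParams(lstm);

        System.out.println(paramKeys + " -> " + numParams + " parameters");
    }
}
```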
Modifier and Type | Method and Description |
---|---|
TransferLearning.Builder | TransferLearning.Builder.addLayer(Layer layer) - Add layers to the net. Required if layers are removed. |
TransferLearning.GraphBuilder | TransferLearning.GraphBuilder.addLayer(String layerName, Layer layer, InputPreProcessor preProcessor, String... layerInputs) - Add a layer with a specified preprocessor. |
TransferLearning.GraphBuilder | TransferLearning.GraphBuilder.addLayer(String layerName, Layer layer, String... layerInputs) - Add a layer of the specified configuration to the computation graph. |
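A minimal transfer learning sketch: remove the output vertex of an existing ComputationGraph and append a new output layer. The vertex names, sizes, and the pretrained graph are illustrative only:

```java
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.transferlearning.FineTuneConfiguration;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class TransferLearningUsage {
    public static ComputationGraph replaceOutput(ComputationGraph pretrained) {
        FineTuneConfiguration ftc = new FineTuneConfiguration.Builder().build();

        return new TransferLearning.GraphBuilder(pretrained)
                .fineTuneConfiguration(ftc)
                // Remove the old output vertex, then add a new one with a different class count
                .removeVertexAndConnections("oldOutput")
                .addLayer("newOutput", new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(256).nOut(5).activation(Activation.SOFTMAX).build(), "dense")
                .setOutputs("newOutput")
                .build();
    }
}
```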
Modifier and Type | Method and Description |
---|---|
static void | OutputLayerUtil.validateOutputLayer(String layerName, Layer layer) - Validate the output layer (or loss layer) configuration, to detect invalid configurations. |