public class ConvolutionLayer extends FeedForwardLayer
Modifier and Type | Class and Description |
---|---|
static class | ConvolutionLayer.AlgoMode: The "PREFER_FASTEST" mode picks the fastest algorithm for the specified parameters from the ConvolutionLayer.FwdAlgo, ConvolutionLayer.BwdFilterAlgo, and ConvolutionLayer.BwdDataAlgo lists, but these algorithms may be very memory intensive; if unexpected errors occur when using cuDNN, try the "NO_WORKSPACE" mode (see the example following this table). |
static class | ConvolutionLayer.BaseConvBuilder<T extends ConvolutionLayer.BaseConvBuilder<T>> |
static class | ConvolutionLayer.Builder |
static class | ConvolutionLayer.BwdDataAlgo: The backward data algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
static class | ConvolutionLayer.BwdFilterAlgo: The backward filter algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
static class | ConvolutionLayer.FwdAlgo: The forward algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
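The "NO_WORKSPACE" suggestion above is usually applied through the layer builder. A minimal sketch, assuming the standard ConvolutionLayer.Builder from org.deeplearning4j.nn.conf.layers and its cudnnAlgoMode(...) setter (neither is documented on this page), not a definitive recipe:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class AlgoModeSketch {
    public static void main(String[] args) {
        // PREFER_FASTEST is the default; NO_WORKSPACE trades speed for a smaller
        // cuDNN memory footprint, which is the suggested remedy for workspace errors.
        ConvolutionLayer conv = new ConvolutionLayer.Builder(3, 3)     // 3x3 kernel
                .nIn(64)                                               // input channels
                .nOut(128)                                             // filters (output channels)
                .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE) // avoid workspace-heavy algorithms
                .build();
        System.out.println(conv);
    }
}
```

"USER_SPECIFIED" would instead be combined with explicit FwdAlgo, BwdFilterAlgo and BwdDataAlgo choices; the other two modes select the algorithms automatically.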
Modifier and Type | Field and Description |
---|---|
protected CNN2DFormat | cnn2dDataFormat |
protected ConvolutionMode | convolutionMode |
protected ConvolutionLayer.AlgoMode | cudnnAlgoMode: Defaults to "PREFER_FASTEST", but "NO_WORKSPACE" uses less memory. |
protected boolean | cudnnAllowFallback |
protected ConvolutionLayer.BwdDataAlgo | cudnnBwdDataAlgo |
protected ConvolutionLayer.BwdFilterAlgo | cudnnBwdFilterAlgo |
protected ConvolutionLayer.FwdAlgo | cudnnFwdAlgo |
protected int[] | dilation |
protected boolean | hasBias |
protected int[] | kernelSize |
protected int[] | padding |
protected int[] | stride |
Fields inherited from class FeedForwardLayer: nIn, nOut, timeDistributedFormat
Fields inherited from class BaseLayer: activationFn, biasInit, biasUpdater, gainInit, gradientNormalization, gradientNormalizationThreshold, iUpdater, regularization, regularizationBias, weightInitFn, weightNoise
Fields inherited from class Layer: constraints, iDropout, layerName
Modifier | Constructor and Description |
---|---|
protected | ConvolutionLayer(ConvolutionLayer.BaseConvBuilder<?> builder): nIn in the input layer is the number of channels; nOut is the number of filters to be used in the net, in other words the number of output channels. The builder specifies the filter/kernel size, the stride and the padding (see the usage sketch following this table). The pooling layer takes the kernel size. |
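Because the constructor is protected, layers are normally obtained through ConvolutionLayer.Builder rather than instantiated directly. The following is a usage sketch under the assumption that the builder exposes a kernel-size constructor plus nIn/nOut/stride/padding/activation setters and that Activation comes from ND4J; treat the exact setters as assumptions rather than guarantees of this page:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.nd4j.linalg.activations.Activation;

public class ConvBuilderSketch {
    public static void main(String[] args) {
        // 5x5 kernel over 1 input channel (nIn), producing 20 filters (nOut),
        // with stride 1 in both dimensions and no padding.
        ConvolutionLayer conv = new ConvolutionLayer.Builder(5, 5)
                .nIn(1)
                .nOut(20)
                .stride(1, 1)
                .padding(0, 0)
                .activation(Activation.RELU)
                .build();
        System.out.println(conv);
    }
}
```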
Modifier and Type | Method and Description |
---|---|
ConvolutionLayer | clone() |
LayerMemoryReport | getMemoryReport(InputType inputType): This is a report of the estimated memory consumption for the given layer. |
InputType | getOutputType(int layerIndex, InputType inputType): For a given type of input to this layer, what is the type of the output? |
InputPreProcessor | getPreProcessorForInputType(InputType inputType): For the given type of input to this layer, what preprocessor (if any) is required? Returns null if no preprocessor is required, otherwise returns an appropriate InputPreProcessor for this layer, such as a CnnToFeedForwardPreProcessor. |
boolean | hasBias() |
ParamInitializer | initializer() |
Layer | instantiate(NeuralNetConfiguration conf, Collection<TrainingListener> trainingListeners, int layerIndex, INDArray layerParamsView, boolean initializeParams, DataType networkDataType) |
void | setNIn(InputType inputType, boolean override): Set the nIn value (number of inputs, or input channels for CNNs) based on the given input type (see the sketch following this table). |
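setNIn, getOutputType and getPreProcessorForInputType are all driven by an InputType, so they are rarely called by hand; a configuration-level setInputType(...) call lets the framework invoke them for every layer, filling in nIn and inserting preprocessors such as CnnToFeedForwardPreProcessor automatically. A sketch of that pattern, assuming the NeuralNetConfiguration, InputType, OutputLayer and LossFunctions classes from the same library (they are not part of this page):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class InputTypeSketch {
    public static void main(String[] args) {
        // No nIn on the convolution layer: setInputType(...) triggers setNIn(...) and
        // getOutputType(...) for each layer and adds any required InputPreProcessor.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new ConvolutionLayer.Builder(3, 3)
                        .nOut(16)
                        .activation(Activation.RELU)
                        .build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .setInputType(InputType.convolutional(28, 28, 1)) // height, width, channels
                .build();
        System.out.println(conf.toJson());
    }
}
```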
Methods inherited from class FeedForwardLayer: isPretrainParam
Methods inherited from class BaseLayer: getGradientNormalization, getRegularizationByParam, getUpdaterByParam, resetLayerDefaultConfig
Methods inherited from class Layer: initializeConstraints, setDataType
Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface TrainingConfig: getGradientNormalizationThreshold, getLayerName
protected boolean hasBias
protected ConvolutionMode convolutionMode
protected int[] dilation
protected int[] kernelSize
protected int[] stride
protected int[] padding
protected boolean cudnnAllowFallback
protected CNN2DFormat cnn2dDataFormat
protected ConvolutionLayer.AlgoMode cudnnAlgoMode
protected ConvolutionLayer.FwdAlgo cudnnFwdAlgo
protected ConvolutionLayer.BwdFilterAlgo cudnnBwdFilterAlgo
protected ConvolutionLayer.BwdDataAlgo cudnnBwdDataAlgo
protected ConvolutionLayer(ConvolutionLayer.BaseConvBuilder<?> builder)
public boolean hasBias()
public ConvolutionLayer clone()
public Layer instantiate(NeuralNetConfiguration conf, Collection<TrainingListener> trainingListeners, int layerIndex, INDArray layerParamsView, boolean initializeParams, DataType networkDataType)
Specified by: instantiate in class Layer
public ParamInitializer initializer()
Specified by: initializer in class Layer
public InputType getOutputType(int layerIndex, InputType inputType)
Description copied from class: Layer
For a given type of input to this layer, what is the type of the output?
Overrides: getOutputType in class FeedForwardLayer
Parameters:
layerIndex - Index of the layer
inputType - Type of input for the layer

public void setNIn(InputType inputType, boolean override)
Description copied from class: Layer
Set the nIn value (number of inputs, or input channels for CNNs) based on the given input type.
Overrides: setNIn in class FeedForwardLayer
Parameters:
inputType - Input type for this layer
override - If false: only set the nIn value if it's not already set. If true: set it regardless of whether it's already set or not.

public InputPreProcessor getPreProcessorForInputType(InputType inputType)
Description copied from class: Layer
For the given type of input to this layer, what preprocessor (if any) is required? Returns null if no preprocessor is required, otherwise returns an appropriate InputPreProcessor for this layer, such as a CnnToFeedForwardPreProcessor.
Overrides: getPreProcessorForInputType in class FeedForwardLayer
Parameters:
inputType - InputType to this layer

public LayerMemoryReport getMemoryReport(InputType inputType)
Description copied from class: Layer
This is a report of the estimated memory consumption for the given layer.
Specified by: getMemoryReport in class Layer
Parameters:
inputType - Input type to the layer. Memory consumption is often a function of the input type.
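getMemoryReport can also be called directly on a configured layer to estimate memory for a particular input shape. A small sketch, assuming LayerMemoryReport lives in org.deeplearning4j.nn.conf.memory and InputType in org.deeplearning4j.nn.conf.inputs (package locations are assumptions, not taken from this page):

```java
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.memory.LayerMemoryReport;

public class MemoryReportSketch {
    public static void main(String[] args) {
        ConvolutionLayer conv = new ConvolutionLayer.Builder(3, 3)
                .nIn(1)
                .nOut(16)
                .build();
        // Memory consumption is a function of the input type: here 28x28 single-channel images.
        LayerMemoryReport report = conv.getMemoryReport(InputType.convolutional(28, 28, 1));
        System.out.println(report);
    }
}
```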