Class BatchNormalization
- java.lang.Object
  - org.deeplearning4j.nn.conf.layers.Layer
    - org.deeplearning4j.nn.conf.layers.BaseLayer
      - org.deeplearning4j.nn.conf.layers.FeedForwardLayer
        - org.deeplearning4j.nn.conf.layers.BatchNormalization
- All Implemented Interfaces:
  Serializable, Cloneable, TrainingConfig
public class BatchNormalization extends FeedForwardLayer
- See Also: Serialized Form
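In practice this layer is constructed through the nested BatchNormalization.Builder and added to a network configuration. A minimal sketch, assuming illustrative layer sizes (a 784-in/10-out feed-forward network, not defaults of the class):

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.inputs.InputType;
    import org.deeplearning4j.nn.conf.layers.BatchNormalization;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class BatchNormUsageSketch {
        public static void main(String[] args) {
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .list()
                    // Dense layer whose activations are normalized (sizes are illustrative)
                    .layer(new DenseLayer.Builder().nOut(256).activation(Activation.RELU).build())
                    // Batch normalization layer; nIn/nOut are inferred from the input type below
                    .layer(new BatchNormalization.Builder().build())
                    .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                            .activation(Activation.SOFTMAX).nOut(10).build())
                    // Lets DL4J call setNIn(...) on each layer, including the batch norm layer
                    .setInputType(InputType.feedForward(784))
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
        }
    }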
-
Nested Class Summary
- static class BatchNormalization.Builder
-
Field Summary
- protected double beta
- protected CNN2DFormat cnn2DFormat
- protected boolean cudnnAllowFallback
- protected double decay
- protected double eps
- protected double gamma
- protected boolean isMinibatch
- protected boolean lockGammaBeta
- protected boolean useLogStd
-
Fields inherited from class org.deeplearning4j.nn.conf.layers.FeedForwardLayer
nIn, nOut, timeDistributedFormat
-
Fields inherited from class org.deeplearning4j.nn.conf.layers.BaseLayer
activationFn, biasInit, biasUpdater, gainInit, gradientNormalization, gradientNormalizationThreshold, iUpdater, regularization, regularizationBias, weightInitFn, weightNoise
-
Fields inherited from class org.deeplearning4j.nn.conf.layers.Layer
constraints, iDropout, layerName
-
Constructor Summary
- BatchNormalization()
-
Method Summary
- BatchNormalization clone()
- LayerMemoryReport getMemoryReport(InputType inputType)
  This is a report of the estimated memory consumption for the given layer.
- InputType getOutputType(int layerIndex, InputType inputType)
  For a given type of input to this layer, what is the type of the output?
- InputPreProcessor getPreProcessorForInputType(InputType inputType)
  For the given type of input to this layer, what preprocessor (if any) is required? Returns null if no preprocessor is required; otherwise returns an appropriate InputPreProcessor for this layer, such as a CnnToFeedForwardPreProcessor.
- List<Regularization> getRegularizationByParam(String paramName)
  Get the regularization types (l1/l2/weight decay) for the given parameter.
- IUpdater getUpdaterByParam(String paramName)
  Get the updater for the given parameter.
- ParamInitializer initializer()
- Layer instantiate(NeuralNetConfiguration conf, Collection<TrainingListener> trainingListeners, int layerIndex, INDArray layerParamsView, boolean initializeParams, DataType networkDataType)
- boolean isPretrainParam(String paramName)
  Is the specified parameter a layerwise pretraining only parameter? For example, visible bias params in an autoencoder (or decoder params in a variational autoencoder) aren't used during supervised backprop. Layers (like DenseLayer, etc.) with no pretrainable parameters will return false for all (valid) inputs.
- void setNIn(InputType inputType, boolean override)
  Set the nIn value (number of inputs, or input channels for CNNs) based on the given input type.
-
Methods inherited from class org.deeplearning4j.nn.conf.layers.BaseLayer
getGradientNormalization, resetLayerDefaultConfig
-
Methods inherited from class org.deeplearning4j.nn.conf.layers.Layer
initializeConstraints, setDataType
-
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface org.deeplearning4j.nn.api.TrainingConfig
getGradientNormalizationThreshold, getLayerName
-
-
Field Detail
-
decay
protected double decay
-
eps
protected double eps
-
isMinibatch
protected boolean isMinibatch
-
gamma
protected double gamma
-
beta
protected double beta
-
lockGammaBeta
protected boolean lockGammaBeta
-
cudnnAllowFallback
protected boolean cudnnAllowFallback
-
useLogStd
protected boolean useLogStd
-
cnn2DFormat
protected CNN2DFormat cnn2DFormat
-
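These fields are normally populated through the corresponding BatchNormalization.Builder setters rather than assigned directly. A hedged sketch (the builder method names below are assumed to mirror the field names; verify against the Builder documentation for your version):

    import org.deeplearning4j.nn.conf.layers.BatchNormalization;

    public class BatchNormFieldSketch {
        public static void main(String[] args) {
            // Each builder call is assumed to back the like-named protected field
            BatchNormalization bn = new BatchNormalization.Builder()
                    .decay(0.99)          // decay for the running mean/variance estimates
                    .eps(1e-5)            // epsilon added to the variance for numerical stability
                    .lockGammaBeta(false) // if true, gamma/beta stay fixed instead of being learned
                    .gamma(1.0)           // fixed gamma, relevant when gamma/beta are locked
                    .beta(0.0)            // fixed beta, relevant when gamma/beta are locked
                    .minibatch(true)      // backs the isMinibatch field
                    .build();
            System.out.println(bn);
        }
    }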
Method Detail
-
clone
public BatchNormalization clone()
-
instantiate
public Layer instantiate(NeuralNetConfiguration conf, Collection<TrainingListener> trainingListeners, int layerIndex, INDArray layerParamsView, boolean initializeParams, DataType networkDataType)
- Specified by: instantiate in class Layer
-
initializer
public ParamInitializer initializer()
- Specified by: initializer in class Layer
- Returns: The parameter initializer for this model
-
getOutputType
public InputType getOutputType(int layerIndex, InputType inputType)
Description copied from class: Layer
For a given type of input to this layer, what is the type of the output?
- Overrides: getOutputType in class FeedForwardLayer
- Parameters:
  layerIndex - Index of the layer
  inputType - Type of input for the layer
- Returns: Type of output from the layer
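Batch normalization does not change the shape of its input, so the returned type mirrors the input type. A quick sketch (the 28x28 single-channel convolutional input is an illustrative assumption):

    import org.deeplearning4j.nn.conf.inputs.InputType;
    import org.deeplearning4j.nn.conf.layers.BatchNormalization;

    public class OutputTypeSketch {
        public static void main(String[] args) {
            BatchNormalization bn = new BatchNormalization.Builder().build();
            InputType in = InputType.convolutional(28, 28, 1); // height, width, channels
            // Expected to be the same type/shape as the input
            System.out.println(bn.getOutputType(0, in));
        }
    }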
-
setNIn
public void setNIn(InputType inputType, boolean override)
Description copied from class: Layer
Set the nIn value (number of inputs, or input channels for CNNs) based on the given input type.
- Overrides: setNIn in class FeedForwardLayer
- Parameters:
  inputType - Input type for this layer
  override - If false: only set the nIn value if it's not already set. If true: set it regardless of whether it's already set or not.
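The override flag makes repeated size inference safe. A short sketch of the semantics (sizes are illustrative):

    import org.deeplearning4j.nn.conf.inputs.InputType;
    import org.deeplearning4j.nn.conf.layers.BatchNormalization;

    public class SetNInSketch {
        public static void main(String[] args) {
            BatchNormalization bn = new BatchNormalization.Builder().build();
            bn.setNIn(InputType.feedForward(128), false); // nIn not yet set: becomes 128
            bn.setNIn(InputType.feedForward(64), false);  // already set, override=false: stays 128
            bn.setNIn(InputType.feedForward(64), true);   // override=true: becomes 64
            System.out.println(bn.getNIn());
        }
    }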
-
getPreProcessorForInputType
public InputPreProcessor getPreProcessorForInputType(InputType inputType)
Description copied from class: Layer
For the given type of input to this layer, what preprocessor (if any) is required? Returns null if no preprocessor is required; otherwise returns an appropriate InputPreProcessor for this layer, such as a CnnToFeedForwardPreProcessor.
- Overrides: getPreProcessorForInputType in class FeedForwardLayer
- Parameters:
  inputType - InputType to this layer
- Returns: Null if no preprocessor is required, otherwise the type of preprocessor necessary for this layer/input combination
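For example, a plain feed-forward input to a batch normalization layer needs no reshaping, so null is the expected result (the size 100 is an illustrative assumption):

    import org.deeplearning4j.nn.conf.InputPreProcessor;
    import org.deeplearning4j.nn.conf.inputs.InputType;
    import org.deeplearning4j.nn.conf.layers.BatchNormalization;

    public class PreProcessorSketch {
        public static void main(String[] args) {
            BatchNormalization bn = new BatchNormalization.Builder().build();
            InputPreProcessor pp = bn.getPreProcessorForInputType(InputType.feedForward(100));
            System.out.println(pp); // expected: null, since no reshaping is required
        }
    }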
-
getRegularizationByParam
public List<Regularization> getRegularizationByParam(String paramName)
Description copied from class: Layer
Get the regularization types (l1/l2/weight decay) for the given parameter. Different parameters may have different regularization types.
- Specified by: getRegularizationByParam in interface TrainingConfig
- Overrides: getRegularizationByParam in class BaseLayer
- Parameters:
  paramName - Parameter name ("W", "b" etc)
- Returns: Regularization types (if any) for the specified parameter
-
getUpdaterByParam
public IUpdater getUpdaterByParam(String paramName)
Description copied from class: BaseLayer
Get the updater for the given parameter. Typically the same updater will be used for all parameters, but this is not necessarily the case.
- Specified by: getUpdaterByParam in interface TrainingConfig
- Overrides: getUpdaterByParam in class BaseLayer
- Parameters:
  paramName - Parameter name
- Returns: IUpdater for the parameter
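A hedged lookup sketch; the parameter keys "gamma" and "mean" are assumptions based on the batch normalization parameter initializer, and the running mean/variance statistics are typically excluded from gradient-based updates:

    import org.deeplearning4j.nn.conf.layers.BatchNormalization;
    import org.nd4j.linalg.learning.config.Adam;

    public class UpdaterLookupSketch {
        public static void main(String[] args) {
            BatchNormalization bn = new BatchNormalization.Builder()
                    .updater(new Adam(1e-3)) // layer-level updater for trainable params
                    .build();
            System.out.println(bn.getUpdaterByParam("gamma")); // assumed trainable key: the Adam updater
            System.out.println(bn.getUpdaterByParam("mean"));  // assumed running-statistic key: not Adam
        }
    }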
-
getMemoryReport
public LayerMemoryReport getMemoryReport(InputType inputType)
Description copied from class: Layer
This is a report of the estimated memory consumption for the given layer.
- Specified by: getMemoryReport in class Layer
- Parameters:
  inputType - Input type to the layer. Memory consumption is often a function of the input type.
- Returns: Memory report for the layer
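A quick sketch of requesting the report (the feed-forward size 64 is an illustrative assumption):

    import org.deeplearning4j.nn.conf.inputs.InputType;
    import org.deeplearning4j.nn.conf.layers.BatchNormalization;
    import org.deeplearning4j.nn.conf.memory.LayerMemoryReport;

    public class MemoryReportSketch {
        public static void main(String[] args) {
            BatchNormalization bn = new BatchNormalization.Builder().nOut(64).build();
            LayerMemoryReport report = bn.getMemoryReport(InputType.feedForward(64));
            System.out.println(report); // estimated parameter/activation memory for this input
        }
    }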
-
isPretrainParam
public boolean isPretrainParam(String paramName)
Description copied from class: Layer
Is the specified parameter a layerwise pretraining only parameter?
For example, visible bias params in an autoencoder (or decoder params in a variational autoencoder) aren't used during supervised backprop. Layers (like DenseLayer, etc.) with no pretrainable parameters will return false for all (valid) inputs.
- Specified by: isPretrainParam in interface TrainingConfig
- Overrides: isPretrainParam in class FeedForwardLayer
- Parameters:
  paramName - Parameter name/key
- Returns: True if the parameter is for layerwise pretraining only, false otherwise
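Batch normalization has no layerwise-pretraining-only parameters, so false is expected for any valid key (the "gamma" key is an assumption):

    import org.deeplearning4j.nn.conf.layers.BatchNormalization;

    public class PretrainParamSketch {
        public static void main(String[] args) {
            BatchNormalization bn = new BatchNormalization.Builder().build();
            System.out.println(bn.isPretrainParam("gamma")); // expected: false
        }
    }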