Package org.deeplearning4j.nn.api
Interface TrainingConfig
-
- All Known Implementing Classes:
AbstractLSTM, AbstractSameDiffLayer, ActivationLayer, AttentionVertex, AutoEncoder, BaseLayer, BaseOutputLayer, BasePretrainNetwork, BaseRecurrentLayer, BaseUpsamplingLayer, BaseWrapperLayer, BatchNormalization, Bidirectional, CapsuleLayer, CapsuleStrengthLayer, CenterLossOutputLayer, Cnn3DLossLayer, CnnLossLayer, Convolution1D, Convolution1DLayer, Convolution2D, Convolution3D, ConvolutionLayer, Cropping1D, Cropping2D, Cropping3D, Deconvolution2D, Deconvolution3D, DenseLayer, DepthwiseConvolution2D, DropoutLayer, DummyConfig, ElementWiseMultiplicationLayer, EmbeddingLayer, EmbeddingSequenceLayer, FeedForwardLayer, FrozenLayer, FrozenLayerWithBackprop, GlobalPoolingLayer, GravesBidirectionalLSTM, GravesLSTM, IdentityLayer, LastTimeStep, Layer, LearnedSelfAttentionLayer, LocallyConnected1D, LocallyConnected2D, LocalResponseNormalization, LossLayer, LSTM, MaskLayer, MaskZeroLayer, NoParamLayer, OCNNOutputLayer, OutputLayer, Pooling1D, Pooling2D, PReLULayer, PrimaryCapsules, RecurrentAttentionLayer, RepeatVector, RnnLossLayer, RnnOutputLayer, SameDiffLambdaLayer, SameDiffLambdaVertex, SameDiffLayer, SameDiffOutputLayer, SameDiffVertex, SelfAttentionLayer, SeparableConvolution2D, SimpleRnn, SpaceToBatchLayer, SpaceToDepthLayer, Subsampling1DLayer, Subsampling3DLayer, SubsamplingLayer, TimeDistributed, Upsampling1D, Upsampling2D, Upsampling3D, VariationalAutoencoder, Yolo2OutputLayer, ZeroPadding1DLayer, ZeroPadding3DLayer, ZeroPaddingLayer
public interface TrainingConfig
-
Method Summary
GradientNormalization getGradientNormalization()
double getGradientNormalizationThreshold()
String getLayerName()
List<Regularization> getRegularizationByParam(String paramName)
    Get the regularization types (l1/l2/weight decay) for the given parameter.
IUpdater getUpdaterByParam(String paramName)
    Get the updater for the given parameter.
boolean isPretrainParam(String paramName)
    Is the specified parameter a layerwise pretraining only parameter? For example, visible bias params in an autoencoder (or, decoder params in a variational autoencoder) aren't used during supervised backprop. Layers (like DenseLayer, etc) with no pretrainable parameters will return false for all (valid) inputs.
void setDataType(DataType dataType)
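As an illustration of how these methods are typically consumed, the following is a minimal sketch (not part of this Javadoc; it assumes DL4J's standard layer builder API and the default parameter keys "W" and "b") that builds a DenseLayer, one of the implementing classes listed above, and queries its training configuration:

import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.learning.config.Adam;

public class TrainingConfigSketch {
    public static void main(String[] args) {
        // DenseLayer (the layer configuration class) implements TrainingConfig
        DenseLayer layer = new DenseLayer.Builder()
                .name("dense0")            // layer name, returned by getLayerName()
                .nIn(10).nOut(5)
                .l2(1e-4)                  // L2 regularization (applies to weights)
                .updater(new Adam(1e-3))   // updater for all parameters
                .build();

        System.out.println(layer.getLayerName());                 // "dense0"
        System.out.println(layer.getUpdaterByParam("W"));         // Adam
        System.out.println(layer.getRegularizationByParam("W"));  // [L2Regularization]
        System.out.println(layer.isPretrainParam("W"));           // false
    }
}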
-
Method Detail
-
getLayerName
String getLayerName()
- Returns:
- Name of the layer
-
getRegularizationByParam
List<Regularization> getRegularizationByParam(String paramName)
Get the regularization types (l1/l2/weight decay) for the given parameter. Different parameters may have different regularization types.
- Parameters:
paramName - Parameter name ("W", "b" etc)
- Returns:
- Regularization types (if any) for the specified parameter
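To make the "different parameters may have different regularization types" point concrete, here is a hedged sketch; the l2 (weights) and l2Bias (biases) builder options are assumptions based on recent DL4J versions:

import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.learning.regularization.Regularization;

import java.util.List;

public class PerParamRegularization {
    public static void main(String[] args) {
        DenseLayer layer = new DenseLayer.Builder()
                .nIn(10).nOut(5)
                .l2(1e-4)       // regularization for weight parameters ("W")
                .l2Bias(1e-5)   // assumed builder option: separate L2 for biases ("b")
                .build();

        List<Regularization> wReg = layer.getRegularizationByParam("W"); // L2, coeff 1e-4
        List<Regularization> bReg = layer.getRegularizationByParam("b"); // L2, coeff 1e-5
        System.out.println(wReg + " / " + bReg);
    }
}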
-
isPretrainParam
boolean isPretrainParam(String paramName)
Is the specified parameter a layerwise pretraining only parameter?
For example, visible bias params in an autoencoder (or, decoder params in a variational autoencoder) aren't used during supervised backprop.
Layers (like DenseLayer, etc) with no pretrainable parameters will return false for all (valid) inputs.
- Parameters:
paramName - Parameter name/key
- Returns:
- True if the parameter is for layerwise pretraining only, false otherwise
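A sketch contrasting a pretrainable layer with a standard one; the visible bias key "vb" is an assumption based on DL4J's pretrain parameter initializers, not something stated on this page:

import org.deeplearning4j.nn.conf.layers.AutoEncoder;
import org.deeplearning4j.nn.conf.layers.DenseLayer;

public class PretrainParamCheck {
    public static void main(String[] args) {
        AutoEncoder ae = new AutoEncoder.Builder().nIn(10).nOut(5).build();
        DenseLayer dense = new DenseLayer.Builder().nIn(10).nOut(5).build();

        // Visible bias is used only for reconstruction during layerwise pretraining
        System.out.println(ae.isPretrainParam("vb"));   // true ("vb" key assumed)
        System.out.println(ae.isPretrainParam("W"));    // false: also used in backprop
        System.out.println(dense.isPretrainParam("W")); // false: no pretrain-only params
    }
}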
-
getUpdaterByParam
IUpdater getUpdaterByParam(String paramName)
Get the updater for the given parameter. Typically the same updater will be used for all parameters, but this is not necessarily the case.
- Parameters:
paramName - Parameter name
- Returns:
- IUpdater for the parameter
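One common way the updater can differ per parameter is a separate bias updater; a sketch assuming the builder's biasUpdater option:

import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.config.Sgd;

public class PerParamUpdater {
    public static void main(String[] args) {
        DenseLayer layer = new DenseLayer.Builder()
                .nIn(10).nOut(5)
                .updater(new Adam(1e-3))    // default updater for all parameters
                .biasUpdater(new Sgd(1e-2)) // assumed option: overrides it for biases
                .build();

        System.out.println(layer.getUpdaterByParam("W")); // Adam
        System.out.println(layer.getUpdaterByParam("b")); // Sgd
    }
}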
-
getGradientNormalization
GradientNormalization getGradientNormalization()
- Returns:
- The gradient normalization configuration
-
getGradientNormalizationThreshold
double getGradientNormalizationThreshold()
- Returns:
- The gradient normalization threshold
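These two getters read back what was configured on the layer; a minimal sketch using the GradientNormalization enum from org.deeplearning4j.nn.conf:

import org.deeplearning4j.nn.conf.GradientNormalization;
import org.deeplearning4j.nn.conf.layers.DenseLayer;

public class GradNormSketch {
    public static void main(String[] args) {
        DenseLayer layer = new DenseLayer.Builder()
                .nIn(10).nOut(5)
                .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
                .gradientNormalizationThreshold(0.5)
                .build();

        System.out.println(layer.getGradientNormalization());          // ClipElementWiseAbsoluteValue
        System.out.println(layer.getGradientNormalizationThreshold()); // 0.5
    }
}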
-
setDataType
void setDataType(DataType dataType)