public abstract class BasePretrainNetwork extends FeedForwardLayer
Nested Class Summary

Modifier and Type | Class and Description |
---|---|
static class | BasePretrainNetwork.Builder<T extends BasePretrainNetwork.Builder<T>> |
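The self-referential type parameter `T extends BasePretrainNetwork.Builder<T>` lets subclass builders chain inherited setters without losing their concrete type. A minimal sketch of that pattern with hypothetical names (only the generic structure mirrors BasePretrainNetwork.Builder):

```java
// Illustration of the self-typed ("curiously recurring") builder pattern:
// each setter returns T, so a subclass builder keeps its concrete type when chaining.
abstract class PretrainBuilder<T extends PretrainBuilder<T>> {
    protected double visibleBiasInit = 0.0;

    @SuppressWarnings("unchecked")
    public T visibleBiasInit(double init) {
        this.visibleBiasInit = init;
        return (T) this; // safe by convention: T is always the concrete builder type
    }
}

final class MyLayerBuilder extends PretrainBuilder<MyLayerBuilder> {
    public MyLayerBuilder nOut(int n) {
        // subclass-specific setter; chaining still works after visibleBiasInit(...)
        return this;
    }
}
```

Because visibleBiasInit(...) returns T = MyLayerBuilder, a chain like `new MyLayerBuilder().visibleBiasInit(0.1).nOut(100)` compiles; with a plain PretrainBuilder return type it would not.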
Field Summary

Modifier and Type | Field and Description |
---|---|
protected LossFunctions.LossFunction | lossFunction |
protected double | visibleBiasInit |

Fields inherited from class FeedForwardLayer: nIn, nOut

Fields inherited from class BaseLayer: activationFn, biasInit, biasUpdater, gainInit, gradientNormalization, gradientNormalizationThreshold, iUpdater, regularization, regularizationBias, weightInitFn, weightNoise

Fields inherited from class Layer: constraints, iDropout, layerName
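Both fields are normally set through a concrete subclass builder rather than assigned directly. A minimal sketch using AutoEncoder, a DL4J subclass of BasePretrainNetwork; the builder methods lossFunction(...) and visibleBiasInit(...) are assumed to mirror the fields above:

```java
import org.deeplearning4j.nn.conf.layers.AutoEncoder;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Sketch: configuring the BasePretrainNetwork fields via a subclass builder.
AutoEncoder layer = new AutoEncoder.Builder()
        .nIn(784)                                      // inherited from FeedForwardLayer.Builder
        .nOut(250)
        .lossFunction(LossFunctions.LossFunction.MSE)  // sets the lossFunction field
        .visibleBiasInit(0.0)                          // sets the visibleBiasInit field
        .build();
```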
Constructor Summary

Constructor and Description |
---|
BasePretrainNetwork(BasePretrainNetwork.Builder builder) |
Method Summary

Modifier and Type | Method and Description |
---|---|
boolean | isPretrainParam(String paramName) Is the specified parameter a layerwise-pretraining-only parameter? For example, visible bias params in an autoencoder (or decoder params in a variational autoencoder) aren't used during supervised backprop. Layers with no pretrainable parameters (such as DenseLayer) return false for all (valid) inputs. |
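In practice the distinction matters when layerwise pretraining is combined with supervised fitting: pretrain-only parameters are updated during the unsupervised phase and then left untouched by backprop. A hedged sketch of the two phases, assuming a MultiLayerNetwork whose configuration (conf) contains a layer derived from BasePretrainNetwork and a DataSetIterator (trainIter), both of which are placeholders here:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

// Sketch only: conf and trainIter are assumed to exist.
MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();

// Unsupervised layerwise pretraining: updates all params of pretrainable layers,
// including pretrain-only ones such as the visible bias.
net.pretrain(trainIter);

// Supervised backprop: params for which isPretrainParam(...) returns true
// are skipped during these updates.
trainIter.reset();
net.fit(trainIter);
```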
Methods inherited from class FeedForwardLayer: getOutputType, getPreProcessorForInputType, setNIn

Methods inherited from class BaseLayer: clone, getGradientNormalization, getRegularizationByParam, getUpdaterByParam, resetLayerDefaultConfig

Methods inherited from class Layer: getMemoryReport, initializeConstraints, initializer, instantiate, setDataType

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface TrainingConfig: getGradientNormalizationThreshold, getLayerName
Field Detail

protected LossFunctions.LossFunction lossFunction

protected double visibleBiasInit

Constructor Detail

public BasePretrainNetwork(BasePretrainNetwork.Builder builder)
Method Detail

public boolean isPretrainParam(String paramName)

Description copied from class: Layer

Specified by:
isPretrainParam in interface TrainingConfig

Overrides:
isPretrainParam in class FeedForwardLayer

Parameters:
paramName - Parameter name/key
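As a concrete illustration of the contract, only the visible bias of an autoencoder layer should be reported as pretrain-only. The parameter keys below ("W", "b", "vb") are assumptions based on DL4J's usual parameter naming, not taken from this page:

```java
// Sketch: querying isPretrainParam on a concrete BasePretrainNetwork subclass.
// Parameter keys are assumed from DL4J's param initializers.
AutoEncoder layer = new AutoEncoder.Builder().nIn(784).nOut(250).build();

layer.isPretrainParam("vb"); // visible bias: expected true  (pretraining only)
layer.isPretrainParam("W");  // weights:      expected false (also used in backprop)
layer.isPretrainParam("b");  // hidden bias:  expected false
```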