public abstract class BasePretrainNetwork extends FeedForwardLayer
| Modifier and Type | Class and Description |
|---|---|
| static class | BasePretrainNetwork.Builder<T extends BasePretrainNetwork.Builder<T>> |
| Modifier and Type | Field and Description |
|---|---|
| protected LossFunctions.LossFunction | lossFunction |
| protected double | visibleBiasInit |
Fields inherited from class FeedForwardLayer: nIn, nOut, timeDistributedFormat

Fields inherited from class BaseLayer: activationFn, biasInit, biasUpdater, gainInit, gradientNormalization, gradientNormalizationThreshold, iUpdater, regularization, regularizationBias, weightInitFn, weightNoise

Fields inherited from class Layer: constraints, iDropout, layerName

| Constructor and Description |
|---|
| BasePretrainNetwork(BasePretrainNetwork.Builder builder) |
| Modifier and Type | Method and Description |
|---|---|
| boolean | isPretrainParam(String paramName) Is the specified parameter a layerwise pretraining-only parameter? For example, visible bias params in an autoencoder (or decoder params in a variational autoencoder) aren't used during supervised backprop. Layers with no pretrainable parameters (such as DenseLayer) return false for all (valid) inputs. |
Methods inherited from class FeedForwardLayer: getOutputType, getPreProcessorForInputType, setNIn

Methods inherited from class BaseLayer: clone, getGradientNormalization, getRegularizationByParam, getUpdaterByParam, resetLayerDefaultConfig

Methods inherited from class Layer: getMemoryReport, initializeConstraints, initializer, instantiate, setDataType

Methods inherited from class Object: equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface TrainingConfig: getGradientNormalizationThreshold, getLayerName

protected LossFunctions.LossFunction lossFunction
protected double visibleBiasInit
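Both fields are normally set through the builder of a concrete subclass rather than assigned directly. A minimal sketch, assuming the standard DL4J AutoEncoder layer (a BasePretrainNetwork subclass) and its inherited builder methods:

```java
import org.deeplearning4j.nn.conf.layers.AutoEncoder;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class PretrainConfigExample {
    public static void main(String[] args) {
        // lossFunction and visibleBiasInit are the BasePretrainNetwork fields
        // documented above; they are set via the (inherited) builder methods.
        AutoEncoder layer = new AutoEncoder.Builder()
                .nIn(784)                                     // inherited from FeedForwardLayer.Builder
                .nOut(250)
                .lossFunction(LossFunctions.LossFunction.MSE) // reconstruction loss used during pretraining
                .visibleBiasInit(0.0)                         // initial value for the visible bias params
                .build();
        System.out.println(layer);
    }
}
```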
public BasePretrainNetwork(BasePretrainNetwork.Builder builder)
public boolean isPretrainParam(String paramName)
Specified by: isPretrainParam in interface TrainingConfig
Overrides: isPretrainParam in class FeedForwardLayer
Parameters: paramName - Parameter name/key
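A short usage sketch. The parameter keys ("W", "b", "vb") are an assumption based on DL4J's standard parameter initializers; in an autoencoder only the visible bias is a pretraining-only parameter:

```java
import org.deeplearning4j.nn.conf.layers.AutoEncoder;

public class IsPretrainParamExample {
    public static void main(String[] args) {
        AutoEncoder layer = new AutoEncoder.Builder().nIn(10).nOut(5).build();

        // "vb" (visible bias) exists only for layerwise pretraining (assumed key)
        System.out.println(layer.isPretrainParam("vb")); // true
        // "W" and "b" are also used during supervised backprop (assumed keys)
        System.out.println(layer.isPretrainParam("W"));  // false
        System.out.println(layer.isPretrainParam("b"));  // false
    }
}
```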