public class BatchNormalization extends BaseLayer<BatchNormalization>
Nested classes/interfaces inherited from interface Layer: Layer.TrainingMode, Layer.Type

| Modifier and Type | Field and Description |
|---|---|
| static String | BATCH_NORM_CUDNN_HELPER_CLASS_NAME |
| protected int | helperCountFail |
| protected int | index |
| protected List<TrainingListener> | listeners |
| protected static double | ONE_ON_2LOGE_10 |
| protected INDArray | std |
| protected INDArray | xHat |
| protected INDArray | xMu |
Fields inherited from class BaseLayer: gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, solver, weightNoiseParams

Fields inherited from class AbstractLayer: cacheMode, conf, dataType, dropoutApplied, epochCount, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners

| Constructor and Description |
|---|
| BatchNormalization(NeuralNetConfiguration conf, DataType dataType) |
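In practice this implementation class is created by DL4J from a layer configuration rather than instantiated directly. As a hedged sketch (the builder calls shown are the common ones and the surrounding network is arbitrary, chosen only for illustration), batch normalization is typically added through the corresponding configuration class:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.BatchNormalization;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class BatchNormConfigSketch {
    public static void main(String[] args) {
        // Sketch: a dense layer followed by batch normalization and an output layer.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new DenseLayer.Builder().nIn(784).nOut(256)
                        .activation(Activation.RELU).build())
                // Conf-side BatchNormalization class; when the network is initialized,
                // DL4J creates the implementation layer documented on this page.
                .layer(new BatchNormalization.Builder().build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(256).nOut(10).activation(Activation.SOFTMAX).build())
                .build();
    }
}
```

Initializing a MultiLayerNetwork from such a configuration would then produce the implementation-layer instances described below.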
| Modifier and Type | Method and Description |
|---|---|
| INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr) Perform forward pass and return the activations array with the last set input. |
| Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr) Calculate the gradient relative to the error in the next layer. |
| void | fit(INDArray input, LayerWorkspaceMgr workspaceMgr) Fit the model to the given data. |
| LayerHelper | getHelper() |
| int | getIndex() Get the layer index. |
| Collection<TrainingListener> | getListeners() Get the iteration listeners for this layer. |
| long[] | getShape(INDArray x) |
| Gradient | gradient() Get the gradient. |
| boolean | isPretrainLayer() Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.). |
| INDArray | preOutput(INDArray x, Layer.TrainingMode training, LayerWorkspaceMgr workspaceMgr) |
| void | setIndex(int index) Set the layer index. |
| void | setListeners(TrainingListener... listeners) Set the TrainingListeners for this model. |
| Layer.Type | type() Returns the layer type. |
| boolean | updaterDivideByMinibatch(String paramName) DL4J layers typically produce the sum of the gradients during the backward pass and, if required (minibatch=true), divide by the minibatch size. There are exceptions, however, such as the batch norm mean/variance estimate parameters: these "gradients" are not true gradients, but updates applied directly to the parameter vector (see the sketch following this table). |
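To make that exception concrete, here is a minimal standalone sketch of the rule updaterDivideByMinibatch expresses, not the actual DL4J implementation; the parameter names "mean", "var", and "log10stdev" are assumptions for illustration and may not match the real parameter keys defined by the layer's parameter initializer.

```java
import java.util.Set;

public class UpdaterDivideByMinibatchSketch {
    // Assumed names for the global (running) statistics parameters of batch norm.
    private static final Set<String> RUNNING_STATS = Set.of("mean", "var", "log10stdev");

    public static boolean updaterDivideByMinibatch(String paramName) {
        // gamma/beta gradients are sums over the minibatch and are averaged as usual;
        // the running mean/variance "gradients" are direct updates to the parameter
        // vector and must not be divided by the minibatch size.
        return !RUNNING_STATS.contains(paramName);
    }
}
```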
Methods inherited from class BaseLayer: calcRegularizationScore, clear, clearNoiseWeightParams, clone, computeGradientAndScore, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, hasBias, hasLayerNorm, layerConf, numParams, params, paramTable, paramTable, preOutput, preOutputWithPreNorm, score, setBackpropGradientsViewArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, setScoreWithZ, toString, update, update

Methods inherited from class AbstractLayer: activate, addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, close, conf, feedForwardMaskArray, getConfig, getEpochCount, getInput, getInputMiniBatchSize, getMaskArray, gradientAndScore, init, input, layerId, numParams, setCacheMode, setConf, setEpochCount, setInput, setInputMiniBatchSize, setListeners, setMaskArray

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface Layer: getIterationCount, setIterationCount

protected static final double ONE_ON_2LOGE_10
protected int helperCountFail
protected int index
protected List<TrainingListener> listeners
protected INDArray std
protected INDArray xMu
protected INDArray xHat
public static final String BATCH_NORM_CUDNN_HELPER_CLASS_NAME
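For orientation, and assuming the standard batch normalization formulation (the mapping to the cached arrays std, xMu, and xHat is an inference from the field names, not stated in the documentation above), the forward pass over a minibatch B computes:

```latex
% Assumed correspondence: xMu <-> x_mu, std <-> sigma_eps, xHat <-> \hat{x}
x_\mu = x - \mu_B, \qquad
\sigma_\varepsilon = \sqrt{\sigma_B^2 + \varepsilon}, \qquad
\hat{x} = \frac{x_\mu}{\sigma_\varepsilon}, \qquad
y = \gamma\,\hat{x} + \beta
```

where mu_B and sigma_B^2 are the minibatch mean and variance and epsilon is a small constant for numerical stability.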
public BatchNormalization(NeuralNetConfiguration conf, DataType dataType)
public Layer.Type type()
Returns the layer type. Specified by: type in interface Layer. Overrides: type in class AbstractLayer<BatchNormalization>.

public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Calculate the gradient relative to the error in the next layer. Specified by: backpropGradient in interface Layer. Overrides: backpropGradient in class BaseLayer<BatchNormalization>.
Parameters: epsilon - w^(L+1)*delta^(L+1); or, equivalently, dC/da, i.e. (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation. workspaceMgr - Workspace manager.
Returns: the gradient for this layer together with the epsilon for the layer below; the returned activation-gradient array is defined in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.

public void fit(INDArray input, LayerWorkspaceMgr workspaceMgr)
Fit the model to the given data. Specified by: fit in interface Model. Overrides: fit in class BaseLayer<BatchNormalization>.
Parameters: input - the data to fit the model to.

public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Perform forward pass and return the activations array with the last set input. Specified by: activate in interface Layer. Overrides: activate in class BaseLayer<BatchNormalization>.
Parameters: training - training or test mode. workspaceMgr - Workspace manager.
Returns: the activations array, defined in the ArrayType.ACTIVATIONS workspace via the workspace manager.

public Gradient gradient()
Get the gradient. Note that this method does not calculate the gradient; it returns the gradient computed by Model.computeGradientAndScore(LayerWorkspaceMgr). Specified by: gradient in interface Model. Overrides: gradient in class BaseLayer<BatchNormalization>.

public INDArray preOutput(INDArray x, Layer.TrainingMode training, LayerWorkspaceMgr workspaceMgr)
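The following is a minimal sketch (not taken from the DL4J sources) of how activate and backpropGradient fit together when driving a single layer directly. It assumes a layer instance obtained from an initialized network, e.g. via net.getLayer(i); the Pair import path may be org.nd4j.linalg.primitives.Pair in older releases, and real training code would normally use proper workspaces rather than noWorkspaces().

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.common.primitives.Pair;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class BatchNormForwardBackwardSketch {
    static void forwardBackward(Layer layer) {
        LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces(); // no workspace scoping
        INDArray features = Nd4j.rand(32, 256);                   // [minibatch, nIn]

        layer.setInput(features, mgr);                 // becomes the "last set input"
        INDArray out = layer.activate(true, mgr);      // training-mode forward pass

        INDArray epsilon = Nd4j.onesLike(out);         // dL/dOutput from the layer above
        Pair<Gradient, INDArray> p = layer.backpropGradient(epsilon, mgr);
        Gradient g = p.getFirst();                     // gamma/beta gradients plus running-stat updates
        INDArray epsilonOut = p.getSecond();           // dL/dInput, passed to the layer below
    }
}
```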
public Collection<TrainingListener> getListeners()
Get the iteration listeners for this layer. Specified by: getListeners in interface Layer. Overrides: getListeners in class AbstractLayer<BatchNormalization>.

public void setListeners(TrainingListener... listeners)
Set the TrainingListeners for this model. If any listeners have previously been set, they will be replaced by this method. Specified by: setListeners in interfaces Layer and Model. Overrides: setListeners in class AbstractLayer<BatchNormalization>.

public void setIndex(int index)
Set the layer index. Specified by: setIndex in interface Layer. Overrides: setIndex in class AbstractLayer<BatchNormalization>.

public int getIndex()
Get the layer index. Specified by: getIndex in interface Layer. Overrides: getIndex in class AbstractLayer<BatchNormalization>.

public boolean isPretrainLayer()
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.). Specified by: isPretrainLayer in interface Layer.

public LayerHelper getHelper()
Specified by: getHelper in interface Layer. Overrides: getHelper in class AbstractLayer<BatchNormalization>.

public long[] getShape(INDArray x)

public boolean updaterDivideByMinibatch(String paramName)
DL4J layers typically produce the sum of the gradients during the backward pass and, if required (minibatch=true), divide by the minibatch size. However, the batch norm mean/variance estimate parameters are an exception: these "gradients" are not true gradients, but updates applied directly to the parameter vector. Specified by: updaterDivideByMinibatch in interface Trainable. Overrides: updaterDivideByMinibatch in class AbstractLayer<BatchNormalization>.
Parameters: paramName - Name of the parameter.