public class ZeroPaddingLayer extends AbstractLayer<ZeroPaddingLayer>
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer: Layer.TrainingMode, Layer.Type
Fields inherited from class org.deeplearning4j.nn.layers.AbstractLayer: cacheMode, conf, dropoutApplied, dropoutMask, index, input, iterationListeners, maskArray, maskState, preOutput
| Constructor and Description |
|---|
| ZeroPaddingLayer(NeuralNetConfiguration conf) |
| Modifier and Type | Method and Description |
|---|---|
| org.nd4j.linalg.api.ndarray.INDArray | activate(boolean training): Trigger an activation with the last specified input |
| org.nd4j.linalg.api.ndarray.INDArray | activationMean(): Calculate the mean representation for the activation for this layer |
| org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> | backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon): Calculate the gradient relative to the error in the next layer |
| double | calcL1(boolean backpropParamsOnly): Calculate the L1 regularization term; returns 0.0 if regularization is not used. |
| double | calcL2(boolean backpropParamsOnly): Calculate the L2 regularization term; returns 0.0 if regularization is not used. |
| Layer | clone(): Clone the layer |
| boolean | isPretrainLayer(): Returns true if the layer can be trained in an unsupervised/pretrain manner (VAE, RBMs, etc.) |
| org.nd4j.linalg.api.ndarray.INDArray | preOutput(boolean training) |
| Layer.Type | type(): Returns the layer type |
Methods inherited from class org.deeplearning4j.nn.layers.AbstractLayer: accumulateScore, activate, activate, activate, activate, activate, addListeners, applyDropOutIfNecessary, applyLearningRateScoreDecay, applyMask, batchSize, calcGradient, clear, computeGradientAndScore, conf, derivativeActivation, error, feedForwardMaskArray, fit, fit, getGradientsViewArray, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, getOptimizer, getParam, gradient, gradientAndScore, init, initParams, input, iterate, layerConf, layerId, merge, numParams, numParams, params, paramTable, paramTable, preOutput, preOutput, preOutput, score, setBackpropGradientsViewArray, setCacheMode, setConf, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, transpose, update, update, validateInput
public ZeroPaddingLayer(NeuralNetConfiguration conf)
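This constructor takes the full layer configuration and is normally invoked by the framework when a network is initialized. In application code the layer is typically added through the corresponding builder in org.deeplearning4j.nn.conf.layers instead. A minimal configuration sketch, assuming the 0.9.x-era API and a ZeroPaddingLayer.Builder(padTopBottom, padLeftRight) signature (both are assumptions, not taken from this page):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.ZeroPaddingLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class ZeroPaddingConfigSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // Assumed builder signature: pad 1 row top/bottom, 1 column left/right.
                .layer(0, new ZeroPaddingLayer.Builder(1, 1).build())
                .layer(1, new ConvolutionLayer.Builder(3, 3).nOut(8).build())
                .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation(Activation.SOFTMAX).nOut(10).build())
                .setInputType(InputType.convolutional(28, 28, 1))
                .build();
        System.out.println(conf.toJson());
    }
}
```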
public org.nd4j.linalg.api.ndarray.INDArray preOutput(boolean training)
Overrides: preOutput in class AbstractLayer<ZeroPaddingLayer>
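In the forward pass, preOutput/activate amount to copying the input into the interior of a larger all-zeros array. A rough Nd4j emulation of that effect (a sketch only, assuming padding of 1 on every side of an NCHW activation; not the layer's actual implementation):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.indexing.INDArrayIndex;
import org.nd4j.linalg.indexing.NDArrayIndex;

import java.util.Arrays;

public class ZeroPadForwardSketch {
    public static void main(String[] args) {
        int pad = 1, h = 4, w = 4;                      // assumed padding and spatial size
        INDArray in = Nd4j.rand(new int[]{1, 1, h, w});
        // Allocate the padded output and copy the input into its interior.
        INDArray out = Nd4j.zeros(1, 1, h + 2 * pad, w + 2 * pad);
        out.put(new INDArrayIndex[]{
                NDArrayIndex.all(), NDArrayIndex.all(),
                NDArrayIndex.interval(pad, pad + h),
                NDArrayIndex.interval(pad, pad + w)}, in);
        System.out.println(Arrays.toString(out.shape()));  // [1, 1, 6, 6]
    }
}
```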
public boolean isPretrainLayer()
Description copied from interface: Layer
Returns true if the layer can be trained in an unsupervised/pretrain manner (VAE, RBMs, etc.)
public Layer.Type type()
Description copied from interface: Layer
Returns the layer type
Specified by: type in interface Layer
Overrides: type in class AbstractLayer<ZeroPaddingLayer>
public org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
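Because padding contributes no parameters, the returned Gradient should be empty, and the epsilon passed down is, in effect, the incoming epsilon with the padded border cropped away. A minimal Nd4j sketch of that cropping idea (illustrative only, assuming padding of 1 and a 4x4 pre-padding activation; the actual implementation may differ):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.indexing.NDArrayIndex;

import java.util.Arrays;

public class ZeroPadBackpropSketch {
    public static void main(String[] args) {
        int pad = 1, h = 4, w = 4;                      // assumed padding and pre-padding size
        INDArray epsilon = Nd4j.rand(new int[]{1, 1, h + 2 * pad, w + 2 * pad});
        // Gradient w.r.t. the input: drop the border that the forward pass added.
        INDArray epsNext = epsilon.get(
                NDArrayIndex.all(), NDArrayIndex.all(),
                NDArrayIndex.interval(pad, pad + h),
                NDArrayIndex.interval(pad, pad + w));
        System.out.println(Arrays.toString(epsNext.shape()));  // [1, 1, 4, 4]
    }
}
```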
public org.nd4j.linalg.api.ndarray.INDArray activationMean()
Description copied from interface: Layer
Calculate the mean representation for the activation for this layer
public org.nd4j.linalg.api.ndarray.INDArray activate(boolean training)
Description copied from interface: Layer
Trigger an activation with the last specified input
Parameters:
training - training or test mode

public Layer clone()
Description copied from interface: Layer
Clone the layer
Specified by: clone in interface Layer
Overrides: clone in class AbstractLayer<ZeroPaddingLayer>
public double calcL1(boolean backpropParamsOnly)
Description copied from interface: Layer
Calculate the L1 regularization term; returns 0.0 if regularization is not used.
Specified by: calcL1 in interface Layer
Overrides: calcL1 in class AbstractLayer<ZeroPaddingLayer>
Parameters:
backpropParamsOnly - If true: calculate L1 based on backprop params only. If false: calculate based on all params (including pretrain params, if any).

public double calcL2(boolean backpropParamsOnly)
Description copied from interface: Layer
Calculate the L2 regularization term; returns 0.0 if regularization is not used.
Specified by: calcL2 in interface Layer
Overrides: calcL2 in class AbstractLayer<ZeroPaddingLayer>
Parameters:
backpropParamsOnly - If true: calculate L2 based on backprop params only. If false: calculate based on all params (including pretrain params, if any).
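A zero-padding layer has no trainable parameters, so both regularization terms should come out as 0.0 regardless of the flag. A quick hypothetical check, reusing the configuration sketch from the constructor section above:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

// 'conf' is assumed to be the MultiLayerConfiguration built in the sketch above.
MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();
Layer pad = net.getLayer(0);             // the zero-padding layer is layer 0 there
System.out.println(pad.calcL1(true));    // expected: 0.0 (no parameters to regularize)
System.out.println(pad.calcL2(false));   // expected: 0.0 likewise
```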