public class TimeDistributedLayer extends BaseWrapperLayer

Nested classes inherited from interface Layer: Layer.TrainingMode, Layer.Type

Fields inherited from class BaseWrapperLayer: underlying
Constructor and Description |
---|
TimeDistributedLayer(Layer underlying, RNNFormat rnnDataFormat) |
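A TimeDistributedLayer applies its wrapped feed-forward layer to each time step of a 3d recurrent activations array independently. A minimal configuration sketch, assuming the conf-level TimeDistributed wrapper class (the package paths and builder defaults here are assumptions, not taken from this page):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.RNNFormat;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.conf.layers.recurrent.TimeDistributed;

// Sketch: apply a DenseLayer to every time step of the LSTM output.
// TimeDistributed is assumed to be the configuration-level wrapper that
// instantiates a TimeDistributedLayer internally; its two-arg constructor
// mirrors TimeDistributedLayer(Layer underlying, RNNFormat rnnDataFormat).
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new LSTM.Builder().nIn(10).nOut(16).build())
        .layer(new TimeDistributed(
                new DenseLayer.Builder().nIn(16).nOut(8).build(),
                RNNFormat.NCW)) // NCW: [minibatch, size, timeSeriesLength]
        .layer(new RnnOutputLayer.Builder().nIn(8).nOut(3).build())
        .build();
```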
Modifier and Type | Method and Description |
---|---|
INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr) Perform forward pass and return the activations array with the last set input. |
INDArray | activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr) Perform forward pass and return the activations array with the specified input. |
Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr) Calculate the gradient relative to the error in the next layer. |
Pair<INDArray,MaskState> | feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize) Feed forward the input mask array, setting it in the layer as appropriate. |
protected int[] | permuteAxes(int rank, int timeAxis) |
protected INDArray | reshape(INDArray array) |
protected INDArray | revertReshape(INDArray toRevert, long minibatch) |
void | setMaskArray(INDArray maskArray) Set the mask array. |
Methods inherited from class BaseWrapperLayer: addListeners, allowInputModification, applyConstraints, batchSize, calcRegularizationScore, clear, clearNoiseWeightParams, close, computeGradientAndScore, conf, fit, fit, getConfig, getEpochCount, getGradientsViewArray, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, getOptimizer, getParam, gradient, gradientAndScore, init, input, isPretrainLayer, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners, setParam, setParams, setParamsViewArray, setParamTable, type, update, update, updaterDivideByMinibatch
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer.

Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseWrapperLayer

Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently: dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager

Returns: the gradient for this layer and the epsilon (activation gradient) for the layer below; the returned activation gradient array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
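A minimal sketch of a forward/backward call pair on this layer. It assumes `layer` is an already-initialized TimeDistributedLayer, uses a workspace-free manager for simplicity, and substitutes an all-ones epsilon for the real activation gradient from the layer above (import paths for Pair vary across DL4J versions):

```java
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.common.primitives.Pair;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();

// Forward pass: NCW input is [minibatch, size, timeSeriesLength]
INDArray input = Nd4j.rand(new int[]{32, 16, 20});
INDArray out = layer.activate(input, true, mgr);

// Backward pass: epsilon has the same shape as the activations.
// Ones here are a stand-in for dC/da from the layer above.
INDArray epsilon = Nd4j.onesLike(out);
Pair<Gradient, INDArray> p = layer.backpropGradient(epsilon, mgr);
Gradient g = p.getFirst();        // gradients for this layer's parameters
INDArray epsOut = p.getSecond();  // activation gradient for the layer below
```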
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input.

Specified by: activate in interface Layer
Overrides: activate in class BaseWrapperLayer

Parameters:
training - training or test mode
workspaceMgr - Workspace manager

Returns: the activations array; the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
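Since this overload operates on the last set input, a typical call pairs it with the inherited setInput method (listed in the summary above). A brief sketch, reusing `layer`, `input`, and `mgr` from the earlier example:

```java
// Store the input on the layer, then activate using the stored input.
layer.setInput(input, mgr);
INDArray out = layer.activate(true, mgr); // true = training mode
```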
public INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Perform forward pass and return the activations array with the specified input.

Specified by: activate in interface Layer
Overrides: activate in class BaseWrapperLayer

Parameters:
input - the input to use
training - train or test mode
workspaceMgr - Workspace manager

Returns: the activations array; the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.

protected int[] permuteAxes(int rank, int timeAxis)
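The protected helpers permuteAxes, reshape, and revertReshape implement the time-distributed trick: move the time axis next to the minibatch axis, flatten the two into one dimension so the wrapped layer sees a plain 2d input, then undo the transformation afterwards. For example, with rank 3 and timeAxis 2, the permutation would plausibly be {0, 2, 1}. An illustration of that transformation in plain ND4J (the NCW axis order and shapes are assumptions based on the RNNFormat convention, not taken from this page):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// NCW recurrent activations: [minibatch, size, timeSeriesLength]
INDArray rnnActs = Nd4j.rand(new int[]{32, 16, 20});

// Move the time axis next to the minibatch axis: [32, 20, 16]
INDArray permuted = rnnActs.permute(0, 2, 1);

// Flatten (minibatch, time) into one axis: [32 * 20, 16].
// The wrapped feed-forward layer now processes every time step as an
// independent example. dup('c') makes the strided view contiguous
// before reshaping.
INDArray flat = permuted.dup('c').reshape(32 * 20, 16);

// After the wrapped layer runs, the inverse reshape/permute restores
// the 3d [minibatch, size, timeSeriesLength] layout.
INDArray restored = flat.reshape(32, 20, 16).permute(0, 2, 1);
```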
public void setMaskArray(INDArray maskArray)

Description copied from interface: Layer
Set the mask array. Layer.feedForwardMaskArray(INDArray, MaskState, int) should be used in preference to this.

Specified by: setMaskArray in interface Layer
Overrides: setMaskArray in class BaseWrapperLayer

Parameters:
maskArray - Mask array to set
public Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)

Description copied from interface: Layer
Feed forward the input mask array, setting it in the layer as appropriate.

Specified by: feedForwardMaskArray in interface Layer
Overrides: feedForwardMaskArray in class BaseWrapperLayer

Parameters:
maskArray - Mask array to set
currentMaskState - Current state of the mask - see MaskState
minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
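A short sketch of how a per-time-step mask flows through this method, reusing `layer` from the earlier example and assuming the usual DL4J [minibatch, timeSeriesLength] binary mask layout (the shapes and the MaskState value chosen here are assumptions):

```java
import org.deeplearning4j.nn.api.MaskState;
import org.nd4j.common.primitives.Pair;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Binary mask: 1.0 = real time step, 0.0 = padding. Shape: [minibatch, timeSeriesLength]
INDArray mask = Nd4j.ones(32, 20);
mask.putScalar(new int[]{0, 19}, 0.0); // last step of the first sequence is padding

Pair<INDArray, MaskState> fwd =
        layer.feedForwardMaskArray(mask, MaskState.Active, 32);
INDArray maskAfter = fwd.getFirst();    // mask as seen by the next layer
MaskState stateAfter = fwd.getSecond(); // updated mask state
```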