public class RnnOutputLayer extends BaseOutputLayer<RnnOutputLayer>

See Also: BaseOutputLayer, OutputLayer, Serialized Form

Nested classes/interfaces inherited from interface Layer: Layer.TrainingMode, Layer.Type
Fields inherited from class BaseOutputLayer: inputMaskArray, inputMaskArrayState, labels

Fields inherited from class BaseLayer: gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, weightNoiseParams

Fields inherited from class AbstractLayer: cacheMode, conf, dropoutApplied, dropoutMask, epochCount, index, input, iterationCount, maskArray, maskState, preOutput, trainingListeners

| Constructor and Description |
|---|
| RnnOutputLayer(NeuralNetConfiguration conf) |
| RnnOutputLayer(NeuralNetConfiguration conf, org.nd4j.linalg.api.ndarray.INDArray input) |

| Modifier and Type | Method and Description |
|---|---|
| org.nd4j.linalg.api.ndarray.INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr): Perform forward pass and return the activations array with the last set input. |
| org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> | backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon, LayerWorkspaceMgr workspaceMgr): Calculate the gradient relative to the error in the next layer. |
| org.nd4j.linalg.api.ndarray.INDArray | computeScoreForExamples(double fullNetworkL1, double fullNetworkL2, LayerWorkspaceMgr workspaceMgr): Compute the score for each example individually, after labels and input have been set. |
| double | f1Score(org.nd4j.linalg.api.ndarray.INDArray examples, org.nd4j.linalg.api.ndarray.INDArray labels): Returns the f1 score for the given examples. |
| org.nd4j.linalg.primitives.Pair<org.nd4j.linalg.api.ndarray.INDArray,MaskState> | feedForwardMaskArray(org.nd4j.linalg.api.ndarray.INDArray maskArray, MaskState currentMaskState, int minibatchSize): Feed forward the input mask array, setting it in the layer as appropriate. |
| org.nd4j.linalg.api.ndarray.INDArray | getInput() |
| protected org.nd4j.linalg.api.ndarray.INDArray | getLabels2d(LayerWorkspaceMgr workspaceMgr, ArrayType arrayType) |
| protected org.nd4j.linalg.api.ndarray.INDArray | preOutput2d(boolean training, LayerWorkspaceMgr workspaceMgr) |
| void | setMaskArray(org.nd4j.linalg.api.ndarray.INDArray maskArray): Set the mask array. |
| Layer.Type | type(): Returns the layer type. |

Methods inherited from class BaseOutputLayer: activate, applyMask, clear, computeGradientAndScore, computeScore, f1Score, fit, fit, fit, fit, fit, getLabels, gradient, gradientAndScore, hasBias, isPretrainLayer, labelProbabilities, needsLabels, numLabels, predict, predict, setLabels, setScoreWithZ

Methods inherited from class BaseLayer: accumulateScore, calcL1, calcL2, clearNoiseWeightParams, clone, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, initParams, layerConf, numParams, params, paramTable, paramTable, preOutput, score, setBackpropGradientsViewArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, toString, transpose, update, update

Methods inherited from class AbstractLayer: addListeners, applyConstraints, applyDropOutIfNecessary, assertInputSet, batchSize, conf, getIndex, getInputMiniBatchSize, getListeners, getMaskArray, init, input, layerId, numParams, setCacheMode, setConf, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, validateInput

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface Layer: calcL1, calcL2, clearNoiseWeightParams, clone, getEpochCount, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, setCacheMode, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners, transpose

Methods inherited from interface Model: accumulateScore, addListeners, applyConstraints, batchSize, conf, fit, getGradientsViewArray, getOptimizer, getParam, init, initParams, input, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setConf, setParam, setParams, setParamsViewArray, setParamTable, update, update, validateInput

public RnnOutputLayer(NeuralNetConfiguration conf)
public RnnOutputLayer(NeuralNetConfiguration conf, org.nd4j.linalg.api.ndarray.INDArray input)
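
In practice this layer is seldom constructed directly from a NeuralNetConfiguration; it is normally created by the framework from an org.deeplearning4j.nn.conf.layers.RnnOutputLayer configuration. A minimal, illustrative configuration sketch (layer sizes, activation and loss function below are assumptions for the example, not requirements of this API):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class RnnOutputLayerConfigSketch {
    public static void main(String[] args) {
        // Illustrative sizes: 10 input features, 20 LSTM units, 3 output classes per time step
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new LSTM.Builder().nIn(10).nOut(20).activation(Activation.TANH).build())
                .layer(1, new org.deeplearning4j.nn.conf.layers.RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(20).nOut(3).activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        // net.getLayer(1) is an instance of the implementation class documented on this page
    }
}
```
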
public org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer.
Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseOutputLayer<RnnOutputLayer>
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
Returns: the Gradient for this layer and the epsilon (activation gradient) for the layer below; the returned array is defined in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
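
A minimal sketch, assuming an already-initialized RnnOutputLayer whose input and labels have been set; as for other output layers, the error signal is derived from the labels, so epsilon is typically passed as null here (an assumption about typical driving code, not part of this signature):

```java
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.layers.recurrent.RnnOutputLayer;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.primitives.Pair;

public class BackpropSketch {
    // 'layer' is assumed to be an initialized RnnOutputLayer with input and labels already set
    static INDArray backprop(RnnOutputLayer layer) {
        LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
        // For an output layer the error signal comes from the labels set on the layer,
        // so epsilon is typically null rather than an externally supplied dC/da array
        Pair<Gradient, INDArray> p = layer.backpropGradient(null, mgr);
        Gradient g = p.getFirst();   // parameter gradients for this layer
        return p.getSecond();        // 3D activation gradient passed to the layer below
    }
}
```
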
public double f1Score(org.nd4j.linalg.api.ndarray.INDArray examples, org.nd4j.linalg.api.ndarray.INDArray labels)
Specified by: f1Score in interface Classifier
Overrides: f1Score in class BaseOutputLayer<RnnOutputLayer>
Parameters:
examples - the examples to classify (one example in each row)
labels - the true labels
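
For recurrent networks the f1 score is more commonly computed at the network level via an Evaluation, which handles time-series reshaping and masking; calling f1Score on the layer directly is rare. A minimal sketch, assuming a trained network and a test-set iterator:

```java
import org.deeplearning4j.eval.Evaluation;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class F1Sketch {
    // 'net' and 'testIterator' are assumed: a trained network ending in an RnnOutputLayer,
    // and an iterator over 3D (time-series) test data
    static double f1(MultiLayerNetwork net, DataSetIterator testIterator) {
        Evaluation eval = net.evaluate(testIterator);   // evaluates every time step (mask-aware)
        return eval.f1();                               // harmonic mean of precision and recall
    }
}
```
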
public org.nd4j.linalg.api.ndarray.INDArray getInput()
Overrides: getInput in class AbstractLayer<RnnOutputLayer>

public Layer.Type type()
Description copied from interface: Layer
Returns the layer type.
Specified by: type in interface Layer
Overrides: type in class AbstractLayer<RnnOutputLayer>

protected org.nd4j.linalg.api.ndarray.INDArray preOutput2d(boolean training, LayerWorkspaceMgr workspaceMgr)
Overrides: preOutput2d in class BaseOutputLayer<RnnOutputLayer>

protected org.nd4j.linalg.api.ndarray.INDArray getLabels2d(LayerWorkspaceMgr workspaceMgr, ArrayType arrayType)
Specified by: getLabels2d in class BaseOutputLayer<RnnOutputLayer>

public org.nd4j.linalg.api.ndarray.INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input.
Specified by: activate in interface Layer
Overrides: activate in class BaseLayer<RnnOutputLayer>
Parameters:
training - training or test mode
workspaceMgr - Workspace manager
Returns: the activations (layer output) array; the returned array is defined in the ArrayType.ACTIVATIONS workspace via the workspace manager.
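
For this layer the activations are 3D, shaped [miniBatchSize, nOut, timeSeriesLength]. A minimal sketch, assuming an already-initialized layer whose input has been set, using a no-workspace manager for simplicity:

```java
import org.deeplearning4j.nn.layers.recurrent.RnnOutputLayer;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.linalg.api.ndarray.INDArray;

public class ActivateSketch {
    // 'layer' is assumed to be an initialized RnnOutputLayer with its input already set
    static INDArray timeStepOutput(RnnOutputLayer layer) {
        // training = false for inference
        INDArray out = layer.activate(false, LayerWorkspaceMgr.noWorkspaces());
        // out has shape [miniBatchSize, nOut, timeSeriesLength], e.g. per-time-step class
        // probabilities when a softmax activation is configured
        return out;
    }
}
```
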
public void setMaskArray(org.nd4j.linalg.api.ndarray.INDArray maskArray)
Description copied from interface: Layer
Set the mask array. Layer.feedForwardMaskArray(INDArray, MaskState, int) should be used in preference to this.
Specified by: setMaskArray in interface Layer
Overrides: setMaskArray in class AbstractLayer<RnnOutputLayer>
Parameters:
maskArray - Mask array to set
public org.nd4j.linalg.primitives.Pair<org.nd4j.linalg.api.ndarray.INDArray,MaskState> feedForwardMaskArray(org.nd4j.linalg.api.ndarray.INDArray maskArray, MaskState currentMaskState, int minibatchSize)
Description copied from interface: Layer
Feed forward the input mask array, setting it in the layer as appropriate.
Specified by: feedForwardMaskArray in interface Layer
Overrides: feedForwardMaskArray in class AbstractLayer<RnnOutputLayer>
Parameters:
maskArray - Mask array to set
currentMaskState - Current state of the mask - see MaskState
minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
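
For recurrent layers the mask is a 2D array of shape [miniBatchSize, timeSeriesLength], with 1.0 marking time steps that contain data and 0.0 marking padding. Masks are usually attached to the DataSet rather than passed to feedForwardMaskArray directly; the sketch below, with an assumed per-example lengths array, shows how such a mask can be built:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class MaskSketch {
    // Build a [miniBatchSize, timeSeriesLength] mask from per-example sequence lengths
    static INDArray buildMask(int[] sequenceLengths, int timeSeriesLength) {
        int miniBatchSize = sequenceLengths.length;
        INDArray mask = Nd4j.zeros(miniBatchSize, timeSeriesLength);
        for (int i = 0; i < miniBatchSize; i++) {
            for (int t = 0; t < sequenceLengths[i]; t++) {
                mask.putScalar(new int[]{i, t}, 1.0);   // 1.0 = time step present, 0.0 = padding
            }
        }
        return mask;
    }
}
```
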
public org.nd4j.linalg.api.ndarray.INDArray computeScoreForExamples(double fullNetworkL1, double fullNetworkL2, LayerWorkspaceMgr workspaceMgr)
Compute the score for each example individually, after labels and input have been set.
Specified by: computeScoreForExamples in interface IOutputLayer
Overrides: computeScoreForExamples in class BaseOutputLayer<RnnOutputLayer>
Parameters:
fullNetworkL1 - L1 regularization term for the entire network (or 0.0 to not include regularization)
fullNetworkL2 - L2 regularization term for the entire network (or 0.0 to not include regularization)
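
This method is usually invoked indirectly, at the network level. A minimal sketch using MultiLayerNetwork.scoreExamples, assuming a trained network whose final layer is an RnnOutputLayer and a DataSet of time-series features and labels:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;

public class ScoreExamplesSketch {
    // 'net' is assumed to be a trained network ending in an RnnOutputLayer;
    // 'data' holds 3D time-series features and labels (plus masks, if any)
    static INDArray perExampleScores(MultiLayerNetwork net, DataSet data) {
        // true = add the network's L1/L2 regularization terms to each example's score
        return net.scoreExamples(data, true);   // one score (loss value) per example
    }
}
```
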