public class EmbeddingLayer extends BaseLayer<EmbeddingLayer>
Nested classes/interfaces inherited from interface Layer:
Layer.TrainingMode, Layer.Type

Fields inherited from class BaseLayer:
gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, solver, weightNoiseParams

Fields inherited from class AbstractLayer:
cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
Constructor Summary

| Constructor and Description |
|---|
| EmbeddingLayer(NeuralNetConfiguration conf, org.nd4j.linalg.api.buffer.DataType dataType) |
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr) - Perform forward pass and return the activations array with the last set input |
| protected void | applyDropOutIfNecessary(boolean training, LayerWorkspaceMgr workspaceMgr) |
| Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr) - Calculate the gradient relative to the error in the next layer |
| boolean | hasBias() - Does this layer have a bias term? Many layers (dense, convolutional, output, embedding) have biases by default, but no-bias versions are possible via configuration |
| boolean | isPretrainLayer() - Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc) |
| protected INDArray | preOutput(boolean training, LayerWorkspaceMgr workspaceMgr) |
Methods inherited from class BaseLayer:
calcRegularizationScore, clear, clearNoiseWeightParams, clone, computeGradientAndScore, fit, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, gradient, hasLayerNorm, layerConf, numParams, params, paramTable, paramTable, preOutputWithPreNorm, score, setBackpropGradientsViewArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, setScoreWithZ, toString, update, update

Methods inherited from class AbstractLayer:
activate, addListeners, allowInputModification, applyConstraints, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, conf, feedForwardMaskArray, getConfig, getEpochCount, getHelper, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, gradientAndScore, init, input, layerId, numParams, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, type, updaterDivideByMinibatch

Methods inherited from class java.lang.Object:
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface Layer:
getIterationCount, setIterationCount
Constructor Detail

EmbeddingLayer
public EmbeddingLayer(NeuralNetConfiguration conf, org.nd4j.linalg.api.buffer.DataType dataType)
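This implementation class is normally not constructed directly; the layer is added to a network through the configuration class org.deeplearning4j.nn.conf.layers.EmbeddingLayer (same simple name, different package). A minimal sketch, assuming an illustrative vocabulary of 10,000 indices, a 128-dimensional embedding, and 10 output classes:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class EmbeddingConfigSketch {
    public static void main(String[] args) {
        int vocabSize = 10_000;  // illustrative: number of distinct input indices
        int embeddingDim = 128;  // illustrative: embedding vector size
        int numClasses = 10;     // illustrative: output classes

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // Input to the embedding layer is one integer index in [0, nIn)
                // per example; each index selects a row of the weight matrix
                .layer(0, new EmbeddingLayer.Builder()
                        .nIn(vocabSize)
                        .nOut(embeddingDim)
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation(Activation.SOFTMAX)
                        .nIn(embeddingDim)
                        .nOut(numClasses)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init(); // allocates the [vocabSize, embeddingDim] weight matrix
    }
}
```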
Method Detail

backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseLayer<EmbeddingLayer>
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or equivalently dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation
workspaceMgr - Workspace manager
Returns: Pair of (Gradient, INDArray), where the INDArray is the epsilon (activation gradient) needed by the next (lower) layer. The returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
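For an embedding layer this gradient has a simple structure: the forward pass selects one weight-matrix row per example, so epsilon flows only into those rows. A minimal sketch with plain arrays (illustrative, not DL4J's internal implementation):

```java
/** Illustrative sketch, not DL4J's internal code: dC/dW for an embedding lookup. */
public class EmbeddingBackpropSketch {
    /**
     * Accumulates the weight gradient for a batch of lookups.
     *
     * @param indices one lookup index per example (the layer's input)
     * @param epsilon dC/da from the layer above, shape [batch][embeddingDim]
     * @param gradW   dC/dW, shape [vocabSize][embeddingDim], accumulated in place
     */
    static void accumulateWeightGradient(int[] indices, double[][] epsilon, double[][] gradW) {
        for (int i = 0; i < indices.length; i++) {
            int row = indices[i]; // the row selected in the forward pass
            for (int j = 0; j < epsilon[i].length; j++) {
                // Only the selected rows receive any gradient; all other
                // rows of gradW are untouched by this example
                gradW[row][j] += epsilon[i][j];
            }
        }
    }
}
```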
preOutput
protected INDArray preOutput(boolean training, LayerWorkspaceMgr workspaceMgr)
Overrides: preOutput in class BaseLayer<EmbeddingLayer>
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
Specified by: activate in interface Layer
Overrides: activate in class BaseLayer<EmbeddingLayer>
Parameters:
training - training or test mode
workspaceMgr - Workspace manager
Returns: the activations array. The returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
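Conceptually, this forward pass is a one-hot matrix product implemented as a row lookup: each integer input selects one row of the weight matrix (plus the bias, if present). A minimal sketch with plain arrays (illustrative):

```java
/** Illustrative sketch: an embedding forward pass as a row lookup. */
public class EmbeddingForwardSketch {
    /** Equivalent to oneHot(indices) times W, without building the one-hot matrix. */
    static double[][] activate(int[] indices, double[][] W) {
        double[][] out = new double[indices.length][];
        for (int i = 0; i < indices.length; i++) {
            out[i] = W[indices[i]].clone(); // copy the selected embedding row
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] W = { {0.1, 0.2}, {0.3, 0.4}, {0.5, 0.6} }; // 3-word vocab, dim 2
        double[][] acts = activate(new int[] {2, 0}, W);
        System.out.println(acts[0][0] + ", " + acts[0][1]); // 0.5, 0.6
    }
}
```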
hasBias
public boolean hasBias()
Description copied from class: BaseLayer
Does this layer have a bias term? Many layers (dense, convolutional, output, embedding) have biases by default, but no-bias versions are possible via configuration
Overrides: hasBias in class BaseLayer<EmbeddingLayer>
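The no-bias variant mentioned above is selected on the configuration class, not on this implementation class. A sketch, assuming the hasBias(boolean) option on org.deeplearning4j.nn.conf.layers.EmbeddingLayer.Builder:

```java
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;

public class NoBiasEmbeddingSketch {
    public static void main(String[] args) {
        // Assumed builder option: hasBias(false) drops the bias (b) parameter,
        // leaving only the [vocabSize, embeddingDim] weight matrix
        EmbeddingLayer noBias = new EmbeddingLayer.Builder()
                .nIn(10_000)   // illustrative vocabulary size
                .nOut(128)     // illustrative embedding dimension
                .hasBias(false)
                .build();
        System.out.println(noBias);
    }
}
```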
isPretrainLayer
public boolean isPretrainLayer()
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
Specified by: isPretrainLayer in interface Layer
applyDropOutIfNecessary
protected void applyDropOutIfNecessary(boolean training, LayerWorkspaceMgr workspaceMgr)
Overrides: applyDropOutIfNecessary in class AbstractLayer<EmbeddingLayer>