public class EmbeddingLayer extends BaseLayer<EmbeddingLayer>
Nested classes/interfaces inherited from interface Layer: Layer.TrainingMode, Layer.Type

Fields inherited from class BaseLayer: gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, solver, weightNoiseParams

Fields inherited from class AbstractLayer: cacheMode, conf, dropoutApplied, dropoutMask, epochCount, index, input, iterationCount, maskArray, maskState, preOutput, trainingListeners

| Constructor and Description |
|---|
| EmbeddingLayer(NeuralNetConfiguration conf) |
| Modifier and Type | Method and Description |
|---|---|
| org.nd4j.linalg.api.ndarray.INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr) Perform forward pass and return the activations array with the last set input |
| protected void | applyDropOutIfNecessary(boolean training, LayerWorkspaceMgr workspaceMgr) |
| org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> | backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon, LayerWorkspaceMgr workspaceMgr) Calculate the gradient relative to the error in the next layer |
| boolean | hasBias() Does this layer have a bias term? Many layers (dense, convolutional, output, embedding) have biases by default, but no-bias versions are possible via configuration |
| boolean | isPretrainLayer() Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc) |
| protected org.nd4j.linalg.api.ndarray.INDArray | preOutput(boolean training, LayerWorkspaceMgr workspaceMgr) |
Methods inherited from class BaseLayer: accumulateScore, calcL1, calcL2, clear, clearNoiseWeightParams, clone, computeGradientAndScore, fit, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, gradient, initParams, layerConf, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, setScoreWithZ, toString, transpose, update, update

Methods inherited from class AbstractLayer: activate, addListeners, applyConstraints, applyMask, assertInputSet, batchSize, conf, feedForwardMaskArray, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, gradientAndScore, init, input, layerId, numParams, setCacheMode, setConf, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, type, validateInput

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface Layer: getEpochCount, getIterationCount, setEpochCount, setIterationCount

public EmbeddingLayer(NeuralNetConfiguration conf)
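In typical DL4J usage this implementation class is not constructed directly; the layer is declared through the configuration builder API, and the network instantiates the implementation internally. A minimal sketch, assuming the org.deeplearning4j.nn.conf.layers.EmbeddingLayer configuration class and the standard builder API of this release (vocabulary size, embedding dimension, and output size are invented for the example):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class EmbeddingConfigSketch {
    public static void main(String[] args) {
        int numClasses = 10_000; // input indices in range 0..numClasses-1
        int embeddingDim = 128;  // size of each embedding vector

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // Embedding layer: expects one integer index per example,
                // i.e. input of shape [numExamples, 1], not a one-hot matrix
                .layer(0, new EmbeddingLayer.Builder()
                        .nIn(numClasses)
                        .nOut(embeddingDim)
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(embeddingDim)
                        .nOut(3)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
    }
}
```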
public org.nd4j.linalg.primitives.Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseLayer<EmbeddingLayer>
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
Returns: a Pair of the Gradient for this layer and the epsilon (activation gradient) array; the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
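Because the forward pass of an embedding layer is effectively a row lookup into the weight matrix (see activate below), only the weight rows selected by the input indices receive a nonzero gradient. A hand-rolled illustration of that sparsity, not the actual implementation; the 10-class vocabulary and all shapes are invented for the example:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class EmbeddingBackpropSketch {
    public static void main(String[] args) {
        INDArray input   = Nd4j.create(new double[][]{{2}, {7}, {2}}); // [miniBatch, 1] class indices
        INDArray epsilon = Nd4j.rand(3, 4);      // dC/da from the layer above: [miniBatch, embeddingDim]
        INDArray weightGrad = Nd4j.zeros(10, 4); // dC/dW, same shape as the weight matrix W

        // Accumulate epsilon into the gradient row for each index that was used;
        // every other row of dC/dW stays zero.
        for (int i = 0; i < input.rows(); i++) {
            int idx = (int) input.getDouble(i, 0);
            weightGrad.putRow(idx, weightGrad.getRow(idx).add(epsilon.getRow(i)));
        }
    }
}
```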
protected org.nd4j.linalg.api.ndarray.INDArray preOutput(boolean training, LayerWorkspaceMgr workspaceMgr)
Overrides: preOutput in class BaseLayer<EmbeddingLayer>

public org.nd4j.linalg.api.ndarray.INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Perform forward pass and return the activations array with the last set input
Description copied from interface: Layer
Specified by: activate in interface Layer
Overrides: activate in class BaseLayer<EmbeddingLayer>
Parameters:
training - training or test mode
workspaceMgr - Workspace manager
Returns: the activations array; it should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
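Conceptually, the forward pass selects row i of the weight matrix for input index i, which is equivalent to multiplying W by a one-hot vector but avoids materializing the one-hot representation. A standalone sketch of those semantics (illustrative only, with invented shapes; not the internal implementation):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class EmbeddingLookupSketch {
    public static void main(String[] args) {
        INDArray weights = Nd4j.rand(10, 4); // W: [numClasses, embeddingDim]
        INDArray input   = Nd4j.create(new double[][]{{2}, {7}, {2}}); // [miniBatch, 1]

        // Output row i is simply row input[i] of W
        // (plus the bias vector, when hasBias() is true)
        INDArray out = Nd4j.create(3, 4);
        for (int i = 0; i < input.rows(); i++) {
            int idx = (int) input.getDouble(i, 0);
            out.putRow(i, weights.getRow(idx));
        }
    }
}
```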
public boolean hasBias()
Description copied from class: BaseLayer
Overrides: hasBias in class BaseLayer<EmbeddingLayer>
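The no-bias variant mentioned in the method summary is selected at configuration time. A one-line sketch, assuming the hasBias(boolean) option on the configuration builder used in the earlier example (dimensions invented):

```java
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;

public class NoBiasEmbeddingSketch {
    public static void main(String[] args) {
        // Configuration-time opt-out of the bias term;
        // hasBias() on the resulting layer then returns false
        EmbeddingLayer noBias = new EmbeddingLayer.Builder()
                .nIn(10_000)
                .nOut(128)
                .hasBias(false)
                .build();
    }
}
```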
public boolean isPretrainLayer()
Specified by: isPretrainLayer in interface Layer

protected void applyDropOutIfNecessary(boolean training, LayerWorkspaceMgr workspaceMgr)
Overrides: applyDropOutIfNecessary in class AbstractLayer<EmbeddingLayer>