public abstract class BasePretrainNetwork<LayerConfT extends BasePretrainNetwork> extends BaseLayer<LayerConfT>

Nested classes/interfaces inherited from interface Layer: Layer.TrainingMode, Layer.Type

Fields inherited from class BaseLayer: gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, solver, weightNoiseParams

Fields inherited from class AbstractLayer: cacheMode, conf, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners

| Constructor and Description |
|---|
| BasePretrainNetwork(NeuralNetConfiguration conf) |
| BasePretrainNetwork(NeuralNetConfiguration conf, INDArray input) |
| Modifier and Type | Method and Description |
|---|---|
| Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr): Calculate the gradient relative to the error in the next layer |
| double | calcL1(boolean backpropParamsOnly): Calculate the L1 regularization term; 0.0 if regularization is not used |
| double | calcL2(boolean backpropParamsOnly): Calculate the L2 regularization term; 0.0 if regularization is not used |
| protected Gradient | createGradient(INDArray wGradient, INDArray vBiasGradient, INDArray hBiasGradient) |
| INDArray | getCorruptedInput(INDArray x, double corruptionLevel): Corrupts the given input by doing a binomial sampling given the corruption level |
| long | numParams(): The number of parameters for the model, for backprop (i.e., excluding the visible bias) |
| long | numParams(boolean backwards): The number of parameters for the model |
| INDArray | params(): Returns the parameters of the neural network as a flattened row vector |
| Map<String,INDArray> | paramTable(boolean backpropParamsOnly): Table of parameters by key, for backprop; for many models (dense layers, etc.) all parameters are backprop parameters |
| abstract Pair<INDArray,INDArray> | sampleHiddenGivenVisible(INDArray v): Sample the hidden distribution given the visible |
| abstract Pair<INDArray,INDArray> | sampleVisibleGivenHidden(INDArray h): Sample the visible distribution given the hidden |
| void | setParams(INDArray params): Set the parameters for this model |
| protected void | setScoreWithZ(INDArray z) |
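The two abstract sampling methods are the core contract a subclass implements. The sketch below illustrates, in plain Java with illustrative names (not the actual INDArray-based API), what sampleHiddenGivenVisible typically computes for a binary RBM-style layer: activation probabilities p(h=1 | v) = sigmoid(Wv + b_h), plus Bernoulli samples drawn from them, returned together as a pair.

```java
import java.util.Random;

// Conceptual sketch of the sampleHiddenGivenVisible contract for a binary
// RBM-style layer. Plain-Java stand-in with illustrative names; the real
// method operates on INDArrays and is implemented by subclasses.
public class HiddenSampling {

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Returns {probabilities, samples}: activation probabilities first,
    // binary Bernoulli samples drawn from those probabilities second.
    static double[][] sampleHiddenGivenVisible(double[][] w, double[] hBias,
                                               double[] v, Random rng) {
        int nHidden = hBias.length;
        double[] probs = new double[nHidden];
        double[] samples = new double[nHidden];
        for (int j = 0; j < nHidden; j++) {
            double pre = hBias[j];                       // pre-activation z_j
            for (int i = 0; i < v.length; i++) pre += w[i][j] * v[i];
            probs[j] = sigmoid(pre);                     // p(h_j = 1 | v)
            samples[j] = rng.nextDouble() < probs[j] ? 1.0 : 0.0;
        }
        return new double[][]{probs, samples};
    }

    public static void main(String[] args) {
        double[][] w = {{0.5, -0.5}, {0.3, 0.8}};        // 2 visible x 2 hidden
        double[] hBias = {0.0, 0.1};
        double[] v = {1.0, 0.0};
        double[][] out = sampleHiddenGivenVisible(w, hBias, v, new Random(7));
        System.out.printf("p(h1)=%.3f p(h2)=%.3f%n", out[0][0], out[0][1]);
    }
}
```

sampleVisibleGivenHidden is the mirror image (visible probabilities and samples given hidden), and alternating the two methods yields one step of Gibbs sampling.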
Methods inherited from class BaseLayer: activate, clear, clearNoiseWeightParams, clone, computeGradientAndScore, fit, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, gradient, hasBias, layerConf, paramTable, preOutput, score, setBackpropGradientsViewArray, setParam, setParams, setParamsViewArray, setParamTable, toString, update, update

Methods inherited from class AbstractLayer: activate, addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, conf, feedForwardMaskArray, getConfig, getEpochCount, getHelper, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, gradientAndScore, init, input, layerId, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, type, updaterDivideByMinibatch

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface Layer: getIterationCount, isPretrainLayer, setIterationCount

public BasePretrainNetwork(NeuralNetConfiguration conf)
public BasePretrainNetwork(NeuralNetConfiguration conf, INDArray input)
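Several of the methods detailed below (params(), setParams(), paramTable()) treat the layer's parameters as named views into a single flattened row vector. A minimal plain-Java sketch of that flatten/restore contract, with hypothetical names (the actual implementation uses INDArray views over one backing array):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual sketch of the params()/setParams() contract: the parameter
// table (e.g. "W", "hb", "vb") is viewed as one flattened row vector.
// Plain-Java stand-in, not the ND4J implementation.
public class FlattenedParams {

    // params(): flatten the parameter table into one row vector, in order.
    static double[] flatten(Map<String, double[]> table) {
        int n = table.values().stream().mapToInt(a -> a.length).sum();
        double[] flat = new double[n];
        int pos = 0;
        for (double[] p : table.values()) {
            System.arraycopy(p, 0, flat, pos, p.length);
            pos += p.length;
        }
        return flat;
    }

    // setParams(): copy values from a flat vector back into the table.
    static void setParams(Map<String, double[]> table, double[] flat) {
        int pos = 0;
        for (double[] p : table.values()) {
            System.arraycopy(flat, pos, p, 0, p.length);
            pos += p.length;
        }
        if (pos != flat.length)
            throw new IllegalArgumentException(
                "Expected " + pos + " values, got " + flat.length);
    }

    public static void main(String[] args) {
        Map<String, double[]> table = new LinkedHashMap<>();
        table.put("W",  new double[]{0.1, 0.2, 0.3, 0.4}); // weights
        table.put("hb", new double[]{0.0, 0.0});           // hidden bias
        table.put("vb", new double[]{0.5});                // visible bias

        double[] flat = flatten(table);
        System.out.println(flat.length);        // 7
        setParams(table, flat);                 // round-trips unchanged
        System.out.println(table.get("vb")[0]); // 0.5
    }
}
```

This also hints at why numParams() and numParams(boolean) can differ: the backprop view of the vector may exclude pretrain-only parameters such as the visible bias.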
public INDArray getCorruptedInput(INDArray x, double corruptionLevel)
Parameters:
x - the input to corrupt
corruptionLevel - the corruption value

protected Gradient createGradient(INDArray wGradient, INDArray vBiasGradient, INDArray hBiasGradient)
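As a rough sketch of what the corruption step amounts to, assuming masking noise where each element is kept with probability 1 - corruptionLevel (plain Java with illustrative names; the real method operates on INDArrays):

```java
import java.util.Random;

// Conceptual sketch of getCorruptedInput: each input element is kept with
// probability (1 - corruptionLevel) and zeroed otherwise, i.e. the input is
// multiplied elementwise by a Bernoulli mask. Illustrative stand-in only.
public class BinomialCorruption {

    static double[] corrupt(double[] x, double corruptionLevel, Random rng) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            // Draw from Binomial(1, 1 - corruptionLevel): 1 = keep, 0 = drop.
            int keep = rng.nextDouble() < 1.0 - corruptionLevel ? 1 : 0;
            out[i] = keep * x[i];
        }
        return out;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 2.0, 3.0, 4.0};
        double[] corrupted = corrupt(x, 0.3, new Random(42));
        // Each entry is either the original value or 0.0.
        for (double v : corrupted) System.out.print(v + " ");
    }
}
```

Corrupting the input this way is the standard setup for denoising pretraining: the layer is trained to reconstruct the clean x from its corrupted version.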
public long numParams(boolean backwards)

Description copied from interface: Model
The number of parameters for the model.

Specified by: numParams in interface Model
Overrides: numParams in class AbstractLayer<LayerConfT extends BasePretrainNetwork>

public abstract Pair<INDArray,INDArray> sampleHiddenGivenVisible(INDArray v)

Sample the hidden distribution given the visible.

Parameters:
v - the visible to sample from

public abstract Pair<INDArray,INDArray> sampleVisibleGivenHidden(INDArray h)

Sample the visible distribution given the hidden.

Parameters:
h - the hidden to sample from

protected void setScoreWithZ(INDArray z)

Overrides: setScoreWithZ in class BaseLayer<LayerConfT extends BasePretrainNetwork>

public Map<String,INDArray> paramTable(boolean backpropParamsOnly)

Description copied from interface: Model
Specified by: paramTable in interface Model
Specified by: paramTable in interface Trainable
Overrides: paramTable in class BaseLayer<LayerConfT extends BasePretrainNetwork>

Parameters:
backpropParamsOnly - If true, return backprop params only. If false: return all params (equivalent to paramsTable())

public INDArray params()

Description copied from class: BaseLayer
Returns the parameters of the neural network as a flattened row vector.

Specified by: params in interface Model
Specified by: params in interface Trainable
Overrides: params in class BaseLayer<LayerConfT extends BasePretrainNetwork>

public long numParams()

The number of parameters for the model, for backprop (i.e., excluding the visible bias).

Specified by: numParams in interface Model
Specified by: numParams in interface Trainable
Overrides: numParams in class BaseLayer<LayerConfT extends BasePretrainNetwork>

public void setParams(INDArray params)

Description copied from interface: Model
Set the parameters for this model.

Specified by: setParams in interface Model
Overrides: setParams in class BaseLayer<LayerConfT extends BasePretrainNetwork>

Parameters:
params - the parameters for the model

public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer.

Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseLayer<LayerConfT extends BasePretrainNetwork>

Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently: dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager

Returns: a Pair of (Gradient, INDArray); the returned epsilon array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager

public double calcL2(boolean backpropParamsOnly)

Description copied from interface: Layer
Calculate the L2 regularization term; 0.0 if regularization is not used.

Specified by: calcL2 in interface Layer
Overrides: calcL2 in class BaseLayer<LayerConfT extends BasePretrainNetwork>

Parameters:
backpropParamsOnly - If true: calculate L2 based on backprop params only. If false: calculate based on all params (including pretrain params, if any)

public double calcL1(boolean backpropParamsOnly)

Description copied from interface: Layer
Calculate the L1 regularization term; 0.0 if regularization is not used.

Specified by: calcL1 in interface Layer
Overrides: calcL1 in class BaseLayer<LayerConfT extends BasePretrainNetwork>

Parameters:
backpropParamsOnly - If true: calculate L1 based on backprop params only. If false: calculate based on all params (including pretrain params, if any)

Copyright © 2018. All rights reserved.