public class GaussianRectifiedLinearRBM extends RBM
Modifier and Type | Class and Description |
---|---|
static class | GaussianRectifiedLinearRBM.Builder |
Nested classes/interfaces inherited from interface NeuralNetwork: NeuralNetwork.LossFunction, NeuralNetwork.OptimizationAlgorithm
Fields inherited from the superclass: applySparsity, dist, doMask, dropOut, fanIn, firstTimeThrough, gradientListeners, hBias, hBiasAdaGrad, input, l2, lossFunction, momentum, nHidden, normalizeByInputRows, nVisible, optimizationAlgo, renderWeightsEveryNumEpochs, rng, sparsity, useAdaGrad, useRegularization, vBias, vBiasAdaGrad, W, wAdaGrad
Modifier and Type | Method and Description |
---|---|
org.jblas.DoubleMatrix | propDown(org.jblas.DoubleMatrix h): calculates the activation of the visible units, h * W + vBias. Computes the mean activation of the visibles given hidden unit configurations for a set of training examples. |
org.jblas.DoubleMatrix | propUp(org.jblas.DoubleMatrix v): calculates the activation of the hidden units, sigmoid(v * W + hBias). Computes the mean activation of the hiddens given visible unit configurations for a set of training examples. |
Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> | sampleHiddenGivenVisible(org.jblas.DoubleMatrix v): samples the hidden units, with rectified linear output. |
Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> | sampleVisibleGivenHidden(org.jblas.DoubleMatrix h): estimates the visible values given the hidden units. |
void | trainTillConvergence(double learningRate, int k, org.jblas.DoubleMatrix input): trains until convergence (in practice a local, not global, minimum). |
Methods inherited from class RBM: contrastiveDivergence, freeEnergy, getGradient, gibbhVh, lossFunction, reconstruct, train, trainTillConvergence
Other inherited methods: applyDropOutIfNecessary, applySparsity, clone, dropOut, epochDone, fanIn, getAdaGrad, getDist, getGradientListeners, gethBias, gethBiasAdaGrad, getInput, getL2, getLossFunction, getMomentum, getnHidden, getnVisible, getOptimizationAlgorithm, getReConstructionCrossEntropy, getRenderEpochs, getRng, getSparsity, getvBias, getVBiasAdaGrad, getW, hBiasMean, initWeights, jostleWeighMatrix, l2RegularizedCoefficient, load, lossFunction, merge, negativeLogLikelihood, negativeLoglikelihood, normalizeByInputRows, resetAdaGrad, setAdaGrad, setDist, setDropOut, setFanIn, setGradientListeners, sethBias, setHbiasAdaGrad, setInput, setL2, setLossFunction, setMomentum, setnHidden, setnVisible, setOptimizationAlgorithm, setRenderEpochs, setRng, setSparsity, setvBias, setVBiasAdaGrad, setW, squaredLoss, transpose, triggerGradientEvents, update, updateGradientAccordingToParams, write
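The "rectified linear units for output" behavior of sampleHiddenGivenVisible likely follows the noisy rectified linear (NReLU) formulation of Nair and Hinton, where a hidden sample is max(0, mu + noise) with Gaussian noise whose variance tracks sigmoid(mu). Whether this class uses exactly that noise model is an assumption; the sketch below uses plain arrays rather than the jblas API:

```java
import java.util.Random;

// Sketch of noisy rectified linear (NReLU) hidden sampling:
// h = max(0, mu + N(0, sigmoid(mu))), where mu is the pre-activation v * W + hBias.
// This noise model is a common choice for ReLU hidden units in RBMs; it is an
// assumption that this class implements exactly this formulation.
public class NReluSampleSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    static double sampleHidden(double preActivation, Random rng) {
        double noiseVariance = sigmoid(preActivation);          // variance shrinks for very negative mu
        double noise = rng.nextGaussian() * Math.sqrt(noiseVariance);
        return Math.max(0.0, preActivation + noise);            // rectification clamps at zero
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        System.out.println("sampled hidden activation: " + sampleHidden(1.5, rng));
    }
}
```

By construction the samples are never negative, which is what gives the hidden layer its rectified linear character.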
public void trainTillConvergence(double learningRate, int k, org.jblas.DoubleMatrix input)

Trains until convergence.

Overrides:
trainTillConvergence in class RBM

Parameters:
learningRate - the learning rate to use
k - the number of Gibbs sampling steps for contrastive divergence (CD-k)
input - the input to train on
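Contrastive divergence converges to a local optimum in practice, and a train-till-convergence loop typically stops once the reconstruction error stops improving. A hedged sketch of such a loop, where trainStep and reconstructionError are hypothetical stand-ins for one CD-k epoch and the inherited getReConstructionCrossEntropy:

```java
// Sketch of a train-till-convergence loop: run CD-k epochs until the improvement
// in reconstruction error falls below a tolerance. trainStep() and
// reconstructionError() are hypothetical stand-ins; here trainStep simulates an
// error that decays toward a plateau so the loop terminates.
public class ConvergenceLoopSketch {
    double error = 1.0;

    void trainStep(double learningRate, int k) {
        // Stand-in for one contrastive divergence epoch (CD-k).
        error = 0.1 + (error - 0.1) * 0.5;
    }

    double reconstructionError() { return error; }

    int trainTillConvergence(double learningRate, int k, double tol, int maxEpochs) {
        double prev = Double.MAX_VALUE;
        int epoch = 0;
        while (epoch < maxEpochs) {
            trainStep(learningRate, k);
            epoch++;
            double err = reconstructionError();
            if (prev - err < tol) break;   // stop when improvement stalls
            prev = err;
        }
        return epoch;
    }

    public static void main(String[] args) {
        ConvergenceLoopSketch sketch = new ConvergenceLoopSketch();
        int epochs = sketch.trainTillConvergence(0.01, 1, 1e-6, 1000);
        System.out.println("converged after " + epochs + " epochs");
    }
}
```

The maxEpochs cap guards against loss curves that never plateau within tolerance.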
public org.jblas.DoubleMatrix propUp(org.jblas.DoubleMatrix v)
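The propUp formula sigmoid(v * W + hBias) can be sketched with plain arrays (jblas would express this as a single matrix multiply; the scalar loops below just make the formula explicit):

```java
// Sketch of propUp: mean hidden activation sigmoid(v * W + hBias).
// Plain arrays stand in for org.jblas.DoubleMatrix; W is indexed [visible][hidden].
public class PropUpSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    static double[] propUp(double[] v, double[][] W, double[] hBias) {
        double[] h = new double[hBias.length];
        for (int j = 0; j < h.length; j++) {
            double pre = hBias[j];                 // start from the hidden bias
            for (int i = 0; i < v.length; i++)
                pre += v[i] * W[i][j];             // accumulate v * W
            h[j] = sigmoid(pre);                   // squash to a mean activation in (0, 1)
        }
        return h;
    }

    public static void main(String[] args) {
        double[][] W = {{0.5, -0.5}, {0.25, 0.75}};
        double[] hBias = {0.0, 0.1};
        double[] h = propUp(new double[]{1.0, 2.0}, W, hBias);
        System.out.println(h[0] + " " + h[1]);
    }
}
```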
public Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> sampleHiddenGivenVisible(org.jblas.DoubleMatrix v)

Rectified linear units for output.

Specified by:
sampleHiddenGivenVisible in interface NeuralNetwork

Overrides:
sampleHiddenGivenVisible in class RBM

Parameters:
v - the visible values

public org.jblas.DoubleMatrix propDown(org.jblas.DoubleMatrix h)
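For Gaussian visible units, the propDown mean h * W + vBias is linear, with no sigmoid, unlike the binary-visible case. A plain-array sketch (the [visible][hidden] orientation of W is an assumption; jblas would do this as one matrix multiply):

```java
// Sketch of propDown for Gaussian visible units: the mean activation is linear,
// h * W' + vBias, with no squashing function. W is indexed [visible][hidden].
public class PropDownSketch {
    static double[] propDown(double[] h, double[][] W, double[] vBias) {
        double[] v = new double[vBias.length];
        for (int i = 0; i < v.length; i++) {
            double pre = vBias[i];
            for (int j = 0; j < h.length; j++)
                pre += h[j] * W[i][j];   // accumulate h * W'
            v[i] = pre;                  // linear: Gaussian visibles are not squashed
        }
        return v;
    }

    public static void main(String[] args) {
        double[][] W = {{1.0, 0.0}, {0.0, 1.0}};
        double[] v = propDown(new double[]{2.0, -3.0}, W, new double[]{0.5, 0.5});
        System.out.println(v[0] + " " + v[1]); // prints "2.5 -2.5"
    }
}
```

Leaving the mean unsquashed is what lets Gaussian visible units model real-valued (rather than binary) inputs.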
public Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> sampleVisibleGivenHidden(org.jblas.DoubleMatrix h)

Guess the visible values given the hidden.

Specified by:
sampleVisibleGivenHidden in interface NeuralNetwork

Overrides:
sampleVisibleGivenHidden in class RBM

Parameters:
h - the hidden input

Copyright © 2014. All Rights Reserved.