public class RBM extends BaseNeuralNetwork
Nested Class Summary

Modifier and Type | Class and Description |
---|---|
static class | RBM.Builder |
Nested classes/interfaces inherited from interface NeuralNetwork: NeuralNetwork.LossFunction, NeuralNetwork.OptimizationAlgorithm
Field Summary

Modifier and Type | Field and Description |
---|---|
protected NeuralNetworkOptimizer | optimizer |
Fields inherited from class BaseNeuralNetwork: applySparsity, dist, doMask, dropOut, fanIn, firstTimeThrough, gradientListeners, hBias, hBiasAdaGrad, input, l2, lossFunction, momentum, nHidden, normalizeByInputRows, nVisible, optimizationAlgo, renderWeightsEveryNumEpochs, rng, sparsity, useAdaGrad, useRegularization, vBias, vBiasAdaGrad, W, wAdaGrad
Constructor Summary

Modifier | Constructor and Description |
---|---|
protected | RBM() |
protected | RBM(org.jblas.DoubleMatrix input, int nVisible, int n_hidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hbias, org.jblas.DoubleMatrix vBias, org.apache.commons.math3.random.RandomGenerator rng, double fanIn, org.apache.commons.math3.distribution.RealDistribution dist) |
Method Summary

Modifier and Type | Method and Description |
---|---|
void | contrastiveDivergence(double learningRate, int k, org.jblas.DoubleMatrix input) Contrastive divergence approximates the log likelihood around x1 (the input) with repeated sampling. |
double | freeEnergy(org.jblas.DoubleMatrix visibleSample) Free energy for an RBM; lower energy models have higher probability of activations. |
NeuralNetworkGradient | getGradient(Object[] params) |
Pair<Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix>,Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix>> | gibbhVh(org.jblas.DoubleMatrix h) One Gibbs sampling step: hidden -> visible -> hidden. |
double | lossFunction(Object[] params) The loss function (cross entropy, reconstruction error, ...). |
org.jblas.DoubleMatrix | propDown(org.jblas.DoubleMatrix h) Calculates the activation of the visible layer: sigmoid(h * W^T + vBias). |
org.jblas.DoubleMatrix | propUp(org.jblas.DoubleMatrix v) Calculates the activation of the hidden layer: sigmoid(v * W + hBias). |
org.jblas.DoubleMatrix | reconstruct(org.jblas.DoubleMatrix v) Reconstructs the visible input. |
Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> | sampleHiddenGivenVisible(org.jblas.DoubleMatrix v) Binomial sampling of the hidden values given the visible values. |
Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> | sampleVisibleGivenHidden(org.jblas.DoubleMatrix h) Guess the visible values given the hidden. |
void | train(org.jblas.DoubleMatrix input, double lr, Object[] params) Train one iteration of the network. |
void | trainTillConvergence(double learningRate, int k, org.jblas.DoubleMatrix input) Trains with contrastive divergence until convergence. |
void | trainTillConvergence(org.jblas.DoubleMatrix input, double lr, Object[] params) Trains until convergence; k is the first element of params. |
Methods inherited from class BaseNeuralNetwork: applyDropOutIfNecessary, applySparsity, clone, dropOut, epochDone, fanIn, getAdaGrad, getDist, getGradientListeners, gethBias, gethBiasAdaGrad, getInput, getL2, getLossFunction, getMomentum, getnHidden, getnVisible, getOptimizationAlgorithm, getReConstructionCrossEntropy, getRenderEpochs, getRng, getSparsity, getvBias, getVBiasAdaGrad, getW, hBiasMean, initWeights, jostleWeighMatrix, l2RegularizedCoefficient, load, lossFunction, merge, negativeLogLikelihood, negativeLoglikelihood, normalizeByInputRows, resetAdaGrad, setAdaGrad, setDist, setDropOut, setFanIn, setGradientListeners, sethBias, setHbiasAdaGrad, setInput, setL2, setLossFunction, setMomentum, setnHidden, setnVisible, setOptimizationAlgorithm, setRenderEpochs, setRng, setSparsity, setvBias, setVBiasAdaGrad, setW, squaredLoss, transpose, triggerGradientEvents, update, updateGradientAccordingToParams, write
Field Detail

protected NeuralNetworkOptimizer optimizer
Constructor Detail

protected RBM()
protected RBM(org.jblas.DoubleMatrix input, int nVisible, int n_hidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hbias, org.jblas.DoubleMatrix vBias, org.apache.commons.math3.random.RandomGenerator rng, double fanIn, org.apache.commons.math3.distribution.RealDistribution dist)
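Both constructors are protected; instances are typically assembled through the nested RBM.Builder. A minimal construction sketch follows; the Builder method names used here (numberOfVisible, numHidden, withRandom) are assumptions, so consult RBM.Builder for the actual API.

```java
import org.apache.commons.math3.random.MersenneTwister;

// Hypothetical Builder usage; method names are assumptions.
RBM rbm = new RBM.Builder()
        .numberOfVisible(784)   // e.g. 28x28 binary pixels
        .numHidden(500)
        .withRandom(new MersenneTwister(123))
        .build();
```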
Method Detail

public void trainTillConvergence(double learningRate, int k, org.jblas.DoubleMatrix input)
Trains with contrastive divergence until convergence.
Parameters:
learningRate - the learning rate to use
k - the number of Gibbs steps to run per update (CD-k)
input - the input to train on
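For example (hypothetical values; rbm and input are assumed to exist already):

```java
// Run CD-1 updates at learning rate 0.01 until convergence;
// input holds one training example per row.
rbm.trainTillConvergence(0.01, 1, input);
```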
public void contrastiveDivergence(double learningRate, int k, org.jblas.DoubleMatrix input)
Contrastive divergence approximates the log likelihood around x1 (the input) with repeated sampling.
Parameters:
learningRate - the learning rate to scale by
k - the number of Gibbs steps to run
input - the input to sample from
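A conceptual sketch of the CD-k sampling loop this method is built around, composed from the sampling methods documented below. It mirrors the documented behavior rather than the exact implementation, and assumes Pair exposes getFirst()/getSecond():

```java
// Positive phase: sample hidden units from the data.
Pair<DoubleMatrix, DoubleMatrix> h0 = rbm.sampleHiddenGivenVisible(input);
DoubleMatrix hSample = h0.getSecond();

// Negative phase: run the Gibbs chain hidden -> visible -> hidden, k times.
for (int step = 0; step < k; step++) {
    Pair<Pair<DoubleMatrix, DoubleMatrix>, Pair<DoubleMatrix, DoubleMatrix>> chain =
            rbm.gibbhVh(hSample);
    hSample = chain.getSecond().getSecond();
}
// The weight update then contrasts the positive- and negative-phase
// statistics, scaled by learningRate.
```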
public NeuralNetworkGradient getGradient(Object[] params)

public double freeEnergy(org.jblas.DoubleMatrix visibleSample)
Free energy for an RBM. Lower energy models have higher probability of activations.
Parameters:
visibleSample - the sample to test on
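The standard RBM free energy is F(v) = -v · vBias - Σ_j log(1 + exp(vW + hBias)_j). A reference sketch in jblas under that standard definition; not necessarily this class's exact code:

```java
import org.jblas.DoubleMatrix;
import org.jblas.MatrixFunctions;

class FreeEnergySketch {
    // F(v) = -v . vBias - sum_j log(1 + exp(v W + hBias)_j)
    static double freeEnergy(DoubleMatrix v, DoubleMatrix W,
                             DoubleMatrix hBias, DoubleMatrix vBias) {
        DoubleMatrix preActivation = v.mmul(W).addRowVector(hBias);
        double hiddenTerm =
                MatrixFunctions.log(MatrixFunctions.exp(preActivation).add(1.0)).sum();
        return -v.dot(vBias) - hiddenTerm; // lower energy = more probable
    }
}
```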
public Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> sampleHiddenGivenVisible(org.jblas.DoubleMatrix v)
Binomial sampling of the hidden values given the visible values.
Parameters:
v - the visible values
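A sketch of what binomial sampling means here, assuming the returned Pair holds (mean, sample) and that propUp supplies the activation probabilities:

```java
// Bernoulli-sample each hidden unit from its activation probability.
DoubleMatrix hMean = rbm.propUp(v);
DoubleMatrix hSample = new DoubleMatrix(hMean.rows, hMean.columns);
for (int i = 0; i < hMean.length; i++)
    hSample.put(i, rng.nextDouble() < hMean.get(i) ? 1.0 : 0.0);
// Presumably returned as Pair(hMean, hSample).
```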
public Pair<Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix>,Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix>> gibbhVh(org.jblas.DoubleMatrix h)
One Gibbs sampling step: hidden -> visible -> hidden.
Parameters:
h - the hidden input
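In effect this composes the two sampling methods; a sketch, assuming Pair exposes getFirst()/getSecond():

```java
// hidden -> visible: (v1Mean, v1Sample)
Pair<DoubleMatrix, DoubleMatrix> vPair = rbm.sampleVisibleGivenHidden(h);
// visible -> hidden: (h1Mean, h1Sample)
Pair<DoubleMatrix, DoubleMatrix> hPair = rbm.sampleHiddenGivenVisible(vPair.getSecond());
// Presumably returned together as Pair(vPair, hPair).
```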
public Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> sampleVisibleGivenHidden(org.jblas.DoubleMatrix h)
Guess the visible values given the hidden.
Parameters:
h - the hidden values
public org.jblas.DoubleMatrix propUp(org.jblas.DoubleMatrix v)
Calculates the activation of the hidden layer: sigmoid(v * W + hBias).
Parameters:
v - the visible layer
public org.jblas.DoubleMatrix propDown(org.jblas.DoubleMatrix h)
Calculates the activation of the visible layer: sigmoid(h * W^T + vBias).
Parameters:
h - the hidden layer
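Both propagation directions in one self-contained jblas sketch; W is assumed to have shape nVisible x nHidden, which is why the downward pass needs the transpose:

```java
import org.jblas.DoubleMatrix;
import static org.jblas.MatrixFunctions.exp;

class PropSketch {
    static DoubleMatrix sigmoid(DoubleMatrix x) {
        return DoubleMatrix.ones(x.rows, x.columns).div(exp(x.neg()).add(1.0));
    }
    // Hidden activations given visible: sigmoid(v W + hBias)
    static DoubleMatrix propUp(DoubleMatrix v, DoubleMatrix W, DoubleMatrix hBias) {
        return sigmoid(v.mmul(W).addRowVector(hBias));
    }
    // Visible activations given hidden: sigmoid(h W^T + vBias)
    static DoubleMatrix propDown(DoubleMatrix h, DoubleMatrix W, DoubleMatrix vBias) {
        return sigmoid(h.mmul(W.transpose()).addRowVector(vBias));
    }
}
```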
public org.jblas.DoubleMatrix reconstruct(org.jblas.DoubleMatrix v)
Reconstructs the visible input.
Overrides:
reconstruct in class BaseNeuralNetwork
Parameters:
v - the visible input
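Conceptually, reconstruction corresponds to propDown(propUp(v)); a quick quality check (rbm and v assumed to exist):

```java
DoubleMatrix reconstruction = rbm.reconstruct(v);
// L2 reconstruction error as a rough training signal.
double err = reconstruction.sub(v).norm2();
System.out.println("reconstruction L2 error: " + err);
```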
public void trainTillConvergence(org.jblas.DoubleMatrix input, double lr, Object[] params)
Trains until convergence. Note: k is the first element of params.
Parameters:
input - the input to train on
lr - the learning rate to use
params - the params (k, corruption level, max epochs, ...)
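Since k is the first element of params, a call might look like this (hypothetical values):

```java
// Equivalent in spirit to trainTillConvergence(0.01, 1, input):
// k = 1 is passed as the first element of the params array.
rbm.trainTillConvergence(input, 0.01, new Object[]{1});
```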
public double lossFunction(Object[] params)
Description copied from class: BaseNeuralNetwork
The loss function (cross entropy, reconstruction error, ...)
Overrides:
lossFunction in class BaseNeuralNetwork
public void train(org.jblas.DoubleMatrix input, double lr, Object[] params)
Description copied from class: BaseNeuralNetwork
Train one iteration of the network.
Specified by:
train in interface NeuralNetwork
Overrides:
train in class BaseNeuralNetwork
Parameters:
input - the input to train on
lr - the learning rate to train at
params - the extra params (k, corruption level, ...)