public class RBM extends BaseNeuralNetwork
Modifier and Type | Class and Description
---|---
static class | RBM.Builder

Modifier and Type | Field and Description
---|---
protected NeuralNetworkOptimizer | optimizer

Fields inherited from class BaseNeuralNetwork:
dist, fanIn, hBias, input, l2, momentum, nHidden, nVisible, renderWeightsEveryNumEpochs, rng, sparsity, useRegularization, vBias, W
Constructor and Description |
---|
RBM() |
RBM(org.jblas.DoubleMatrix input, int n_visible, int n_hidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hbias, org.jblas.DoubleMatrix vbias, org.apache.commons.math3.random.RandomGenerator rng, double fanIn, org.apache.commons.math3.distribution.RealDistribution dist) |
RBM(int nVisible, int nHidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hbias, org.jblas.DoubleMatrix vbias, org.apache.commons.math3.random.RandomGenerator rng, double fanIn, org.apache.commons.math3.distribution.RealDistribution dist) |
Modifier and Type | Method and Description
---|---
void | contrastiveDivergence(double learningRate, int k, org.jblas.DoubleMatrix input) Contrastive divergence approximates the log likelihood around x1 (the input) with repeated sampling.
NeuralNetworkGradient | getGradient(Object[] params)
Pair<Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix>,Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix>> | gibbhVh(org.jblas.DoubleMatrix h) Gibbs sampling step: hidden ---> visible ---> hidden.
double | lossFunction(Object[] params) The loss function (cross entropy, reconstruction error, ...).
org.jblas.DoubleMatrix | propDown(org.jblas.DoubleMatrix h) Propagates the hidden layer down to the visible layer.
org.jblas.DoubleMatrix | propUp(org.jblas.DoubleMatrix v)
org.jblas.DoubleMatrix | reconstruct(org.jblas.DoubleMatrix v) Reconstructs the visible input.
Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> | sampleHiddenGivenVisible(org.jblas.DoubleMatrix v) Binomial sampling of the hidden values given the visible values.
Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> | sampleVGivenH(org.jblas.DoubleMatrix h) Guesses the visible values given the hidden values.
void | train(org.jblas.DoubleMatrix input, double lr, Object[] params) Trains one iteration of the network.
void | trainTillConvergence(double learningRate, int k, org.jblas.DoubleMatrix input) Trains until convergence.
void | trainTillConvergence(org.jblas.DoubleMatrix input, double lr, Object[] params) Trains until convergence; note that k is the first input in params.
Methods inherited from class BaseNeuralNetwork:
clone, fanIn, getDist, gethBias, getInput, getL2, getMomentum, getnHidden, getnVisible, getReConstructionCrossEntropy, getRenderEpochs, getRng, getSparsity, getvBias, getW, initWeights, jostleWeighMatrix, l2RegularizedCoefficient, load, lossFunction, merge, setDist, setFanIn, sethBias, setInput, setL2, setMomentum, setnHidden, setnVisible, setRenderEpochs, setRng, setSparsity, setvBias, setW, squaredLoss, transpose, update, write
protected NeuralNetworkOptimizer optimizer
public RBM()
public RBM(int nVisible, int nHidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hbias, org.jblas.DoubleMatrix vbias, org.apache.commons.math3.random.RandomGenerator rng, double fanIn, org.apache.commons.math3.distribution.RealDistribution dist)
public RBM(org.jblas.DoubleMatrix input, int n_visible, int n_hidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hbias, org.jblas.DoubleMatrix vbias, org.apache.commons.math3.random.RandomGenerator rng, double fanIn, org.apache.commons.math3.distribution.RealDistribution dist)
public void trainTillConvergence(double learningRate, int k, org.jblas.DoubleMatrix input)
Parameters:
learningRate - the learning rate to train at
k - the number of iterations to do
input - the input to train on

public void contrastiveDivergence(double learningRate, int k, org.jblas.DoubleMatrix input)
Contrastive divergence approximates the log likelihood around x1 (the input) with repeated sampling.
Parameters:
learningRate - the learning rate to scale by
k - the number of iterations to do
input - the input to sample from

public NeuralNetworkGradient getGradient(Object[] params)
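The contrastive-divergence update above can be sketched in a few lines. This is a minimal, self-contained illustration on plain double arrays, not this class's implementation: the real method operates on jblas DoubleMatrix objects and samples binomially, whereas this sketch keeps the states as probabilities (a mean-field simplification) so it runs deterministically; the class name CdSketch, the zero biases, and k fixed at 1 are all illustrative assumptions.

```java
// One mean-field CD-1 step for a small RBM with zero biases.
public class CdSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Updates W in place: W += lr * (v0 h0^T - v1 h1^T)
    public static double[][] cd1(double[][] W, double[] v0, double lr) {
        int nV = v0.length, nH = W[0].length;
        // Positive phase: h0 = sigmoid(v0 W)
        double[] h0 = new double[nH];
        for (int j = 0; j < nH; j++) {
            double s = 0;
            for (int i = 0; i < nV; i++) s += v0[i] * W[i][j];
            h0[j] = sigmoid(s);
        }
        // Down pass: v1 = sigmoid(h0 W^T)
        double[] v1 = new double[nV];
        for (int i = 0; i < nV; i++) {
            double s = 0;
            for (int j = 0; j < nH; j++) s += h0[j] * W[i][j];
            v1[i] = sigmoid(s);
        }
        // Up pass (negative phase): h1 = sigmoid(v1 W)
        double[] h1 = new double[nH];
        for (int j = 0; j < nH; j++) {
            double s = 0;
            for (int i = 0; i < nV; i++) s += v1[i] * W[i][j];
            h1[j] = sigmoid(s);
        }
        // Gradient step: positive statistics minus negative statistics
        for (int i = 0; i < nV; i++)
            for (int j = 0; j < nH; j++)
                W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j]);
        return W;
    }

    public static void main(String[] args) {
        double[][] W = {{0.1, -0.2}, {0.3, 0.0}};
        cd1(W, new double[]{1.0, 0.0}, 0.1);
        System.out.println(W[0][0] + " " + W[1][1]);
    }
}
```

Weights connected to an active visible unit are pushed up by the positive phase; the negative (reconstruction) phase pulls the rest down.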
public Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> sampleHiddenGivenVisible(org.jblas.DoubleMatrix v)
Binomial sampling of the hidden values given the visible values.
Parameters:
v - the visible values

public Pair<Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix>,Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix>> gibbhVh(org.jblas.DoubleMatrix h)
Gibbs sampling step: hidden ---> visible ---> hidden.
Parameters:
h - the hidden input

public Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> sampleVGivenH(org.jblas.DoubleMatrix h)
Guesses the visible values given the hidden values.
Parameters:
h - the hidden values

public org.jblas.DoubleMatrix propUp(org.jblas.DoubleMatrix v)
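For intuition, the upward propagation computes the conditional probabilities p(h_j = 1 | v) = sigmoid(v W + hBias). Below is a self-contained sketch on plain double arrays rather than the jblas DoubleMatrix this class uses; the class name PropUpSketch is an illustrative assumption, not part of this API.

```java
// Hidden-unit activation probabilities given a visible vector.
public class PropUpSketch {
    public static double[] propUp(double[] v, double[][] W, double[] hBias) {
        double[] h = new double[hBias.length];
        for (int j = 0; j < h.length; j++) {
            double pre = hBias[j];                       // start from the bias
            for (int i = 0; i < v.length; i++) pre += v[i] * W[i][j];
            h[j] = 1.0 / (1.0 + Math.exp(-pre));         // sigmoid activation
        }
        return h;
    }

    public static void main(String[] args) {
        double[] p = propUp(new double[]{1, 1},
                            new double[][]{{0.5, -0.5}, {0.5, 0.5}},
                            new double[]{0, 0});
        System.out.println(p[0] + " " + p[1]);
    }
}
```

sampleHiddenGivenVisible then draws binary hidden states from these probabilities, while propUp alone returns the probabilities themselves.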
public org.jblas.DoubleMatrix propDown(org.jblas.DoubleMatrix h)
Propagates the hidden layer down to the visible layer.
Parameters:
h - the hidden layer

public org.jblas.DoubleMatrix reconstruct(org.jblas.DoubleMatrix v)
Reconstructs the visible input.
Overrides:
reconstruct in class BaseNeuralNetwork
Parameters:
v - the visible input

public void trainTillConvergence(org.jblas.DoubleMatrix input, double lr, Object[] params)
Note: k is the first input in params.
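Reconstruction composes the two propagation directions: the visible input goes up to hidden probabilities and back down, i.e. roughly sigmoid(sigmoid(v W + hBias) W^T + vBias). The sketch below shows this on plain double arrays under a mean-field assumption (probabilities instead of sampled binary states); the class name ReconstructSketch is illustrative and not part of this API.

```java
// Mean-field reconstruction of a visible vector: v -> p(h|v) -> p(v|h).
public class ReconstructSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static double[] reconstruct(double[] v, double[][] W,
                                       double[] hBias, double[] vBias) {
        int nV = v.length, nH = hBias.length;
        double[] h = new double[nH];
        for (int j = 0; j < nH; j++) {          // up pass: sigmoid(v W + hBias)
            double pre = hBias[j];
            for (int i = 0; i < nV; i++) pre += v[i] * W[i][j];
            h[j] = sigmoid(pre);
        }
        double[] vOut = new double[nV];
        for (int i = 0; i < nV; i++) {          // down pass: sigmoid(h W^T + vBias)
            double pre = vBias[i];
            for (int j = 0; j < nH; j++) pre += h[j] * W[i][j];
            vOut[i] = sigmoid(pre);
        }
        return vOut;
    }

    public static void main(String[] args) {
        double[][] W = {{1, 0}, {0, 1}};
        double[] r = reconstruct(new double[]{1, 0}, W,
                                 new double[]{0, 0}, new double[]{0, 0});
        System.out.println(r[0] + " " + r[1]);
    }
}
```

Note that the same weight matrix is used in both directions (transposed on the way down), which is what makes an RBM's reconstruction tied rather than a separate decoder.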
public double lossFunction(Object[] params)
The loss function (cross entropy, reconstruction error, ...).
Overrides:
lossFunction in class BaseNeuralNetwork

public void train(org.jblas.DoubleMatrix input, double lr, Object[] params)
Trains one iteration of the network.
Specified by:
train in interface NeuralNetwork
Overrides:
train in class BaseNeuralNetwork
Parameters:
input - the input to train on
lr - the learning rate to train at
params - the extra params (k, corruption level, ...)

Copyright © 2014. All Rights Reserved.