public class DenoisingAutoEncoder extends BaseNeuralNetwork implements Serializable
Modifier and Type | Class and Description
---|---
static class | DenoisingAutoEncoder.Builder

Nested classes/interfaces inherited from interface NeuralNetwork:
NeuralNetwork.LossFunction, NeuralNetwork.OptimizationAlgorithm

Fields inherited from class BaseNeuralNetwork:
applySparsity, dist, doMask, dropOut, fanIn, firstTimeThrough, gradientListeners, hBias, hBiasAdaGrad, input, l2, lossFunction, momentum, nHidden, normalizeByInputRows, nVisible, optimizationAlgo, optimizer, renderWeightsEveryNumEpochs, rng, sparsity, useAdaGrad, useRegularization, vBias, vBiasAdaGrad, W, wAdaGrad
Constructor and Description
---
DenoisingAutoEncoder()
DenoisingAutoEncoder(org.jblas.DoubleMatrix input, int nVisible, int nHidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hbias, org.jblas.DoubleMatrix vbias, org.apache.commons.math3.random.RandomGenerator rng, double fanIn, org.apache.commons.math3.distribution.RealDistribution dist)
Modifier and Type | Method and Description
---|---
org.jblas.DoubleMatrix | getCorruptedInput(org.jblas.DoubleMatrix x, double corruptionLevel): Corrupts the given input by doing binomial sampling given the corruption level.
NeuralNetworkGradient | getGradient(Object[] params)
org.jblas.DoubleMatrix | getHiddenValues(org.jblas.DoubleMatrix x)
org.jblas.DoubleMatrix | getReconstructedInput(org.jblas.DoubleMatrix y)
double | lossFunction(Object[] params): The loss function (cross entropy, reconstruction error, ...).
org.jblas.DoubleMatrix | reconstruct(org.jblas.DoubleMatrix x): All neural networks are based on the idea of minimizing reconstruction error.
Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> | sampleHiddenGivenVisible(org.jblas.DoubleMatrix v): Sample hidden mean and sample given visible.
Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> | sampleVisibleGivenHidden(org.jblas.DoubleMatrix h): Sample visible mean and sample given hidden.
void | train(org.jblas.DoubleMatrix x, double lr, double corruptionLevel): Perform one iteration of training.
void | train(org.jblas.DoubleMatrix input, double lr, Object[] params): Train one iteration of the network.
void | trainTillConvergence(org.jblas.DoubleMatrix x, double lr, double corruptionLevel): Run a network optimizer.
void | trainTillConvergence(org.jblas.DoubleMatrix input, double lr, Object[] params): Trains via an optimization algorithm such as SGD or Conjugate Gradient.
Methods inherited from class BaseNeuralNetwork:
applyDropOutIfNecessary, applySparsity, clone, dropOut, epochDone, fanIn, getAdaGrad, getDist, getGradientListeners, gethBias, gethBiasAdaGrad, getInput, getL2, getLossFunction, getMomentum, getnHidden, getnVisible, getOptimizationAlgorithm, getReConstructionCrossEntropy, getRenderEpochs, getRng, getSparsity, getvBias, getVBiasAdaGrad, getW, hBiasMean, initWeights, jostleWeighMatrix, l2RegularizedCoefficient, load, lossFunction, merge, negativeLogLikelihood, negativeLoglikelihood, normalizeByInputRows, resetAdaGrad, setAdaGrad, setDist, setDropOut, setFanIn, setGradientListeners, sethBias, setHbiasAdaGrad, setInput, setL2, setLossFunction, setMomentum, setnHidden, setnVisible, setOptimizationAlgorithm, setRenderEpochs, setRng, setSparsity, setvBias, setVBiasAdaGrad, setW, squaredLoss, transpose, triggerGradientEvents, update, updateGradientAccordingToParams, write
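Before the method details, a concrete picture of the corruption step helps. `getCorruptedInput` is documented as binomial sampling at a given corruption level: each input element is kept with probability (1 - corruptionLevel) and zeroed otherwise. The sketch below reproduces that idea in plain Java, with `double[]` standing in for jblas's `DoubleMatrix` (the class and method here are illustrative, not the library's internals):

```java
import java.util.Arrays;
import java.util.Random;

public class CorruptionSketch {
    // Zero out each input element with probability corruptionLevel,
    // i.e. keep it with a per-element draw from binomial(1, 1 - corruptionLevel).
    static double[] getCorruptedInput(double[] x, double corruptionLevel, Random rng) {
        double[] corrupted = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            int keep = rng.nextDouble() < (1.0 - corruptionLevel) ? 1 : 0;
            corrupted[i] = keep * x[i];
        }
        return corrupted;
    }

    public static void main(String[] args) {
        double[] x = {0.2, 0.9, 0.4, 0.7};
        double[] noisy = getCorruptedInput(x, 0.3, new Random(42));
        System.out.println(Arrays.toString(noisy));
    }
}
```

The autoencoder is then trained to reconstruct the clean `x` from the corrupted version, which is what distinguishes it from a plain autoencoder.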
public DenoisingAutoEncoder()
public DenoisingAutoEncoder(org.jblas.DoubleMatrix input, int nVisible, int nHidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hbias, org.jblas.DoubleMatrix vbias, org.apache.commons.math3.random.RandomGenerator rng, double fanIn, org.apache.commons.math3.distribution.RealDistribution dist)
public org.jblas.DoubleMatrix getCorruptedInput(org.jblas.DoubleMatrix x, double corruptionLevel)
Corrupts the given input by doing binomial sampling given the corruption level.
Parameters:
x
- the input to corrupt
corruptionLevel
- the corruption value

public Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> sampleHiddenGivenVisible(org.jblas.DoubleMatrix v)
Sample hidden mean and sample given visible.
Specified by:
sampleHiddenGivenVisible in interface NeuralNetwork
Parameters:
v
- the visible input

public Pair<org.jblas.DoubleMatrix,org.jblas.DoubleMatrix> sampleVisibleGivenHidden(org.jblas.DoubleMatrix h)
Sample visible mean and sample given hidden.
Specified by:
sampleVisibleGivenHidden in interface NeuralNetwork
Parameters:
h
- the hidden input

public org.jblas.DoubleMatrix getHiddenValues(org.jblas.DoubleMatrix x)
public org.jblas.DoubleMatrix getReconstructedInput(org.jblas.DoubleMatrix y)
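`getHiddenValues` and `getReconstructedInput` are the encode and decode halves of the autoencoder. This page does not spell out the activations, but a common denoising-autoencoder formulation (assumed here, not confirmed by this page) is hidden = sigmoid(x * W + hBias) and, with tied weights, reconstruction = sigmoid(y * W^T + vBias); `sampleHiddenGivenVisible` would then binomial-sample around the hidden mean. A plain-Java sketch with `double[]` in place of `DoubleMatrix`:

```java
public class EncodeDecodeSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Encoder: hidden mean y = sigmoid(x * W + hBias); W is nVisible x nHidden.
    static double[] getHiddenValues(double[] x, double[][] W, double[] hBias) {
        double[] y = new double[hBias.length];
        for (int j = 0; j < y.length; j++) {
            double z = hBias[j];
            for (int i = 0; i < x.length; i++) z += x[i] * W[i][j];
            y[j] = sigmoid(z);
        }
        return y;
    }

    // Decoder (tied weights): reconstruction = sigmoid(y * W^T + vBias).
    static double[] getReconstructedInput(double[] y, double[][] W, double[] vBias) {
        double[] z = new double[vBias.length];
        for (int i = 0; i < z.length; i++) {
            double s = vBias[i];
            for (int j = 0; j < y.length; j++) s += y[j] * W[i][j];
            z[i] = sigmoid(s);
        }
        return z;
    }
}
```

Tying the decoder to the transpose of the encoder weight matrix halves the parameter count; it matches the single shared `W` field listed above, but whether this class actually ties weights is an assumption here.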
public void trainTillConvergence(org.jblas.DoubleMatrix x, double lr, double corruptionLevel)
Run a network optimizer.
Parameters:
x
- the input
lr
- the learning rate
corruptionLevel
- the corruption level

public void train(org.jblas.DoubleMatrix x, double lr, double corruptionLevel)
Perform one iteration of training.
Parameters:
x
- the input
lr
- the learning rate
corruptionLevel
- the corruption level to train with

public org.jblas.DoubleMatrix reconstruct(org.jblas.DoubleMatrix x)
All neural networks are based on the idea of minimizing reconstruction error.
Overrides:
reconstruct in class BaseNeuralNetwork
Parameters:
x
- the input to reconstruct

public void trainTillConvergence(org.jblas.DoubleMatrix input, double lr, Object[] params)
Trains via an optimization algorithm such as SGD or Conjugate Gradient.
Specified by:
trainTillConvergence in interface NeuralNetwork
Parameters:
input
- the input to train on
lr
- the learning rate to use
params
- the params (k, corruption level, max epochs, ...)

public double lossFunction(Object[] params)
The loss function (cross entropy, reconstruction error, ...).
Overrides:
lossFunction in class BaseNeuralNetwork
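`lossFunction` names cross entropy as one option. For inputs and reconstructions in (0, 1), reconstruction cross-entropy is -sum_i [x_i * log(z_i) + (1 - x_i) * log(1 - z_i)]; smaller means the reconstruction z is closer to the input x. A minimal sketch (illustrative, not the BaseNeuralNetwork implementation):

```java
public class LossSketch {
    // Reconstruction cross-entropy between input x and reconstruction z,
    // both assumed to have entries in (0, 1); lower is better.
    static double crossEntropy(double[] x, double[] z) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            sum += x[i] * Math.log(z[i]) + (1.0 - x[i]) * Math.log(1.0 - z[i]);
        }
        return -sum;
    }
}
```

For example, against x = {1, 0}, the reconstruction {0.9, 0.1} scores a lower (better) loss than the uninformative {0.5, 0.5}.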
public void train(org.jblas.DoubleMatrix input, double lr, Object[] params)
Train one iteration of the network.
Specified by:
train in interface NeuralNetwork
Overrides:
train in class BaseNeuralNetwork
Parameters:
input
- the input to train on
lr
- the learning rate to train at
params
- the extra params (k, corruption level, ...)

public NeuralNetworkGradient getGradient(Object[] params)
Specified by:
getGradient in interface NeuralNetwork
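Tying the pieces together: training repeatedly corrupts the input, encodes, decodes, and moves the parameters downhill on reconstruction error. The toy loop below does this for a one-unit tied-weight autoencoder, with numeric gradients standing in for `NeuralNetworkGradient` and plain gradient descent standing in for the SGD or Conjugate Gradient optimizers mentioned above (all names here are illustrative, not the library's):

```java
public class TrainSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Mean squared reconstruction error of a one-unit tied-weight autoencoder:
    // h = sigmoid(w*x + b) encodes, r = sigmoid(w*h + c) decodes.
    static double loss(double w, double b, double c, double[] xs) {
        double sum = 0.0;
        for (double x : xs) {
            double h = sigmoid(w * x + b);   // encode
            double r = sigmoid(w * h + c);   // decode (tied weight)
            sum += (r - x) * (r - x);
        }
        return sum / xs.length;
    }

    // One training step: central-difference gradients on (w, b, c), then descend.
    static double[] trainStep(double[] p, double lr, double[] xs) {
        double eps = 1e-5;
        double[] g = new double[3];
        for (int i = 0; i < 3; i++) {
            double[] plus = p.clone();
            double[] minus = p.clone();
            plus[i] += eps;
            minus[i] -= eps;
            g[i] = (loss(plus[0], plus[1], plus[2], xs)
                  - loss(minus[0], minus[1], minus[2], xs)) / (2 * eps);
        }
        return new double[] { p[0] - lr * g[0], p[1] - lr * g[1], p[2] - lr * g[2] };
    }

    public static void main(String[] args) {
        double[] xs = {0.1, 0.8, 0.3, 0.9};
        double[] p = {0.5, 0.0, 0.0};
        double before = loss(p[0], p[1], p[2], xs);
        for (int epoch = 0; epoch < 200; epoch++) p = trainStep(p, 0.1, xs);
        double after = loss(p[0], p[1], p[2], xs);
        System.out.println(before + " -> " + after);
    }
}
```

The real class adds the corruption step before encoding and stops via its optimizer's convergence criterion rather than a fixed epoch count.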
Copyright © 2014. All Rights Reserved.