public class DBN extends BaseMultiLayerNetwork
Modifier and Type | Class and Description
---|---
static class | DBN.Builder
Fields inherited from class BaseMultiLayerNetwork: errorTolerance, layers, learningRateUpdate
Constructor and Description
---
DBN()
DBN(int n_ins, int[] hidden_layer_sizes, int n_outs, int n_layers, org.apache.commons.math3.random.RandomGenerator rng)
DBN(int n_ins, int[] hidden_layer_sizes, int n_outs, int n_layers, org.apache.commons.math3.random.RandomGenerator rng, org.jblas.DoubleMatrix input, org.jblas.DoubleMatrix labels)
Modifier and Type | Method and Description
---|---
NeuralNetwork | createLayer(org.jblas.DoubleMatrix input, int nVisible, int nHidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hBias, org.jblas.DoubleMatrix vBias, org.apache.commons.math3.random.RandomGenerator rng, int index) Creates a layer depending on the index.
NeuralNetwork[] | createNetworkLayers(int numLayers)
void | pretrain(org.jblas.DoubleMatrix input, int k, double learningRate, int epochs) Unsupervised learning: runs contrastive divergence on each RBM layer in the network.
void | pretrain(int k, double learningRate, int epochs)
void | trainNetwork(org.jblas.DoubleMatrix input, org.jblas.DoubleMatrix labels, Object[] otherParams) Trains the network by running unsupervised pretraining followed by SGD fine-tuning.
Methods inherited from class BaseMultiLayerNetwork: applyTransforms, asDecoder, backProp, backPropStep, clone, encode, fanIn, feedForward, finetune, finetune, getActivation, getColumnMeans, getColumnStds, getColumnSums, getDist, getErrorTolerance, getFanIn, getHiddenLayerSizes, getInput, getL2, getLabels, getLayers, getLearningRateUpdate, getLogLayer, getMomentum, getnIns, getnLayers, getnOuts, getOptimizer, getRenderWeightsEveryNEpochs, getRng, getSigmoidLayers, getSparsity, getWeightTransforms, initializeLayers, initializeNetwork, isForceNumEpochs, isShouldBackProp, isShouldInit, isToDecode, isUseRegularization, load, loadFromFile, merge, negativeLogLikelihood, predict, reconstruct, reconstruct, setActivation, setColumnMeans, setColumnStds, setColumnSums, setDist, setErrorTolerance, setFanIn, setForceNumEpochs, setHiddenLayerSizes, setInput, setL2, setLabels, setLayers, setLearningRateUpdate, setLogLayer, setMomentum, setnIns, setnLayers, setnOuts, setOptimizer, setRenderWeightsEveryNEpochs, setRng, setShouldBackProp, setShouldInit, setSigmoidLayers, setSparsity, setToDecode, setUseRegularization, setWeightTransforms, update, write
public DBN()
public DBN(int n_ins, int[] hidden_layer_sizes, int n_outs, int n_layers, org.apache.commons.math3.random.RandomGenerator rng, org.jblas.DoubleMatrix input, org.jblas.DoubleMatrix labels)
public DBN(int n_ins, int[] hidden_layer_sizes, int n_outs, int n_layers, org.apache.commons.math3.random.RandomGenerator rng)
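For concreteness, a minimal construction sketch. The layer sizes and seed below are illustrative, not from this page; MersenneTwister is one commons-math3 implementation of RandomGenerator.

```java
import org.apache.commons.math3.random.MersenneTwister;
import org.apache.commons.math3.random.RandomGenerator;

// 784 inputs, two hidden layers of 500 and 250 units, 10 outputs.
// A fixed seed keeps runs reproducible.
RandomGenerator rng = new MersenneTwister(123);
DBN dbn = new DBN(784, new int[]{500, 250}, 10, 2, rng);
```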
public void trainNetwork(org.jblas.DoubleMatrix input, org.jblas.DoubleMatrix labels, Object[] otherParams)
Trains the network by running unsupervised pretraining followed by SGD fine-tuning.
Specified by: trainNetwork in class BaseMultiLayerNetwork
Parameters:
input - input examples
labels - output labels
otherParams - (int) k, (double) learningRate, (int) epochs; optional: (double) finetune learning rate, (int) finetune epochs
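A hedged sketch of packing otherParams per the contract above; the matrices and values are placeholders, and autoboxing supplies the expected Integer/Double wrappers.

```java
import org.jblas.DoubleMatrix;

DoubleMatrix input = DoubleMatrix.rand(100, 784);   // 100 examples, placeholder data
DoubleMatrix labels = DoubleMatrix.zeros(100, 10);  // one-hot labels, placeholder
// (int) k, (double) learningRate, (int) epochs, then the optional
// (double) finetune learning rate and (int) finetune epochs
Object[] otherParams = new Object[]{1, 0.01, 100, 0.1, 50};
dbn.trainNetwork(input, labels, otherParams);
```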
public void pretrain(org.jblas.DoubleMatrix input, int k, double learningRate, int epochs)
This unsupervised learning method runs contrastive divergence on each RBM layer in the network.
Parameters:
input - the input to train on
k - the number of Gibbs sampling steps to use for the RBM's contrastive divergence (CD-k). The higher k is, the more closely the sampling approximates the model distribution; k = 1 usually gives very good results and is the default in many situations.
learningRate - the learning rate to use
epochs - the number of epochs to train
public void pretrain(int k, double learningRate, int epochs)
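For example, CD-1 pretraining over 100 epochs might look like the following (values illustrative). The three-argument overload presumably uses input already supplied to the network, e.g. via the seven-argument constructor or setInput.

```java
// k = 1, learning rate 0.01, 100 epochs
dbn.pretrain(input, 1, 0.01, 100);
// Equivalent when the input has already been set on the network:
dbn.pretrain(1, 0.01, 100);
```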
public NeuralNetwork createLayer(org.jblas.DoubleMatrix input, int nVisible, int nHidden, org.jblas.DoubleMatrix W, org.jblas.DoubleMatrix hBias, org.jblas.DoubleMatrix vBias, org.apache.commons.math3.random.RandomGenerator rng, int index)
Description copied from class: BaseMultiLayerNetwork
Creates a layer depending on the index. The main reason this matters is for continuous variations such as the CDBN, where the first layer needs to be a CRBM for continuous inputs. Please be sure to call super.initializeNetwork to handle the passing of baseline parameters such as fan-in and rendering.
Specified by: createLayer in class BaseMultiLayerNetwork
Parameters:
input - the input to the layer
nVisible - the number of visible inputs
nHidden - the number of hidden units
W - the weight matrix
hBias - the hidden bias
vBias - the visible bias
rng - the RNG to use (this is important: a mis-referenced RNG will produce meaningless numbers)
index - the index of the layer
Returns: an RBM
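A sketch of honoring this contract in a subclass. LoggingDBN is a hypothetical class, not part of this library; it defers to DBN's own RBM-producing implementation and threads the single RandomGenerator through unchanged, since a mis-referenced RNG makes the results meaningless. Imports for DBN and NeuralNetwork are omitted because this page does not show their packages.

```java
import org.apache.commons.math3.random.RandomGenerator;
import org.jblas.DoubleMatrix;

// Hypothetical subclass: log each layer's shape, then delegate to DBN,
// passing the SAME RandomGenerator instance through to every layer.
public class LoggingDBN extends DBN {
    @Override
    public NeuralNetwork createLayer(DoubleMatrix input, int nVisible, int nHidden,
                                     DoubleMatrix W, DoubleMatrix hBias, DoubleMatrix vBias,
                                     RandomGenerator rng, int index) {
        System.out.println("layer " + index + ": " + nVisible + " visible -> " + nHidden + " hidden");
        return super.createLayer(input, nVisible, nHidden, W, hBias, vBias, rng, index);
    }
}
```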
public NeuralNetwork[] createNetworkLayers(int numLayers)
Specified by: createNetworkLayers in class BaseMultiLayerNetwork