public static class VariationalAutoencoder.Builder extends BasePretrainNetwork.Builder<VariationalAutoencoder.Builder>
Fields inherited from class BasePretrainNetwork.Builder: lossFunction, visibleBiasInit
Fields inherited from class FeedForwardLayer.Builder: nIn, nOut
Fields inherited from class BaseLayer.Builder: activationFn, biasInit, biasUpdater, dist, gradientNormalization, gradientNormalizationThreshold, iupdater, l1, l1Bias, l2, l2Bias, weightInit, weightNoise
Fields inherited from class Layer.Builder: allParamConstraints, biasConstraints, iDropout, layerName, weightConstraints
| Constructor and Description |
| --- |
| Builder() |
| Modifier and Type | Method and Description |
| --- | --- |
| VariationalAutoencoder | build() |
| VariationalAutoencoder.Builder | decoderLayerSizes(int... decoderLayerSizes) Size of the decoder layers, in units. |
| VariationalAutoencoder.Builder | encoderLayerSizes(int... encoderLayerSizes) Size of the encoder layers, in units. |
| VariationalAutoencoder.Builder | lossFunction(org.nd4j.linalg.activations.Activation outputActivationFn, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction) Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. |
| VariationalAutoencoder.Builder | lossFunction(org.nd4j.linalg.activations.IActivation outputActivationFn, org.nd4j.linalg.lossfunctions.ILossFunction lossFunction) Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. |
| VariationalAutoencoder.Builder | lossFunction(org.nd4j.linalg.activations.IActivation outputActivationFn, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction) Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. |
| VariationalAutoencoder.Builder | nOut(int nOut) Set the size of the VAE state Z. |
| VariationalAutoencoder.Builder | numSamples(int numSamples) Set the number of samples per data point (from VAE state Z) used when doing pretraining. |
| VariationalAutoencoder.Builder | pzxActivationFn(org.nd4j.linalg.activations.IActivation activationFunction) Activation function for the input to P(z\|data). Care should be taken with this, as some activation functions (relu, etc.) are not suitable due to being bounded in range [0, infinity). |
| VariationalAutoencoder.Builder | pzxActivationFunction(org.nd4j.linalg.activations.Activation activation) Activation function for the input to P(z\|data). Care should be taken with this, as some activation functions (relu, etc.) are not suitable due to being bounded in range [0, infinity). |
| VariationalAutoencoder.Builder | reconstructionDistribution(ReconstructionDistribution distribution) The reconstruction distribution for the data given the hidden state, i.e., P(data\|Z). This should be selected carefully based on the type of data being modelled. |
Methods inherited from class BasePretrainNetwork.Builder: lossFunction, visibleBiasInit
Methods inherited from class FeedForwardLayer.Builder: nIn, units
Methods inherited from class BaseLayer.Builder: activation, activation, biasInit, biasUpdater, dist, gradientNormalization, gradientNormalizationThreshold, l1, l1Bias, l2, l2Bias, updater, updater, weightInit, weightInit, weightNoise
Methods inherited from class Layer.Builder: constrainAllParameters, constrainBias, constrainWeights, dropOut, dropOut, name
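The builder methods summarized above compose into a single fluent configuration. The following is a minimal sketch, not taken from the library's own documentation: it assumes deeplearning4j-nn and nd4j are on the classpath, and the 784-unit input size and the encoder/decoder layer sizes are arbitrary illustration values.

```java
import org.deeplearning4j.nn.conf.layers.variational.GaussianReconstructionDistribution;
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.nd4j.linalg.activations.Activation;

public class VaeConfigExample {

    // Configure a VAE layer: 784 inputs, a 32-dimensional latent state Z,
    // two 256-unit encoder layers and mirrored decoder layers.
    static VariationalAutoencoder buildVae() {
        return new VariationalAutoencoder.Builder()
                .nIn(784)
                .nOut(32)                                   // size of P(Z|data)
                .encoderLayerSizes(256, 256)
                .decoderLayerSizes(256, 256)
                .activation(Activation.LEAKYRELU)           // hidden-layer activation (inherited setter)
                .pzxActivationFunction(Activation.IDENTITY) // unbounded, suitable for a Gaussian p(z|x)
                .reconstructionDistribution(
                        new GaussianReconstructionDistribution(Activation.TANH))
                .build();
    }

    public static void main(String[] args) {
        System.out.println(buildVae());
    }
}
```

Note the pairing: an IDENTITY pzx activation keeps p(z|x) unbounded, and the Gaussian reconstruction distribution with tanh output matches real-valued data scaled to [-1, 1].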
public VariationalAutoencoder.Builder encoderLayerSizes(int... encoderLayerSizes)

Size of the encoder layers, in units. Each encoder layer is functionally equivalent to a DenseLayer. Typically the number and size of the decoder layers (set via decoderLayerSizes(int...)) is similar to the encoder layers.

Parameters:
encoderLayerSizes - Size of each encoder layer in the variational autoencoder

public VariationalAutoencoder.Builder decoderLayerSizes(int... decoderLayerSizes)
Size of the decoder layers, in units. Each decoder layer is functionally equivalent to a DenseLayer. Typically the number and size of the decoder layers is similar to the encoder layers (set via encoderLayerSizes(int...)).

Parameters:
decoderLayerSizes - Size of each decoder layer in the variational autoencoder

public VariationalAutoencoder.Builder reconstructionDistribution(ReconstructionDistribution distribution)
The reconstruction distribution for the data given the hidden state, i.e., P(data|Z). This should be selected carefully based on the type of data being modelled. For example:
- GaussianReconstructionDistribution + {identity or tanh} for real-valued (Gaussian) data
- BernoulliReconstructionDistribution + sigmoid for binary-valued (0 or 1) data

Parameters:
distribution - Reconstruction distribution

public VariationalAutoencoder.Builder lossFunction(org.nd4j.linalg.activations.IActivation outputActivationFn, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)
Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution.

Parameters:
outputActivationFn - Activation function for the output/reconstruction
lossFunction - Loss function to use

public VariationalAutoencoder.Builder lossFunction(org.nd4j.linalg.activations.Activation outputActivationFn, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)
Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution.

Parameters:
outputActivationFn - Activation function for the output/reconstruction
lossFunction - Loss function to use

public VariationalAutoencoder.Builder lossFunction(org.nd4j.linalg.activations.IActivation outputActivationFn, org.nd4j.linalg.lossfunctions.ILossFunction lossFunction)
Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution.

Parameters:
outputActivationFn - Activation function for the output/reconstruction
lossFunction - Loss function to use

public VariationalAutoencoder.Builder pzxActivationFn(org.nd4j.linalg.activations.IActivation activationFunction)
Activation function for the input to P(z|data). Care should be taken with this, as some activation functions (relu, etc.) are not suitable due to being bounded in range [0, infinity).

Parameters:
activationFunction - Activation function for p(z|x)

public VariationalAutoencoder.Builder pzxActivationFunction(org.nd4j.linalg.activations.Activation activation)
Activation function for the input to P(z|data). Care should be taken with this, as some activation functions (relu, etc.) are not suitable due to being bounded in range [0, infinity).

Parameters:
activation - Activation function for p(z|x)

public VariationalAutoencoder.Builder nOut(int nOut)
Set the size of the VAE state Z.

Overrides:
nOut in class FeedForwardLayer.Builder<VariationalAutoencoder.Builder>

Parameters:
nOut - Size of P(Z|data) and output size

public VariationalAutoencoder.Builder numSamples(int numSamples)
Set the number of samples per data point (from VAE state Z) used when doing pretraining. This is parameter L from Kingma and Welling: "In our experiments we found that the number of samples L per datapoint can be set to 1 as long as the minibatch size M was large enough, e.g. M = 100."

Parameters:
numSamples - Number of samples per data point for pretraining

public VariationalAutoencoder build()
Overrides:
build in class Layer.Builder<VariationalAutoencoder.Builder>
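Where a ReconstructionDistribution does not fit, the lossFunction overloads detailed above drive the reconstruction with an ordinary loss function instead. The following is a hedged sketch, assuming deeplearning4j-nn and nd4j on the classpath; the sigmoid + cross-entropy (XENT) pairing and the layer sizes are illustrative choices for binary-valued data, not prescribed by the API.

```java
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class VaeLossFunctionExample {

    // Reconstruction via a plain loss function instead of a
    // ReconstructionDistribution: sigmoid output + cross entropy
    // is a common pairing for binary-valued (0 or 1) data.
    static VariationalAutoencoder buildVae() {
        return new VariationalAutoencoder.Builder()
                .nIn(784)
                .nOut(32)
                .encoderLayerSizes(128)
                .decoderLayerSizes(128)
                .lossFunction(Activation.SIGMOID, LossFunction.XENT)
                .numSamples(1)   // L = 1, per the Kingma and Welling remark above
                .build();
    }

    public static void main(String[] args) {
        System.out.println(buildVae());
    }
}
```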
Copyright © 2018. All rights reserved.