public class NeuralNetConfiguration extends Object implements Serializable, Cloneable
Modifier and Type | Class and Description
---|---
static class | NeuralNetConfiguration.ActivationType
static class | NeuralNetConfiguration.Builder
Modifier and Type | Field and Description
---|---
protected org.nd4j.linalg.api.activation.ActivationFunction | activationFunction
protected boolean | applySparsity
protected boolean | concatBiases
protected boolean | constrainGradientToUnitNorm
protected float | corruptionLevel
protected org.apache.commons.math3.distribution.RealDistribution | dist
protected float | dropOut
protected int | k
protected float | l2
protected org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction | lossFunction
protected float | momentum
protected Map<Integer,Float> | momentumAfter
protected int | nIn
protected int | nOut
protected int | numIterations
protected NeuralNetwork.OptimizationAlgorithm | optimizationAlgo
protected int | renderWeightsEveryNumEpochs
protected int | resetAdaGradIterations
protected org.apache.commons.math3.random.RandomGenerator | rng
protected long | seed
protected boolean | useRegularization
protected WeightInit | weightInit
Constructor and Description
---
NeuralNetConfiguration()
NeuralNetConfiguration(float sparsity, boolean useAdaGrad, float lr, int k, float corruptionLevel, int numIterations, float momentum, float l2, boolean useRegularization, Map<Integer,Float> momentumAfter, int resetAdaGradIterations, float dropOut, boolean applySparsity, WeightInit weightInit, NeuralNetwork.OptimizationAlgorithm optimizationAlgo, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction, int renderWeightsEveryNumEpochs, boolean concatBiases, boolean constrainGradientToUnitNorm, org.apache.commons.math3.random.RandomGenerator rng, org.apache.commons.math3.distribution.RealDistribution dist, long seed, int nIn, int nOut, org.nd4j.linalg.api.activation.ActivationFunction activationFunction, RBM.VisibleUnit visibleUnit, RBM.HiddenUnit hiddenUnit, NeuralNetConfiguration.ActivationType activationType, int[] weightShape, int[] filterSize, int numFeatureMaps, int[] stride, int[] featureMapSize, int numInFeatureMaps)
NeuralNetConfiguration(NeuralNetConfiguration neuralNetConfiguration)
Modifier and Type | Method and Description
---|---
NeuralNetConfiguration | clone() Creates and returns a copy of this object.
boolean | equals(Object o)
org.nd4j.linalg.api.activation.ActivationFunction | getActivationFunction()
NeuralNetConfiguration.ActivationType | getActivationType()
float | getCorruptionLevel()
org.apache.commons.math3.distribution.RealDistribution | getDist()
float | getDropOut()
int[] | getFeatureMapSize()
int[] | getFilterSize()
int | getFinetuneEpochs()
float | getFinetuneLearningRate()
RBM.HiddenUnit | getHiddenUnit()
int | getK()
float | getL2()
org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction | getLossFunction()
float | getLr()
float | getMomentum()
Map<Integer,Float> | getMomentumAfter()
int | getnIn()
int | getnOut()
int | getNumFeatureMaps()
int | getNumInFeatureMaps()
int | getNumIterations()
NeuralNetwork.OptimizationAlgorithm | getOptimizationAlgo()
int | getPretrainEpochs()
int | getRenderWeightsEveryNumEpochs()
int | getResetAdaGradIterations()
org.apache.commons.math3.random.RandomGenerator | getRng()
long | getSeed()
float | getSparsity()
int[] | getStride()
RBM.VisibleUnit | getVisibleUnit()
WeightInit | getWeightInit()
int[] | getWeightShape()
int | hashCode()
boolean | isApplySparsity()
boolean | isConcatBiases()
boolean | isConstrainGradientToUnitNorm()
boolean | isUseAdaGrad()
boolean | isUseRegularization()
void | setActivationFunction(org.nd4j.linalg.api.activation.ActivationFunction activationFunction)
void | setActivationType(NeuralNetConfiguration.ActivationType activationType)
void | setApplySparsity(boolean applySparsity)
void | setConcatBiases(boolean concatBiases)
void | setConstrainGradientToUnitNorm(boolean constrainGradientToUnitNorm)
void | setCorruptionLevel(float corruptionLevel)
void | setDist(org.apache.commons.math3.distribution.RealDistribution dist)
void | setDropOut(float dropOut)
void | setFeatureMapSize(int[] featureMapSize)
void | setFilterSize(int[] filterSize)
void | setFinetuneEpochs(int finetuneEpochs)
void | setFinetuneLearningRate(float finetuneLearningRate)
void | setHiddenUnit(RBM.HiddenUnit hiddenUnit)
void | setK(int k)
void | setL2(float l2)
void | setLossFunction(org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)
void | setLr(float lr)
void | setMomentum(float momentum)
void | setMomentumAfter(Map<Integer,Float> momentumAfter)
void | setnIn(int nIn)
void | setnOut(int nOut)
void | setNumFeatureMaps(int numFeatureMaps)
void | setNumInFeatureMaps(int numInFeatureMaps)
void | setNumIterations(int numIterations)
void | setOptimizationAlgo(NeuralNetwork.OptimizationAlgorithm optimizationAlgo)
void | setPretrainEpochs(int pretrainEpochs)
void | setPretrainLearningRate(float pretrainLearningRate)
void | setRenderWeightsEveryNumEpochs(int renderWeightsEveryNumEpochs)
void | setResetAdaGradIterations(int resetAdaGradIterations)
void | setRng(org.apache.commons.math3.random.RandomGenerator rng)
void | setSeed(long seed)
void | setSparsity(float sparsity)
void | setStride(int[] stride)
void | setUseAdaGrad(boolean useAdaGrad)
void | setUseRegularization(boolean useRegularization)
void | setVisibleUnit(RBM.VisibleUnit visibleUnit)
void | setWeightInit(WeightInit weightInit)
void | setWeightShape(int[] weightShape)
String | toString()
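As a minimal sketch of how this class is typically used, the snippet below configures an instance through the documented no-arg constructor, setters, and copy constructor. It assumes the deeplearning4j artifact (and its nd4j and commons-math3 dependencies) is on the classpath; the specific values, the `MersenneTwister` generator, and the idea of overriding one field per layer are illustrative choices, not library defaults.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.commons.math3.random.MersenneTwister;

// Configure a base layer via the setters listed above.
NeuralNetConfiguration conf = new NeuralNetConfiguration();
conf.setnIn(784);
conf.setnOut(100);
conf.setLr(1e-2f);
conf.setMomentum(0.5f);

// Raise momentum after iteration 10 via the momentumAfter schedule.
Map<Integer,Float> momentumAfter = new HashMap<>();
momentumAfter.put(10, 0.9f);
conf.setMomentumAfter(momentumAfter);

conf.setL2(1e-4f);
conf.setUseRegularization(true);
conf.setSeed(123L);
conf.setRng(new MersenneTwister(123L));

// The copy constructor duplicates an existing configuration, so a
// per-layer variant can override just the fields that differ.
NeuralNetConfiguration outputLayerConf = new NeuralNetConfiguration(conf);
outputLayerConf.setnOut(10);
```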
protected int k
protected float corruptionLevel
protected int numIterations
protected float momentum
protected float l2
protected boolean useRegularization
protected int resetAdaGradIterations
protected float dropOut
protected boolean applySparsity
protected WeightInit weightInit
protected NeuralNetwork.OptimizationAlgorithm optimizationAlgo
protected org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction
protected int renderWeightsEveryNumEpochs
protected boolean concatBiases
protected boolean constrainGradientToUnitNorm
protected long seed
protected transient org.apache.commons.math3.random.RandomGenerator rng
protected transient org.apache.commons.math3.distribution.RealDistribution dist
protected int nIn
protected int nOut
protected org.nd4j.linalg.api.activation.ActivationFunction activationFunction
public NeuralNetConfiguration()
public NeuralNetConfiguration(float sparsity, boolean useAdaGrad, float lr, int k, float corruptionLevel, int numIterations, float momentum, float l2, boolean useRegularization, Map<Integer,Float> momentumAfter, int resetAdaGradIterations, float dropOut, boolean applySparsity, WeightInit weightInit, NeuralNetwork.OptimizationAlgorithm optimizationAlgo, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction, int renderWeightsEveryNumEpochs, boolean concatBiases, boolean constrainGradientToUnitNorm, org.apache.commons.math3.random.RandomGenerator rng, org.apache.commons.math3.distribution.RealDistribution dist, long seed, int nIn, int nOut, org.nd4j.linalg.api.activation.ActivationFunction activationFunction, RBM.VisibleUnit visibleUnit, RBM.HiddenUnit hiddenUnit, NeuralNetConfiguration.ActivationType activationType, int[] weightShape, int[] filterSize, int numFeatureMaps, int[] stride, int[] featureMapSize, int numInFeatureMaps)
public NeuralNetConfiguration(NeuralNetConfiguration neuralNetConfiguration)
public int getNumInFeatureMaps()
public void setNumInFeatureMaps(int numInFeatureMaps)
public int[] getFeatureMapSize()
public void setFeatureMapSize(int[] featureMapSize)
public int[] getWeightShape()
public void setWeightShape(int[] weightShape)
public int getNumIterations()
public void setNumIterations(int numIterations)
public int getK()
public void setK(int k)
public float getCorruptionLevel()
public void setCorruptionLevel(float corruptionLevel)
public RBM.HiddenUnit getHiddenUnit()
public void setHiddenUnit(RBM.HiddenUnit hiddenUnit)
public RBM.VisibleUnit getVisibleUnit()
public void setVisibleUnit(RBM.VisibleUnit visibleUnit)
public org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction getLossFunction()
public void setLossFunction(org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)
public org.nd4j.linalg.api.activation.ActivationFunction getActivationFunction()
public void setActivationFunction(org.nd4j.linalg.api.activation.ActivationFunction activationFunction)
public int getnIn()
public void setnIn(int nIn)
public int getnOut()
public void setnOut(int nOut)
public float getSparsity()
public void setSparsity(float sparsity)
public boolean isUseAdaGrad()
public void setUseAdaGrad(boolean useAdaGrad)
public float getLr()
public void setLr(float lr)
public float getMomentum()
public void setMomentum(float momentum)
public float getL2()
public void setL2(float l2)
public boolean isUseRegularization()
public void setUseRegularization(boolean useRegularization)
public int getResetAdaGradIterations()
public void setResetAdaGradIterations(int resetAdaGradIterations)
public float getDropOut()
public void setDropOut(float dropOut)
public boolean isApplySparsity()
public void setApplySparsity(boolean applySparsity)
public WeightInit getWeightInit()
public void setWeightInit(WeightInit weightInit)
public NeuralNetwork.OptimizationAlgorithm getOptimizationAlgo()
public void setOptimizationAlgo(NeuralNetwork.OptimizationAlgorithm optimizationAlgo)
public int getRenderWeightsEveryNumEpochs()
public void setRenderWeightsEveryNumEpochs(int renderWeightsEveryNumEpochs)
public boolean isConcatBiases()
public void setConcatBiases(boolean concatBiases)
public boolean isConstrainGradientToUnitNorm()
public void setConstrainGradientToUnitNorm(boolean constrainGradientToUnitNorm)
public org.apache.commons.math3.random.RandomGenerator getRng()
public void setRng(org.apache.commons.math3.random.RandomGenerator rng)
public long getSeed()
public void setSeed(long seed)
public org.apache.commons.math3.distribution.RealDistribution getDist()
public void setDist(org.apache.commons.math3.distribution.RealDistribution dist)
public NeuralNetConfiguration.ActivationType getActivationType()
public void setActivationType(NeuralNetConfiguration.ActivationType activationType)
public int[] getFilterSize()
public void setFilterSize(int[] filterSize)
public int getNumFeatureMaps()
public void setNumFeatureMaps(int numFeatureMaps)
public int[] getStride()
public void setStride(int[] stride)
public int getPretrainEpochs()
public void setPretrainEpochs(int pretrainEpochs)
public void setPretrainLearningRate(float pretrainLearningRate)
public float getFinetuneLearningRate()
public void setFinetuneLearningRate(float finetuneLearningRate)
public int getFinetuneEpochs()
public void setFinetuneEpochs(int finetuneEpochs)
public NeuralNetConfiguration clone()
Creates and returns a copy of this object. The general intent is that, for any object x, the expression:

x.clone() != x

will be true, and that the expression:

x.clone().getClass() == x.getClass()

will be true, but these are not absolute requirements. While it is typically the case that:

x.clone().equals(x)

will be true, this is not an absolute requirement.

By convention, the returned object should be obtained by calling super.clone. If a class and all of its superclasses (except Object) obey this convention, it will be the case that x.clone().getClass() == x.getClass().

By convention, the object returned by this method should be independent of this object (which is being cloned). To achieve this independence, it may be necessary to modify one or more fields of the object returned by super.clone before returning it. Typically, this means copying any mutable objects that comprise the internal "deep structure" of the object being cloned and replacing the references to these objects with references to the copies. If a class contains only primitive fields or references to immutable objects, then it is usually the case that no fields in the object returned by super.clone need to be modified.

The method clone for class Object performs a specific cloning operation. First, if the class of this object does not implement the interface Cloneable, then a CloneNotSupportedException is thrown. Note that all arrays are considered to implement the interface Cloneable and that the return type of the clone method of an array type T[] is T[] where T is any reference or primitive type. Otherwise, this method creates a new instance of the class of this object and initializes all its fields with exactly the contents of the corresponding fields of this object, as if by assignment; the contents of the fields are not themselves cloned. Thus, this method performs a "shallow copy" of this object, not a "deep copy" operation.

The class Object does not itself implement the interface Cloneable, so calling the clone method on an object whose class is Object will result in throwing an exception at run time.

Overrides:
clone in class Object

Throws:
CloneNotSupportedException - if the object's class does not support the Cloneable interface. Subclasses that override the clone method can also throw this exception to indicate that an instance cannot be cloned.

See Also:
Cloneable
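The conventions above can be demonstrated with a small self-contained class. This is a hypothetical illustration of the general clone() contract, not the actual NeuralNetConfiguration implementation: super.clone() yields a shallow copy, so a mutable field such as an int[] (analogous to this class's stride or filterSize fields) must be copied manually to make the clone independent.

```java
// Hypothetical demo class illustrating the clone() contract.
public class CloneDemo implements Cloneable {
    int nIn = 784;           // primitive field: shallow copy suffices
    int[] stride = {2, 2};   // mutable "deep structure": must be copied

    @Override
    public CloneDemo clone() {
        try {
            CloneDemo copy = (CloneDemo) super.clone(); // shallow copy
            copy.stride = stride.clone();               // deep-copy the array
            return copy;
        } catch (CloneNotSupportedException e) {
            // Cannot happen: this class implements Cloneable.
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        CloneDemo x = new CloneDemo();
        CloneDemo y = x.clone();
        System.out.println(y != x);                       // true
        System.out.println(y.getClass() == x.getClass()); // true
        y.stride[0] = 99;       // mutating the copy's array...
        System.out.println(x.stride[0]);                  // 2 (original intact)
    }
}
```

Without the `copy.stride = stride.clone()` line, both objects would share one array and the final print would show 99 instead of 2.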
Copyright © 2014. All rights reserved.