public class NeuralNetConfiguration extends Object implements Serializable, Cloneable
Modifier and Type | Class and Description
---|---
static class | NeuralNetConfiguration.Builder
static interface | NeuralNetConfiguration.ConfOverride: Interface for a function to override builder configurations at a particular layer
static class | NeuralNetConfiguration.ListBuilder: Fluent interface for building a list of configurations
Modifier and Type | Field and Description
---|---
protected org.nd4j.linalg.api.activation.ActivationFunction | activationFunction
protected boolean | applySparsity
protected int | batchSize
protected boolean | concatBiases
protected boolean | constrainGradientToUnitNorm
protected double | corruptionLevel
protected org.apache.commons.math3.distribution.RealDistribution | dist
protected double | dropOut
protected List<String> | gradientList
protected int | k
protected int | kernel
protected double | l2
protected LayerFactory | layerFactory
protected Collection<IterationListener> | listeners
protected org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction | lossFunction
protected double | momentum
protected Map<Integer,Double> | momentumAfter
protected int | nIn
protected int | nOut
protected int | numIterations
protected OptimizationAlgorithm | optimizationAlgo
protected int | renderWeightsEveryNumEpochs
protected int | resetAdaGradIterations
protected org.apache.commons.math3.random.RandomGenerator | rng
protected long | seed
protected StepFunction | stepFunction
protected boolean | useRegularization
protected WeightInit | weightInit
Constructor and Description
---
NeuralNetConfiguration()
NeuralNetConfiguration(double sparsity, boolean useAdaGrad, double lr, int k, double corruptionLevel, int numIterations, double momentum, double l2, boolean useRegularization, Map<Integer,Double> momentumAfter, int resetAdaGradIterations, double dropOut, boolean applySparsity, WeightInit weightInit, OptimizationAlgorithm optimizationAlgo, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction, int renderWeightsEveryNumEpochs, boolean concatBiases, boolean constrainGradientToUnitNorm, org.apache.commons.math3.random.RandomGenerator rng, org.apache.commons.math3.distribution.RealDistribution dist, long seed, int nIn, int nOut, org.nd4j.linalg.api.activation.ActivationFunction activationFunction, RBM.VisibleUnit visibleUnit, RBM.HiddenUnit hiddenUnit, int[] weightShape, int[] filterSize, int[] stride, int[] featureMapSize, int kernel, int batchSize, Collection<IterationListener> listeners, LayerFactory layerFactory)
NeuralNetConfiguration(NeuralNetConfiguration neuralNetConfiguration)
Modifier and Type | Method and Description
---|---
NeuralNetConfiguration | clone(): Creates and returns a copy of this object.
boolean | equals(Object o)
static NeuralNetConfiguration | fromJson(String json): Create a neural net configuration from JSON
org.nd4j.linalg.api.activation.ActivationFunction | getActivationFunction()
int | getBatchSize()
double | getCorruptionLevel()
org.apache.commons.math3.distribution.RealDistribution | getDist()
double | getDropOut()
int[] | getFeatureMapSize()
int[] | getFilterSize()
List<String> | getGradientList()
RBM.HiddenUnit | getHiddenUnit()
int | getK()
int | getKernel()
double | getL2()
LayerFactory | getLayerFactory()
Collection<IterationListener> | getListeners()
org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction | getLossFunction()
double | getLr()
double | getMomentum()
Map<Integer,Double> | getMomentumAfter()
int | getnIn()
int | getnOut()
int | getNumFeatureMaps()
int | getNumIterations()
OptimizationAlgorithm | getOptimizationAlgo()
int | getRenderWeightIterations()
int | getResetAdaGradIterations()
org.apache.commons.math3.random.RandomGenerator | getRng()
long | getSeed()
double | getSparsity()
StepFunction | getStepFunction()
int[] | getStride()
RBM.VisibleUnit | getVisibleUnit()
WeightInit | getWeightInit()
int[] | getWeightShape()
int | hashCode()
boolean | isApplySparsity()
boolean | isConcatBiases()
boolean | isConstrainGradientToUnitNorm()
boolean | isUseAdaGrad()
boolean | isUseRegularization()
static com.fasterxml.jackson.databind.ObjectMapper | mapper(): Object mapper for serialization of configurations
void | setActivationFunction(org.nd4j.linalg.api.activation.ActivationFunction activationFunction)
void | setApplySparsity(boolean applySparsity)
void | setBatchSize(int batchSize)
static void | setClassifier(NeuralNetConfiguration conf): Set the configuration for classification
static void | setClassifier(NeuralNetConfiguration conf, boolean rows): Set the configuration for classification
void | setConcatBiases(boolean concatBiases)
void | setConstrainGradientToUnitNorm(boolean constrainGradientToUnitNorm)
void | setCorruptionLevel(double corruptionLevel)
void | setDist(org.apache.commons.math3.distribution.RealDistribution dist)
void | setDropOut(double dropOut)
void | setFeatureMapSize(int[] featureMapSize)
void | setFilterSize(int[] filterSize)
void | setGradientList(List<String> gradientList)
void | setHiddenUnit(RBM.HiddenUnit hiddenUnit)
void | setK(int k)
void | setKernel(int kernel)
void | setL2(double l2)
void | setLayerFactory(LayerFactory layerFactory)
void | setListeners(Collection<IterationListener> listeners)
void | setLossFunction(org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)
void | setLr(double lr)
void | setMomentum(double momentum)
void | setMomentumAfter(Map<Integer,Double> momentumAfter)
void | setnIn(int nIn)
void | setnOut(int nOut)
void | setNumFeatureMaps(int numFeatureMaps)
void | setNumIterations(int numIterations)
void | setOptimizationAlgo(OptimizationAlgorithm optimizationAlgo)
void | setRenderWeightIterations(int renderWeightsEveryNumEpochs)
void | setResetAdaGradIterations(int resetAdaGradIterations)
void | setRng(org.apache.commons.math3.random.RandomGenerator rng)
void | setSeed(long seed)
void | setSparsity(double sparsity)
void | setStepFunction(StepFunction stepFunction)
void | setStride(int[] stride)
void | setUseAdaGrad(boolean useAdaGrad)
void | setUseRegularization(boolean useRegularization)
void | setVisibleUnit(RBM.VisibleUnit visibleUnit)
void | setWeightInit(WeightInit weightInit)
void | setWeightShape(int[] weightShape)
String | toJson(): Return this configuration as JSON
String | toString()
protected double corruptionLevel
protected int numIterations
protected double momentum
protected double l2
protected boolean useRegularization
protected int resetAdaGradIterations
protected double dropOut
protected boolean applySparsity
protected WeightInit weightInit
protected OptimizationAlgorithm optimizationAlgo
protected org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction
protected int renderWeightsEveryNumEpochs
protected boolean concatBiases
protected boolean constrainGradientToUnitNorm
protected long seed
protected transient org.apache.commons.math3.random.RandomGenerator rng
protected transient org.apache.commons.math3.distribution.RealDistribution dist
protected transient Collection<IterationListener> listeners
protected transient StepFunction stepFunction
protected transient LayerFactory layerFactory
protected int nIn
protected int nOut
protected org.nd4j.linalg.api.activation.ActivationFunction activationFunction
protected int k
protected int kernel
protected int batchSize
public NeuralNetConfiguration()
public NeuralNetConfiguration(double sparsity, boolean useAdaGrad, double lr, int k, double corruptionLevel, int numIterations, double momentum, double l2, boolean useRegularization, Map<Integer,Double> momentumAfter, int resetAdaGradIterations, double dropOut, boolean applySparsity, WeightInit weightInit, OptimizationAlgorithm optimizationAlgo, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction, int renderWeightsEveryNumEpochs, boolean concatBiases, boolean constrainGradientToUnitNorm, org.apache.commons.math3.random.RandomGenerator rng, org.apache.commons.math3.distribution.RealDistribution dist, long seed, int nIn, int nOut, org.nd4j.linalg.api.activation.ActivationFunction activationFunction, RBM.VisibleUnit visibleUnit, RBM.HiddenUnit hiddenUnit, int[] weightShape, int[] filterSize, int[] stride, int[] featureMapSize, int kernel, int batchSize, Collection<IterationListener> listeners, LayerFactory layerFactory)
public NeuralNetConfiguration(NeuralNetConfiguration neuralNetConfiguration)
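As a sketch of typical use (assuming the deeplearning4j/nd4j classes documented above are on the classpath, and assuming the package `org.deeplearning4j.nn.conf`): a configuration can be populated through the setters listed below, then duplicated with the copy constructor. The hyperparameter values here are purely illustrative, not recommendations.

```java
import java.util.HashMap;
import java.util.Map;

import org.deeplearning4j.nn.conf.NeuralNetConfiguration;

public class ConfigSketch {
    public static void main(String[] args) {
        NeuralNetConfiguration conf = new NeuralNetConfiguration();
        conf.setnIn(784);            // input size
        conf.setnOut(10);            // output size
        conf.setNumIterations(100);
        conf.setLr(1e-2);
        conf.setMomentum(0.5);
        // Raise momentum after iteration 20 via the momentumAfter schedule.
        Map<Integer, Double> momentumAfter = new HashMap<>();
        momentumAfter.put(20, 0.9);
        conf.setMomentumAfter(momentumAfter);
        conf.setUseRegularization(true);
        conf.setL2(1e-4);

        // The copy constructor produces a new configuration
        // carrying the same settings.
        NeuralNetConfiguration copy = new NeuralNetConfiguration(conf);
        System.out.println(copy.getnIn());
    }
}
```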
public int getBatchSize()
public void setBatchSize(int batchSize)
public int getKernel()
public void setKernel(int kernel)
public LayerFactory getLayerFactory()
public void setLayerFactory(LayerFactory layerFactory)
public StepFunction getStepFunction()
public void setStepFunction(StepFunction stepFunction)
public int[] getFeatureMapSize()
public void setFeatureMapSize(int[] featureMapSize)
public int[] getWeightShape()
public void setWeightShape(int[] weightShape)
public int getNumIterations()
public void setNumIterations(int numIterations)
public int getK()
public void setK(int k)
public double getCorruptionLevel()
public RBM.HiddenUnit getHiddenUnit()
public void setHiddenUnit(RBM.HiddenUnit hiddenUnit)
public RBM.VisibleUnit getVisibleUnit()
public void setVisibleUnit(RBM.VisibleUnit visibleUnit)
public org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction getLossFunction()
public void setLossFunction(org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)
public org.nd4j.linalg.api.activation.ActivationFunction getActivationFunction()
public void setActivationFunction(org.nd4j.linalg.api.activation.ActivationFunction activationFunction)
public int getnIn()
public void setnIn(int nIn)
public int getnOut()
public void setnOut(int nOut)
public double getSparsity()
public boolean isUseAdaGrad()
public void setUseAdaGrad(boolean useAdaGrad)
public double getLr()
public void setLr(double lr)
public double getMomentum()
public double getL2()
public void setL2(double l2)
public boolean isUseRegularization()
public void setUseRegularization(boolean useRegularization)
public int getResetAdaGradIterations()
public void setResetAdaGradIterations(int resetAdaGradIterations)
public double getDropOut()
public boolean isApplySparsity()
public void setApplySparsity(boolean applySparsity)
public WeightInit getWeightInit()
public void setWeightInit(WeightInit weightInit)
public OptimizationAlgorithm getOptimizationAlgo()
public void setOptimizationAlgo(OptimizationAlgorithm optimizationAlgo)
public int getRenderWeightIterations()
public void setRenderWeightIterations(int renderWeightsEveryNumEpochs)
public boolean isConcatBiases()
public void setConcatBiases(boolean concatBiases)
public boolean isConstrainGradientToUnitNorm()
public void setConstrainGradientToUnitNorm(boolean constrainGradientToUnitNorm)
public org.apache.commons.math3.random.RandomGenerator getRng()
public void setRng(org.apache.commons.math3.random.RandomGenerator rng)
public long getSeed()
public void setSeed(long seed)
public org.apache.commons.math3.distribution.RealDistribution getDist()
public void setDist(org.apache.commons.math3.distribution.RealDistribution dist)
public int[] getFilterSize()
public void setFilterSize(int[] filterSize)
public int getNumFeatureMaps()
public void setNumFeatureMaps(int numFeatureMaps)
public int[] getStride()
public void setStride(int[] stride)
public static void setClassifier(NeuralNetConfiguration conf)
Set the configuration for classification.
conf - the configuration to set

public static void setClassifier(NeuralNetConfiguration conf, boolean rows)
Set the configuration for classification.
conf - the configuration to set
rows - whether to use softmax rows or softmax columns

public NeuralNetConfiguration clone()
Creates and returns a copy of this object. The general intent is that, for any object x, the expression:

x.clone() != x

will be true, and that the expression:

x.clone().getClass() == x.getClass()

will be true, but these are not absolute requirements. While it is typically the case that:

x.clone().equals(x)

will be true, this is not an absolute requirement.

By convention, the returned object should be obtained by calling super.clone. If a class and all of its superclasses (except Object) obey this convention, it will be the case that x.clone().getClass() == x.getClass().

By convention, the object returned by this method should be independent of this object (which is being cloned). To achieve this independence, it may be necessary to modify one or more fields of the object returned by super.clone before returning it. Typically, this means copying any mutable objects that comprise the internal "deep structure" of the object being cloned and replacing the references to these objects with references to the copies. If a class contains only primitive fields or references to immutable objects, then it is usually the case that no fields in the object returned by super.clone need to be modified.

The method clone for class Object performs a specific cloning operation. First, if the class of this object does not implement the interface Cloneable, then a CloneNotSupportedException is thrown. Note that all arrays are considered to implement the interface Cloneable and that the return type of the clone method of an array type T[] is T[] where T is any reference or primitive type. Otherwise, this method creates a new instance of the class of this object and initializes all its fields with exactly the contents of the corresponding fields of this object, as if by assignment; the contents of the fields are not themselves cloned. Thus, this method performs a "shallow copy" of this object, not a "deep copy" operation.

The class Object does not itself implement the interface Cloneable, so calling the clone method on an object whose class is Object will result in throwing an exception at run time.

Overrides:
clone in class Object
Throws:
CloneNotSupportedException - if the object's class does not support the Cloneable interface. Subclasses that override the clone method can also throw this exception to indicate that an instance cannot be cloned.
See Also:
Cloneable
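The contract above can be illustrated with a minimal self-contained class. This class is not part of the deeplearning4j API; it only demonstrates why a configuration that holds mutable fields (such as a gradient list) may need to deepen the shallow copy returned by super.clone.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of the Object.clone() contract.
public class CloneDemo implements Cloneable {
    List<String> gradientList = new ArrayList<>();

    @Override
    public CloneDemo clone() {
        try {
            // super.clone() performs a shallow copy: primitive fields are
            // copied, but reference fields still point at the same objects.
            CloneDemo copy = (CloneDemo) super.clone();
            // Replace the shared mutable list with an independent copy.
            copy.gradientList = new ArrayList<>(this.gradientList);
            return copy;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }

    public static void main(String[] args) {
        CloneDemo x = new CloneDemo();
        x.gradientList.add("W");
        CloneDemo y = x.clone();
        // The general contract: a distinct object of the same class.
        System.out.println(x != y);                       // true
        System.out.println(x.getClass() == y.getClass()); // true
        // Because the list was deep-copied, mutating the clone
        // leaves the original untouched.
        y.gradientList.add("b");
        System.out.println(x.gradientList.size());        // 1
    }
}
```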
public String toJson()
public static NeuralNetConfiguration fromJson(String json)
json - the JSON representation of the neural net configuration

public void setSparsity(double sparsity)
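toJson() and fromJson(String) give a simple persistence round trip. A sketch, again assuming the library is on the classpath and the package is `org.deeplearning4j.nn.conf`; note the transient fields listed above (rng, dist, listeners, stepFunction, layerFactory) would not survive serialization.

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;

// Sketch: serialize a configuration to JSON and restore it.
public class JsonRoundTrip {
    public static void main(String[] args) {
        NeuralNetConfiguration conf = new NeuralNetConfiguration();
        conf.setnIn(4);
        conf.setnOut(3);

        String json = conf.toJson();                   // serialize
        NeuralNetConfiguration restored =
                NeuralNetConfiguration.fromJson(json); // deserialize

        // The restored configuration should carry the same settings.
        System.out.println(restored.getnIn());
        System.out.println(restored.getnOut());
    }
}
```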
public void setCorruptionLevel(double corruptionLevel)
public void setMomentum(double momentum)
public void setDropOut(double dropOut)
public Collection<IterationListener> getListeners()
public void setListeners(Collection<IterationListener> listeners)
public static com.fasterxml.jackson.databind.ObjectMapper mapper()
Copyright © 2015. All rights reserved.