public class NeuralNetConfiguration extends Object implements Serializable, Cloneable
Modifier and Type | Class and Description
---|---|
static class | NeuralNetConfiguration.Builder
static class | NeuralNetConfiguration.ListBuilder - Fluent interface for building a list of configurations
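The nested Builder and ListBuilder classes expose a fluent interface for assembling a configuration. The following is a minimal, self-contained sketch of that builder pattern; the class and method names are hypothetical (they only mirror fields documented on this page) and the real DL4J builder API may differ:

```java
// Illustrative sketch of the fluent-builder pattern used by
// NeuralNetConfiguration.Builder. Hypothetical names, not the DL4J API.
public class ConfigBuilderDemo {
    public static class Config {
        public final int nIn;
        public final int nOut;
        public final String activationFunction;
        public final double momentum;

        Config(int nIn, int nOut, String activationFunction, double momentum) {
            this.nIn = nIn;
            this.nOut = nOut;
            this.activationFunction = activationFunction;
            this.momentum = momentum;
        }
    }

    public static class Builder {
        private int nIn;
        private int nOut;
        private String activationFunction = "sigmoid"; // hypothetical default
        private double momentum = 0.0;

        // Each setter returns this, so calls can be chained.
        public Builder nIn(int n) { this.nIn = n; return this; }
        public Builder nOut(int n) { this.nOut = n; return this; }
        public Builder activationFunction(String a) { this.activationFunction = a; return this; }
        public Builder momentum(double m) { this.momentum = m; return this; }
        public Config build() { return new Config(nIn, nOut, activationFunction, momentum); }
    }

    public static Config demo() {
        // One readable chained expression replaces a long positional constructor call.
        return new Builder()
                .nIn(784)
                .nOut(10)
                .activationFunction("tanh")
                .momentum(0.9)
                .build();
    }

    public static void main(String[] args) {
        Config c = demo();
        System.out.println(c.nIn + " -> " + c.nOut + ", " + c.activationFunction);
    }
}
```

Compare this with the 35-argument constructors in the summary below: the builder makes each parameter explicit by name and lets the rest fall back to defaults.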
Modifier and Type | Field and Description
---|---|
protected String | activationFunction
protected boolean | applySparsity
protected int | batchSize
protected boolean | constrainGradientToUnitNorm
protected ConvolutionDownSampleLayer.ConvolutionType | convolutionType
protected double | corruptionLevel
protected Distribution | dist
protected double | dropOut
protected int | k
protected int | kernel
protected double | l2
protected Layer | layer
protected org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction | lossFunction
protected boolean | minimize
protected double | momentum
protected Map<Integer,Double> | momentumAfter
protected int | nIn
protected int | nOut
protected int | numIterations
protected int | numLineSearchIterations
protected OptimizationAlgorithm | optimizationAlgo
protected int | resetAdaGradIterations
protected Random | rng
protected StepFunction | stepFunction
protected boolean | useRegularization
protected List<String> | variables
protected WeightInit | weightInit
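The momentumAfter field above is a Map<Integer,Double> that associates an iteration number with a new momentum value, i.e. a momentum schedule. The sketch below shows one way such a schedule can be interpreted; the exact lookup semantics inside DL4J are an assumption here, not something this page confirms:

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of interpreting a momentumAfter-style schedule: each key is the
// iteration at which a new momentum value takes effect. The lookup
// semantics are an assumption for illustration, not the DL4J internals.
public class MomentumSchedule {
    private final TreeMap<Integer, Double> schedule = new TreeMap<>();
    private final double initialMomentum;

    public MomentumSchedule(double initialMomentum) {
        this.initialMomentum = initialMomentum;
    }

    // Record that `momentum` takes effect from `iteration` onwards.
    public void after(int iteration, double momentum) {
        schedule.put(iteration, momentum);
    }

    // Momentum in effect at `iteration`: the value for the largest key
    // <= iteration, or the initial momentum if no entry applies yet.
    public double momentumAt(int iteration) {
        Map.Entry<Integer, Double> e = schedule.floorEntry(iteration);
        return e == null ? initialMomentum : e.getValue();
    }

    public static void main(String[] args) {
        MomentumSchedule s = new MomentumSchedule(0.5);
        s.after(10, 0.9);
        s.after(100, 0.99);
        System.out.println(s.momentumAt(5));   // prints 0.5
        System.out.println(s.momentumAt(50));  // prints 0.9
    }
}
```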
Constructor and Description
---|
NeuralNetConfiguration()
NeuralNetConfiguration(double sparsity, boolean useAdaGrad, double lr, double corruptionLevel, int numIterations, double momentum, double l2, boolean useRegularization, Map<Integer,Double> momentumAfter, int resetAdaGradIterations, int numLineSearchIterations, double dropOut, boolean applySparsity, WeightInit weightInit, OptimizationAlgorithm optimizationAlgo, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction, boolean constrainGradientToUnitNorm, Random rng, Distribution dist, StepFunction stepFunction, Layer layer, List<String> variables, int nIn, int nOut, String activationFunction, RBM.VisibleUnit visibleUnit, RBM.HiddenUnit hiddenUnit, int k, int[] weightShape, int[] filterSize, int[] stride, int kernel, int batchSize, boolean minimize, ConvolutionDownSampleLayer.ConvolutionType convolutionType)
NeuralNetConfiguration(double sparsity, boolean useAdaGrad, double lr, int k, double corruptionLevel, int numIterations, double momentum, double l2, boolean useRegularization, Map<Integer,Double> momentumAfter, int resetAdaGradIterations, double dropOut, boolean applySparsity, WeightInit weightInit, OptimizationAlgorithm optimizationAlgo, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction, boolean constrainGradientToUnitNorm, Random rng, Distribution dist, int nIn, int nOut, String activationFunction, RBM.VisibleUnit visibleUnit, RBM.HiddenUnit hiddenUnit, int[] weightShape, int[] filterSize, int[] stride, int[] featureMapSize, int kernel, int batchSize, int numLineSearchIterations, boolean minimize, Layer layer, ConvolutionDownSampleLayer.ConvolutionType convolutionType, double l1)
NeuralNetConfiguration(NeuralNetConfiguration neuralNetConfiguration)
Modifier and Type | Method and Description
---|---|
void | addVariable(String variable)
NeuralNetConfiguration | clone() - Creates and returns a copy of this object.
boolean | equals(Object o)
static NeuralNetConfiguration | fromJson(String json) - Create a neural net configuration from json
String | getActivationFunction()
int | getBatchSize()
ConvolutionDownSampleLayer.ConvolutionType | getConvolutionType() - The convolution type to use with the convolution layer
double | getCorruptionLevel()
Distribution | getDist()
double | getDropOut()
int[] | getFeatureMapSize()
int[] | getFilterSize()
RBM.HiddenUnit | getHiddenUnit()
int | getK()
int | getKernel()
double | getL1()
double | getL2()
Layer | getLayer()
org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction | getLossFunction()
double | getLr()
double | getMomentum()
Map<Integer,Double> | getMomentumAfter()
int | getnIn()
int | getnOut()
int | getNumIterations()
int | getNumLineSearchIterations()
OptimizationAlgorithm | getOptimizationAlgo()
int | getResetAdaGradIterations()
Random | getRng()
double | getSparsity()
StepFunction | getStepFunction()
int[] | getStride()
List<String> | getVariables()
RBM.VisibleUnit | getVisibleUnit()
WeightInit | getWeightInit()
int[] | getWeightShape()
int | hashCode()
boolean | isApplySparsity()
boolean | isConstrainGradientToUnitNorm()
boolean | isMinimize()
boolean | isUseAdaGrad()
boolean | isUseRegularization()
static com.fasterxml.jackson.databind.ObjectMapper | mapper() - Object mapper for serialization of configurations
void | setActivationFunction(String activationFunction)
void | setApplySparsity(boolean applySparsity)
void | setBatchSize(int batchSize)
void | setConstrainGradientToUnitNorm(boolean constrainGradientToUnitNorm)
void | setConvolutionType(ConvolutionDownSampleLayer.ConvolutionType convolutionType)
void | setCorruptionLevel(double corruptionLevel)
void | setDist(Distribution dist)
void | setDropOut(double dropOut)
void | setFeatureMapSize(int[] featureMapSize)
void | setFilterSize(int[] filterSize)
void | setHiddenUnit(RBM.HiddenUnit hiddenUnit)
void | setK(int k)
void | setKernel(int kernel)
void | setL1(double l1)
void | setL2(double l2)
void | setLayer(Layer layer)
void | setLossFunction(org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)
void | setLr(double lr)
void | setMinimize(boolean minimize)
void | setMomentum(double momentum)
void | setMomentumAfter(Map<Integer,Double> momentumAfter)
void | setnIn(int nIn)
void | setnOut(int nOut)
void | setNumIterations(int numIterations)
void | setNumLineSearchIterations(int numLineSearchIterations)
void | setOptimizationAlgo(OptimizationAlgorithm optimizationAlgo)
void | setResetAdaGradIterations(int resetAdaGradIterations)
void | setRng(Random rng)
void | setSparsity(double sparsity)
void | setStepFunction(StepFunction stepFunction)
void | setStride(int[] stride)
void | setUseAdaGrad(boolean useAdaGrad)
void | setUseRegularization(boolean useRegularization)
void | setVariables(List<String> variables)
void | setVisibleUnit(RBM.VisibleUnit visibleUnit)
void | setWeightInit(WeightInit weightInit)
void | setWeightShape(int[] weightShape)
String | toJson() - Return this configuration as json
String | toString()
List<String> | variables()
protected double corruptionLevel
protected int numIterations
protected double momentum
protected double l2
protected boolean useRegularization
protected int resetAdaGradIterations
protected int numLineSearchIterations
protected double dropOut
protected boolean applySparsity
protected WeightInit weightInit
protected OptimizationAlgorithm optimizationAlgo
protected org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction
protected boolean constrainGradientToUnitNorm
protected Random rng
protected Distribution dist
protected StepFunction stepFunction
protected Layer layer
protected int nIn
protected int nOut
protected String activationFunction
protected int k
protected int kernel
protected int batchSize
protected boolean minimize
protected ConvolutionDownSampleLayer.ConvolutionType convolutionType
public NeuralNetConfiguration()
public NeuralNetConfiguration(double sparsity, boolean useAdaGrad, double lr, double corruptionLevel, int numIterations, double momentum, double l2, boolean useRegularization, Map<Integer,Double> momentumAfter, int resetAdaGradIterations, int numLineSearchIterations, double dropOut, boolean applySparsity, WeightInit weightInit, OptimizationAlgorithm optimizationAlgo, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction, boolean constrainGradientToUnitNorm, Random rng, Distribution dist, StepFunction stepFunction, Layer layer, List<String> variables, int nIn, int nOut, String activationFunction, RBM.VisibleUnit visibleUnit, RBM.HiddenUnit hiddenUnit, int k, int[] weightShape, int[] filterSize, int[] stride, int kernel, int batchSize, boolean minimize, ConvolutionDownSampleLayer.ConvolutionType convolutionType)
public NeuralNetConfiguration(double sparsity, boolean useAdaGrad, double lr, int k, double corruptionLevel, int numIterations, double momentum, double l2, boolean useRegularization, Map<Integer,Double> momentumAfter, int resetAdaGradIterations, double dropOut, boolean applySparsity, WeightInit weightInit, OptimizationAlgorithm optimizationAlgo, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction, boolean constrainGradientToUnitNorm, Random rng, Distribution dist, int nIn, int nOut, String activationFunction, RBM.VisibleUnit visibleUnit, RBM.HiddenUnit hiddenUnit, int[] weightShape, int[] filterSize, int[] stride, int[] featureMapSize, int kernel, int batchSize, int numLineSearchIterations, boolean minimize, Layer layer, ConvolutionDownSampleLayer.ConvolutionType convolutionType, double l1)
public NeuralNetConfiguration(NeuralNetConfiguration neuralNetConfiguration)
public ConvolutionDownSampleLayer.ConvolutionType getConvolutionType()
public void setConvolutionType(ConvolutionDownSampleLayer.ConvolutionType convolutionType)
public int getNumLineSearchIterations()
public void setNumLineSearchIterations(int numLineSearchIterations)
public int getBatchSize()
public void setBatchSize(int batchSize)
public int getKernel()
public void setKernel(int kernel)
public Layer getLayer()
public void setLayer(Layer layer)
public void addVariable(String variable)
public boolean isMinimize()
public void setMinimize(boolean minimize)
public StepFunction getStepFunction()
public void setStepFunction(StepFunction stepFunction)
public int[] getWeightShape()
public void setWeightShape(int[] weightShape)
public int getNumIterations()
public void setNumIterations(int numIterations)
public int getK()
public void setK(int k)
public double getCorruptionLevel()
public RBM.HiddenUnit getHiddenUnit()
public void setHiddenUnit(RBM.HiddenUnit hiddenUnit)
public RBM.VisibleUnit getVisibleUnit()
public void setVisibleUnit(RBM.VisibleUnit visibleUnit)
public org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction getLossFunction()
public void setLossFunction(org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)
public String getActivationFunction()
public void setActivationFunction(String activationFunction)
public int getnIn()
public void setnIn(int nIn)
public int getnOut()
public void setnOut(int nOut)
public double getSparsity()
public boolean isUseAdaGrad()
public void setUseAdaGrad(boolean useAdaGrad)
public double getLr()
public void setLr(double lr)
public double getMomentum()
public double getL2()
public void setL2(double l2)
public boolean isUseRegularization()
public void setUseRegularization(boolean useRegularization)
public int getResetAdaGradIterations()
public void setResetAdaGradIterations(int resetAdaGradIterations)
public double getDropOut()
public boolean isApplySparsity()
public void setApplySparsity(boolean applySparsity)
public WeightInit getWeightInit()
public void setWeightInit(WeightInit weightInit)
public OptimizationAlgorithm getOptimizationAlgo()
public void setOptimizationAlgo(OptimizationAlgorithm optimizationAlgo)
public boolean isConstrainGradientToUnitNorm()
public void setConstrainGradientToUnitNorm(boolean constrainGradientToUnitNorm)
public Random getRng()
public void setRng(Random rng)
public Distribution getDist()
public void setDist(Distribution dist)
public int[] getFilterSize()
public void setFilterSize(int[] filterSize)
public int[] getStride()
public void setStride(int[] stride)
public NeuralNetConfiguration clone()
Creates and returns a copy of this object. The general intent is that, for any object x, the expression:

x.clone() != x

will be true, and that the expression:

x.clone().getClass() == x.getClass()

will be true, but these are not absolute requirements. While it is typically the case that:

x.clone().equals(x)

will be true, this is not an absolute requirement.

By convention, the returned object should be obtained by calling super.clone. If a class and all of its superclasses (except Object) obey this convention, it will be the case that x.clone().getClass() == x.getClass().

By convention, the object returned by this method should be independent of this object (which is being cloned). To achieve this independence, it may be necessary to modify one or more fields of the object returned by super.clone before returning it. Typically, this means copying any mutable objects that comprise the internal "deep structure" of the object being cloned and replacing the references to these objects with references to the copies. If a class contains only primitive fields or references to immutable objects, then it is usually the case that no fields in the object returned by super.clone need to be modified.

The method clone for class Object performs a specific cloning operation. First, if the class of this object does not implement the interface Cloneable, then a CloneNotSupportedException is thrown. Note that all arrays are considered to implement the interface Cloneable and that the return type of the clone method of an array type T[] is T[] where T is any reference or primitive type. Otherwise, this method creates a new instance of the class of this object and initializes all its fields with exactly the contents of the corresponding fields of this object, as if by assignment; the contents of the fields are not themselves cloned. Thus, this method performs a "shallow copy" of this object, not a "deep copy" operation.

The class Object does not itself implement the interface Cloneable, so calling the clone method on an object whose class is Object will result in throwing an exception at run time.

Overrides: clone in class Object
Throws: CloneNotSupportedException - if the object's class does not support the Cloneable interface. Subclasses that override the clone method can also throw this exception to indicate that an instance cannot be cloned.
See Also: Cloneable
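The shallow-vs-deep distinction matters for a configuration object holding mutable state such as a List<String> of variables: after super.clone() alone, the original and the copy would share the same list. The following is a minimal, hypothetical sketch of the pattern (not the actual NeuralNetConfiguration implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Demonstrates why clone() must copy mutable fields: super.clone()
// produces a shallow copy whose fields reference the same objects.
// Hypothetical class for illustration only.
public class CloneDemo implements Cloneable {
    public List<String> variables = new ArrayList<>();

    @Override
    public CloneDemo clone() {
        try {
            CloneDemo copy = (CloneDemo) super.clone();        // shallow copy: list is shared
            copy.variables = new ArrayList<>(this.variables);  // replace with an independent copy
            return copy;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: this class implements Cloneable
        }
    }

    public static void main(String[] args) {
        CloneDemo original = new CloneDemo();
        original.variables.add("W");
        CloneDemo copy = original.clone();
        copy.variables.add("b");
        System.out.println(original.variables); // prints [W] -- unaffected by the copy
        System.out.println(copy.variables);     // prints [W, b]
    }
}
```

Without the ArrayList copy in clone(), mutating copy.variables would also mutate original.variables, violating the independence convention described above.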
public String toJson()

public static NeuralNetConfiguration fromJson(String json)
Parameters: json - the neural net configuration as json

public double getL1()
public void setL1(double l1)
public int[] getFeatureMapSize()
public void setFeatureMapSize(int[] featureMapSize)
public void setSparsity(double sparsity)
public void setCorruptionLevel(double corruptionLevel)
public void setMomentum(double momentum)
public void setDropOut(double dropOut)
public static com.fasterxml.jackson.databind.ObjectMapper mapper()
Copyright © 2015. All Rights Reserved.