public static class NeuralNetConfiguration.Builder extends Object implements Cloneable
Modifier and Type | Field and Description |
---|---|
protected org.nd4j.linalg.activations.IActivation |
activationFn |
protected double |
adamMeanDecay
Deprecated.
|
protected double |
adamVarDecay
Deprecated.
|
protected double |
biasInit |
protected double |
biasLearningRate |
protected CacheMode |
cacheMode |
protected ConvolutionMode |
convolutionMode |
protected Distribution |
dist |
protected double |
dropOut |
protected double |
epsilon
Deprecated.
|
protected GradientNormalization |
gradientNormalization |
protected double |
gradientNormalizationThreshold |
protected WorkspaceMode |
inferenceWorkspaceMode |
protected org.nd4j.linalg.learning.config.IUpdater |
iUpdater |
protected double |
l1 |
protected double |
l1Bias |
protected double |
l2 |
protected double |
l2Bias |
protected Layer |
layer |
protected double |
leakyreluAlpha
Deprecated.
|
protected double |
learningRate |
protected LearningRatePolicy |
learningRatePolicy |
protected Map<Integer,Double> |
learningRateSchedule |
protected double |
lrPolicyDecayRate |
protected double |
lrPolicyPower |
protected double |
lrPolicySteps |
protected double |
lrScoreBasedDecay |
protected int |
maxNumLineSearchIterations |
protected boolean |
miniBatch |
protected boolean |
minimize |
protected double |
momentum
Deprecated.
|
protected Map<Integer,Double> |
momentumSchedule
Deprecated.
|
protected int |
numIterations |
protected OptimizationAlgorithm |
optimizationAlgo |
protected boolean |
pretrain |
protected double |
rho
Deprecated.
|
protected double |
rmsDecay
Deprecated.
|
protected long |
seed |
protected StepFunction |
stepFunction |
protected WorkspaceMode |
trainingWorkspaceMode |
protected Updater |
updater
Deprecated.
|
protected boolean |
useDropConnect |
protected boolean |
useRegularization |
protected WeightInit |
weightInit |
Constructor and Description |
---|
Builder() |
Builder(NeuralNetConfiguration newConf) |
Modifier and Type | Method and Description |
---|---|
NeuralNetConfiguration.Builder |
activation(org.nd4j.linalg.activations.Activation activation)
Activation function / neuron non-linearity
|
NeuralNetConfiguration.Builder |
activation(org.nd4j.linalg.activations.IActivation activationFunction)
Activation function / neuron non-linearity
|
NeuralNetConfiguration.Builder |
activation(String activationFunction)
Deprecated.
Use
activation(Activation) or
activation(IActivation) |
NeuralNetConfiguration.Builder |
adamMeanDecay(double adamMeanDecay)
Deprecated.
Use
.updater(Adam.builder().beta1(adamMeanDecay).build()) instead |
NeuralNetConfiguration.Builder |
adamVarDecay(double adamVarDecay)
Deprecated.
Use
.updater(Adam.builder().beta2(adamVarDecay).build()) instead |
NeuralNetConfiguration.Builder |
biasInit(double biasInit)
Constant for bias initialization.
|
NeuralNetConfiguration.Builder |
biasLearningRate(double biasLearningRate)
Bias learning rate.
|
NeuralNetConfiguration |
build()
Return a configuration based on this builder
|
NeuralNetConfiguration.Builder |
cacheMode(CacheMode cacheMode)
This method defines how/if preOutput cache is handled:
NONE: cache disabled (default value)
HOST: Host memory will be used
DEVICE: GPU memory will be used (on CPU backends effect will be the same as for HOST)
|
NeuralNetConfiguration.Builder |
clone() |
NeuralNetConfiguration.Builder |
convolutionMode(ConvolutionMode convolutionMode) |
NeuralNetConfiguration.Builder |
dist(Distribution dist)
Distribution to sample initial weights from.
|
NeuralNetConfiguration.Builder |
dropOut(double dropOut)
Dropout probability.
|
NeuralNetConfiguration.Builder |
epsilon(double epsilon)
Deprecated.
Use
.updater(Adam.builder().epsilon(epsilon).build()) or similar instead |
NeuralNetConfiguration.Builder |
gradientNormalization(GradientNormalization gradientNormalization)
Gradient normalization strategy.
|
NeuralNetConfiguration.Builder |
gradientNormalizationThreshold(double threshold)
Threshold for gradient normalization, only used for GradientNormalization.ClipL2PerLayer,
GradientNormalization.ClipL2PerParamType, and GradientNormalization.ClipElementWiseAbsoluteValue
Not used otherwise. L2 threshold for first two types of clipping, or absolute value threshold for last type of clipping. |
ComputationGraphConfiguration.GraphBuilder |
graphBuilder()
Create a GraphBuilder (for creating a ComputationGraphConfiguration).
|
NeuralNetConfiguration.Builder |
inferenceWorkspaceMode(WorkspaceMode workspaceMode)
This method defines Workspace mode being used during inference:
NONE: workspace won't be used
SINGLE: one workspace will be used during whole iteration loop
SEPARATE: separate workspaces will be used for feedforward and backprop iteration loops
|
NeuralNetConfiguration.Builder |
iterations(int numIterations)
Number of optimization iterations.
|
NeuralNetConfiguration.Builder |
l1(double l1)
L1 regularization coefficient for the weights.
|
NeuralNetConfiguration.Builder |
l1Bias(double l1Bias)
L1 regularization coefficient for the bias.
|
NeuralNetConfiguration.Builder |
l2(double l2)
L2 regularization coefficient for the weights.
|
NeuralNetConfiguration.Builder |
l2Bias(double l2Bias)
L2 regularization coefficient for the bias.
|
NeuralNetConfiguration.Builder |
layer(Layer layer)
Layer class.
|
NeuralNetConfiguration.Builder |
leakyreluAlpha(double leakyreluAlpha)
Deprecated.
Use
activation(IActivation) with leaky relu, setting the alpha value directly in the constructor. |
NeuralNetConfiguration.Builder |
learningRate(double learningRate)
Learning rate.
|
NeuralNetConfiguration.Builder |
learningRateDecayPolicy(LearningRatePolicy policy)
Learning rate decay policy.
|
NeuralNetConfiguration.Builder |
learningRateSchedule(Map<Integer,Double> learningRateSchedule)
Learning rate schedule.
|
NeuralNetConfiguration.Builder |
learningRateScoreBasedDecayRate(double lrScoreBasedDecay)
Rate to decrease learningRate by when the score stops improving.
|
NeuralNetConfiguration.ListBuilder |
list()
Create a ListBuilder (for creating a MultiLayerConfiguration)
Usage: |
NeuralNetConfiguration.ListBuilder |
list(Layer... layers)
Create a ListBuilder (for creating a MultiLayerConfiguration) with the specified layers
Usage: |
NeuralNetConfiguration.Builder |
lrPolicyDecayRate(double lrPolicyDecayRate)
Set the decay rate for the learning rate decay policy.
|
NeuralNetConfiguration.Builder |
lrPolicyPower(double lrPolicyPower)
Set the power used for learning rate inverse policy.
|
NeuralNetConfiguration.Builder |
lrPolicySteps(double lrPolicySteps)
Set the number of steps used for learning decay rate steps policy.
|
NeuralNetConfiguration.Builder |
maxNumLineSearchIterations(int maxNumLineSearchIterations)
Maximum number of line search iterations.
|
NeuralNetConfiguration.Builder |
miniBatch(boolean miniBatch)
Process input as minibatch vs full dataset.
|
NeuralNetConfiguration.Builder |
minimize(boolean minimize)
Whether to minimize or maximize the objective (cost) function.
Default: minimize = true.
|
NeuralNetConfiguration.Builder |
momentum(double momentum)
Deprecated.
Use
.updater(new Nesterovs(momentum)) instead |
NeuralNetConfiguration.Builder |
momentumAfter(Map<Integer,Double> momentumAfter)
Deprecated.
Use
.updater(Nesterovs.builder().momentumSchedule(schedule).build()) instead |
NeuralNetConfiguration.Builder |
optimizationAlgo(OptimizationAlgorithm optimizationAlgo)
Optimization algorithm to use.
|
NeuralNetConfiguration.Builder |
regularization(boolean useRegularization)
Whether to use regularization (l1, l2, dropout, etc.)
|
NeuralNetConfiguration.Builder |
rho(double rho)
Deprecated.
Use
.updater(new AdaDelta(rho, epsilon)) instead |
NeuralNetConfiguration.Builder |
rmsDecay(double rmsDecay)
Deprecated.
Use
.updater(new RmsProp(rmsDecay)) instead |
NeuralNetConfiguration.Builder |
seed(int seed)
Random number generator seed.
|
NeuralNetConfiguration.Builder |
seed(long seed)
Random number generator seed.
|
NeuralNetConfiguration.Builder |
stepFunction(StepFunction stepFunction)
Step function to apply for back track line search.
|
NeuralNetConfiguration.Builder |
trainingWorkspaceMode(WorkspaceMode workspaceMode)
This method defines Workspace mode being used during training:
NONE: workspace won't be used
SINGLE: one workspace will be used during whole iteration loop
SEPARATE: separate workspaces will be used for feedforward and backprop iteration loops
|
NeuralNetConfiguration.Builder |
updater(org.nd4j.linalg.learning.config.IUpdater updater)
Gradient updater.
|
NeuralNetConfiguration.Builder |
updater(Updater updater)
Gradient updater.
|
NeuralNetConfiguration.Builder |
useDropConnect(boolean useDropConnect)
Use drop connect: multiply the weight by a binomial sampling wrt the dropout probability.
|
NeuralNetConfiguration.Builder |
weightInit(WeightInit weightInit)
Weight initialization scheme.
|
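The setters summarized above are typically chained on a single Builder. A minimal sketch of such a chain, assuming the DL4J 0.9.x class names documented here are on the classpath (all hyperparameter values are illustrative, not recommendations):

```java
// Illustrative Builder chain; assumes DL4J 0.9.x on the classpath.
NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(12345)                      // fix RNG seed for reproducibility
        .weightInit(WeightInit.XAVIER)    // weight initialization scheme
        .activation(Activation.RELU)      // neuron non-linearity
        .updater(new Nesterovs(0.9))      // gradient updater (IUpdater form)
        .learningRate(0.01)
        .regularization(true)
        .l2(1e-4)                         // L2 coefficient for the weights
        .build();                         // returns a NeuralNetConfiguration
```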
protected org.nd4j.linalg.activations.IActivation activationFn
protected WeightInit weightInit
protected double biasInit
protected Distribution dist
protected double learningRate
protected double biasLearningRate
protected double lrScoreBasedDecay
protected double l1
protected double l2
protected double l1Bias
protected double l2Bias
protected double dropOut
@Deprecated protected Updater updater
protected org.nd4j.linalg.learning.config.IUpdater iUpdater
@Deprecated protected double momentum
@Deprecated protected Map<Integer,Double> momentumSchedule
@Deprecated protected double epsilon
@Deprecated protected double rho
@Deprecated protected double rmsDecay
@Deprecated protected double adamMeanDecay
@Deprecated protected double adamVarDecay
protected Layer layer
@Deprecated protected double leakyreluAlpha
protected boolean miniBatch
protected int numIterations
protected int maxNumLineSearchIterations
protected long seed
protected boolean useRegularization
protected OptimizationAlgorithm optimizationAlgo
protected StepFunction stepFunction
protected boolean useDropConnect
protected boolean minimize
protected GradientNormalization gradientNormalization
protected double gradientNormalizationThreshold
protected LearningRatePolicy learningRatePolicy
protected double lrPolicyDecayRate
protected double lrPolicySteps
protected double lrPolicyPower
protected boolean pretrain
protected WorkspaceMode trainingWorkspaceMode
protected WorkspaceMode inferenceWorkspaceMode
protected CacheMode cacheMode
protected ConvolutionMode convolutionMode
public Builder()
public Builder(NeuralNetConfiguration newConf)
public NeuralNetConfiguration.Builder miniBatch(boolean miniBatch)
public NeuralNetConfiguration.Builder trainingWorkspaceMode(@NonNull WorkspaceMode workspaceMode)
Parameters: workspaceMode
public NeuralNetConfiguration.Builder inferenceWorkspaceMode(@NonNull WorkspaceMode workspaceMode)
Parameters: workspaceMode
public NeuralNetConfiguration.Builder cacheMode(@NonNull CacheMode cacheMode)
Parameters: cacheMode
public NeuralNetConfiguration.Builder useDropConnect(boolean useDropConnect)
See also: dropOut(double); this is the probability of retaining a weight
Parameters: useDropConnect - whether to use drop connect or not
public NeuralNetConfiguration.Builder minimize(boolean minimize)
public NeuralNetConfiguration.Builder maxNumLineSearchIterations(int maxNumLineSearchIterations)
Parameters: maxNumLineSearchIterations - > 0
public NeuralNetConfiguration.Builder layer(Layer layer)
public NeuralNetConfiguration.Builder stepFunction(StepFunction stepFunction)
public NeuralNetConfiguration.ListBuilder list()
Usage:
    .list()
    .layer(0, new DenseLayer.Builder()...build())
    ...
    .layer(n, new OutputLayer.Builder()...build())
public NeuralNetConfiguration.ListBuilder list(Layer... layers)
Usage:
    .list(
        new DenseLayer.Builder()...build(),
        ...,
        new OutputLayer.Builder()...build())
Parameters: layers - The layer configurations for the network
public ComputationGraphConfiguration.GraphBuilder graphBuilder()
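A fuller version of the list() usage shown above, as a hedged sketch: the layer sizes, nIn/nOut values, and loss function are illustrative choices, assuming DL4J 0.9.x.

```java
// Two-layer MultiLayerConfiguration via list(); all sizes are illustrative.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(42)
        .learningRate(0.01)
        .list()
        .layer(0, new DenseLayer.Builder()
                .nIn(784).nOut(128)        // e.g. 784 inputs -> 128 hidden units
                .activation(Activation.RELU)
                .build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(128).nOut(10)         // 10-class softmax output
                .activation(Activation.SOFTMAX)
                .build())
        .build();
```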
public NeuralNetConfiguration.Builder iterations(int numIterations)
public NeuralNetConfiguration.Builder seed(int seed)
public NeuralNetConfiguration.Builder seed(long seed)
public NeuralNetConfiguration.Builder optimizationAlgo(OptimizationAlgorithm optimizationAlgo)
Parameters: optimizationAlgo - Optimization algorithm to use when training
public NeuralNetConfiguration.Builder regularization(boolean useRegularization)
public NeuralNetConfiguration.Builder clone()
@Deprecated public NeuralNetConfiguration.Builder activation(String activationFunction)
Deprecated. Use activation(Activation) or activation(IActivation)
public NeuralNetConfiguration.Builder activation(org.nd4j.linalg.activations.IActivation activationFunction)
See also: activation(Activation)
public NeuralNetConfiguration.Builder activation(org.nd4j.linalg.activations.Activation activation)
@Deprecated public NeuralNetConfiguration.Builder leakyreluAlpha(double leakyreluAlpha)
Deprecated. Use activation(IActivation) with leaky relu, setting the alpha value directly in the constructor.
public NeuralNetConfiguration.Builder weightInit(WeightInit weightInit)
See also: WeightInit
public NeuralNetConfiguration.Builder biasInit(double biasInit)
Parameters: biasInit - Constant for bias initialization
public NeuralNetConfiguration.Builder dist(Distribution dist)
public NeuralNetConfiguration.Builder learningRate(double learningRate)
public NeuralNetConfiguration.Builder biasLearningRate(double biasLearningRate)
public NeuralNetConfiguration.Builder learningRateSchedule(Map<Integer,Double> learningRateSchedule)
public NeuralNetConfiguration.Builder learningRateScoreBasedDecayRate(double lrScoreBasedDecay)
public NeuralNetConfiguration.Builder l1(double l1)
public NeuralNetConfiguration.Builder l2(double l2)
public NeuralNetConfiguration.Builder l1Bias(double l1Bias)
public NeuralNetConfiguration.Builder l2Bias(double l2Bias)
public NeuralNetConfiguration.Builder dropOut(double dropOut)
Note: This sets the probability per-layer. Care should be taken when setting lower values for complex networks.
Parameters: dropOut - Dropout probability (probability of retaining an activation)
@Deprecated public NeuralNetConfiguration.Builder momentum(double momentum)
Deprecated. Use .updater(new Nesterovs(momentum)) instead. See also: Updater.NESTEROVS
@Deprecated public NeuralNetConfiguration.Builder momentumAfter(Map<Integer,Double> momentumAfter)
Deprecated. Use .updater(Nesterovs.builder().momentumSchedule(schedule).build()) instead. See also: Updater.NESTEROVS
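The deprecated momentumAfter schedule and its replacement, momentumSchedule, both take a Map<Integer,Double> keyed by iteration number. A plain-Java sketch of building such a schedule (the iteration cut-offs and momentum values are illustrative):

```java
import java.util.Map;
import java.util.TreeMap;

// Momentum schedule: iteration number -> momentum value.
// The same map shape is accepted by e.g.
// Nesterovs.builder().momentumSchedule(schedule).build().
public class MomentumScheduleDemo {
    public static Map<Integer, Double> buildSchedule() {
        Map<Integer, Double> schedule = new TreeMap<>();
        schedule.put(0, 0.5);    // start training with low momentum
        schedule.put(100, 0.9);  // raise momentum from iteration 100 onward
        return schedule;
    }

    public static void main(String[] args) {
        System.out.println(buildSchedule()); // prints {0=0.5, 100=0.9}
    }
}
```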
public NeuralNetConfiguration.Builder updater(Updater updater)
Use updater(IUpdater) to configure the updater-specific hyperparameters. See also: Updater
public NeuralNetConfiguration.Builder updater(org.nd4j.linalg.learning.config.IUpdater updater)
Gradient updater, such as Adam or Nesterovs.
Parameters: updater - Updater to use
@Deprecated public NeuralNetConfiguration.Builder rho(double rho)
Deprecated. Use .updater(new AdaDelta(rho, epsilon)) instead.
Parameters: rho
@Deprecated public NeuralNetConfiguration.Builder epsilon(double epsilon)
Deprecated. Use .updater(Adam.builder().epsilon(epsilon).build()) or similar instead.
Parameters: epsilon - Epsilon value to use for adagrad or
@Deprecated public NeuralNetConfiguration.Builder rmsDecay(double rmsDecay)
Deprecated. Use .updater(new RmsProp(rmsDecay)) instead.
@Deprecated public NeuralNetConfiguration.Builder adamMeanDecay(double adamMeanDecay)
Deprecated. Use .updater(Adam.builder().beta1(adamMeanDecay).build()) instead.
@Deprecated public NeuralNetConfiguration.Builder adamVarDecay(double adamVarDecay)
Deprecated. Use .updater(Adam.builder().beta2(adamVarDecay).build()) instead.
public NeuralNetConfiguration.Builder gradientNormalization(GradientNormalization gradientNormalization)
Parameters: gradientNormalization - Type of normalization to use. Defaults to None. See also: GradientNormalization
public NeuralNetConfiguration.Builder gradientNormalizationThreshold(double threshold)
public NeuralNetConfiguration.Builder learningRateDecayPolicy(LearningRatePolicy policy)
Parameters: policy - Type of policy to use. Defaults to None.
public NeuralNetConfiguration.Builder lrPolicyDecayRate(double lrPolicyDecayRate)
Parameters: lrPolicyDecayRate - rate
public NeuralNetConfiguration.Builder lrPolicySteps(double lrPolicySteps)
Parameters: lrPolicySteps - number of steps
public NeuralNetConfiguration.Builder lrPolicyPower(double lrPolicyPower)
Parameters: lrPolicyPower - power
public NeuralNetConfiguration.Builder convolutionMode(ConvolutionMode convolutionMode)
public NeuralNetConfiguration build()
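The deprecated per-hyperparameter setters (momentum, rho, rmsDecay, adamMeanDecay, adamVarDecay, epsilon) all map onto a single IUpdater configuration. A migration sketch using the replacement calls named in the deprecation notes (hyperparameter values are illustrative):

```java
// Old (deprecated) style set Adam hyperparameters individually:
//   .adamMeanDecay(0.9).adamVarDecay(0.999).epsilon(1e-8)
// New style: one IUpdater object carries its own hyperparameters.
NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
        .updater(Adam.builder()
                .beta1(0.9)      // replaces adamMeanDecay(0.9)
                .beta2(0.999)    // replaces adamVarDecay(0.999)
                .epsilon(1e-8)   // replaces epsilon(1e-8)
                .build())
        .build();
```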
Copyright © 2017. All rights reserved.