Package org.deeplearning4j.optimize.api
Interface ConvexOptimizer
-
- All Superinterfaces:
Serializable
- All Known Implementing Classes:
BaseOptimizer, ConjugateGradient, LBFGS, LineGradientDescent, StochasticGradientDescent
public interface ConvexOptimizer extends Serializable
-
-
Method Summary
All methods are abstract instance methods.
- int batchSize()
  The batch size for the optimizer
- ComputationGraphUpdater getComputationGraphUpdater()
- ComputationGraphUpdater getComputationGraphUpdater(boolean initializeIfReq)
- NeuralNetConfiguration getConf()
- GradientsAccumulator getGradientsAccumulator()
  Returns the GradientsAccumulator instance used in this optimizer.
- StepFunction getStepFunction()
  Returns the StepFunction defined within this Optimizer instance
- Updater getUpdater()
- Updater getUpdater(boolean initializeIfReq)
- Pair<Gradient,Double> gradientAndScore(LayerWorkspaceMgr workspaceMgr)
  The gradient and score for this optimizer
- boolean optimize(LayerWorkspaceMgr workspaceMgr)
  Calls optimize
- void postStep(INDArray line)
  After the step has been made, do an action
- void preProcessLine()
  Pre-process a line before an iteration
- double score()
  The score for the optimizer so far
- void setBatchSize(int batchSize)
  Set the batch size for the optimizer
- void setGradientsAccumulator(GradientsAccumulator accumulator)
  Specifies the GradientsAccumulator instance to be used for update sharing across multiple models
- void setListeners(Collection<TrainingListener> listeners)
- void setUpdater(Updater updater)
- void setUpdaterComputationGraph(ComputationGraphUpdater updater)
- void setupSearchState(Pair<Gradient,Double> pair)
  Based on the gradient and score, set up a search state
- void updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize, LayerWorkspaceMgr workspaceMgr)
  Update the gradient according to the configuration, such as AdaGrad, momentum, and sparsity
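To illustrate how the score()/optimize()/batchSize() contract fits together, here is a minimal, self-contained sketch in plain Java. ToyConvexOptimizer and its fixed objective f(x) = x^2 are hypothetical stand-ins chosen for this example; none of the DL4J types (Gradient, Model, LayerWorkspaceMgr) appear, and this is not DL4J's implementation.

```java
// Hypothetical, simplified sketch of the ConvexOptimizer contract:
// score() reports the current objective value, and optimize() runs
// iterations until convergence, returning whether it converged.
class ToyConvexOptimizer {
    private double x;            // single parameter being optimized
    private final double lr;     // learning rate
    private int batchSize = 32;

    ToyConvexOptimizer(double x0, double lr) {
        this.x = x0;
        this.lr = lr;
    }

    // Analogue of score(): objective value f(x) = x^2 (convex, minimum at 0)
    double score() {
        return x * x;
    }

    int batchSize() { return batchSize; }
    void setBatchSize(int batchSize) { this.batchSize = batchSize; }

    // Analogue of optimize(): plain gradient-descent steps;
    // returns true once the gradient norm falls below the tolerance
    boolean optimize(int maxIterations, double tolerance) {
        for (int i = 0; i < maxIterations; i++) {
            double gradient = 2 * x;     // d/dx of x^2
            x -= lr * gradient;          // the "step"
            if (Math.abs(gradient) < tolerance) {
                return true;             // converged
            }
        }
        return false;                    // iteration budget exhausted
    }
}
```

With x0 = 5 and lr = 0.1, each step shrinks x by a factor of 0.8, so the loop converges well within 200 iterations.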
-
-
-
Method Detail
-
score
double score()
The score for the optimizer so far
- Returns:
- the score for this optimizer so far
-
getUpdater
Updater getUpdater()
-
getUpdater
Updater getUpdater(boolean initializeIfReq)
-
getComputationGraphUpdater
ComputationGraphUpdater getComputationGraphUpdater()
-
getComputationGraphUpdater
ComputationGraphUpdater getComputationGraphUpdater(boolean initializeIfReq)
-
setUpdater
void setUpdater(Updater updater)
-
setUpdaterComputationGraph
void setUpdaterComputationGraph(ComputationGraphUpdater updater)
-
setListeners
void setListeners(Collection<TrainingListener> listeners)
-
setGradientsAccumulator
void setGradientsAccumulator(GradientsAccumulator accumulator)
This method specifies the GradientsAccumulator instance to be used for update sharing across multiple models
- Parameters:
accumulator - the GradientsAccumulator to use
-
getStepFunction
StepFunction getStepFunction()
This method returns the StepFunction defined within this Optimizer instance
- Returns:
- the StepFunction for this optimizer
-
getGradientsAccumulator
GradientsAccumulator getGradientsAccumulator()
This method returns the GradientsAccumulator instance used in this optimizer. This method can return null.
- Returns:
- the GradientsAccumulator, or null if none is set
-
getConf
NeuralNetConfiguration getConf()
-
gradientAndScore
Pair<Gradient,Double> gradientAndScore(LayerWorkspaceMgr workspaceMgr)
The gradient and score for this optimizer
- Returns:
- the gradient and score for this optimizer
-
optimize
boolean optimize(LayerWorkspaceMgr workspaceMgr)
Calls optimize
- Returns:
- whether the convex optimizer converged or not
-
batchSize
int batchSize()
The batch size for the optimizer
- Returns:
- the batch size
-
setBatchSize
void setBatchSize(int batchSize)
Set the batch size for the optimizer
- Parameters:
batchSize - the batch size to use
-
preProcessLine
void preProcessLine()
Pre-process a line before an iteration
-
postStep
void postStep(INDArray line)
After the step has been made, do an action
- Parameters:
line - the line along which the step was made
-
setupSearchState
void setupSearchState(Pair<Gradient,Double> pair)
Based on the gradient and score, set up a search state
- Parameters:
pair - the gradient and score
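As an illustration of what "setting up a search state" from a (gradient, score) pair can mean for a line-search optimizer, here is a hedged sketch: record the starting score and take the negative gradient as the search direction. The SearchState class is a hypothetical stand-in, not DL4J's implementation.

```java
// Hypothetical sketch: a steepest-descent search state built from a
// (gradient, score) pair, as a line-search optimizer might do.
class SearchState {
    final double startingScore;      // score at the start of the search
    final double[] searchDirection;  // direction to search along

    SearchState(double[] gradient, double score) {
        this.startingScore = score;
        this.searchDirection = new double[gradient.length];
        for (int i = 0; i < gradient.length; i++) {
            // steepest-descent direction: opposite of the gradient
            searchDirection[i] = -gradient[i];
        }
    }
}
```

Conjugate-gradient or L-BFGS variants would instead combine the gradient with history from previous iterations, but the state they set up plays the same role.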
-
updateGradientAccordingToParams
void updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize, LayerWorkspaceMgr workspaceMgr)
Update the gradient according to the configuration, such as AdaGrad, momentum, and sparsity
- Parameters:
gradient - the gradient to modify
model - the model with the parameters to update
batchSize - the batch size for the update
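To make the idea of in-place gradient adjustment concrete, here is a hedged sketch of a classical momentum update with minibatch averaging, in the spirit of updateGradientAccordingToParams. The MomentumUpdate class and its updateGradient method are hypothetical; DL4J's actual updaters (AdaGrad, Nesterov momentum, etc.) live in its updater classes and operate on INDArrays, not plain double arrays.

```java
// Hypothetical sketch of a momentum-style gradient adjustment.
// The gradient array is modified in place, mirroring how
// updateGradientAccordingToParams mutates its Gradient argument.
class MomentumUpdate {
    // velocity carried across calls; same length as the gradient
    static double[] velocity;

    // Scales each component by 1/batchSize, then applies classical momentum:
    // v = mu * v + g, and the gradient slot is replaced by v.
    static void updateGradient(double[] gradient, int batchSize, double mu) {
        if (velocity == null) velocity = new double[gradient.length];
        for (int i = 0; i < gradient.length; i++) {
            double g = gradient[i] / batchSize;   // average over the minibatch
            velocity[i] = mu * velocity[i] + g;
            gradient[i] = velocity[i];            // gradient is modified in place
        }
    }
}
```

Because the velocity persists between calls, a second call with the same raw gradient produces a larger step, which is the characteristic momentum behavior.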
-
-