Modifier and Type | Class and Description
---|---
class | EarlyStoppingConfiguration<T extends Model> - Early stopping configuration: specifies the configuration options for running training with early stopping. Users need to specify: (a) an EarlyStoppingModelSaver: how models will be saved (to disk, to memory, etc.; default: in memory); (b) termination conditions, of which at least one must be specified: (i) iteration termination conditions, calculated once for each minibatch, and (ii) epoch termination conditions, calculated once per epoch.
static class | EarlyStoppingConfiguration.Builder<T extends Model> - Builder for creating EarlyStoppingConfiguration instances.
interface | EarlyStoppingModelSaver<T extends Model> - Interface for saving MultiLayerNetworks learned during early stopping, and retrieving them again later.
class | EarlyStoppingResult<T extends Model> - Contains the results of early stopping training, such as why training was terminated and the score at each epoch.
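Taken together, these classes are used roughly as follows. This is a minimal sketch following the standard DL4J early stopping API; netConf, trainIter, and testIter are assumed to be an existing MultiLayerConfiguration and DataSetIterators:

```java
import org.deeplearning4j.earlystopping.EarlyStoppingConfiguration;
import org.deeplearning4j.earlystopping.EarlyStoppingResult;
import org.deeplearning4j.earlystopping.saver.InMemoryModelSaver;
import org.deeplearning4j.earlystopping.scorecalc.DataSetLossCalculator;
import org.deeplearning4j.earlystopping.termination.MaxEpochsTerminationCondition;
import org.deeplearning4j.earlystopping.trainer.EarlyStoppingTrainer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

// Stop after at most 30 epochs, score on a held-out iterator once per epoch,
// and keep the best model in memory (the default saver).
EarlyStoppingConfiguration<MultiLayerNetwork> esConf =
        new EarlyStoppingConfiguration.Builder<MultiLayerNetwork>()
                .epochTerminationConditions(new MaxEpochsTerminationCondition(30))
                .scoreCalculator(new DataSetLossCalculator(testIter, true))
                .evaluateEveryNEpochs(1)
                .modelSaver(new InMemoryModelSaver<MultiLayerNetwork>())
                .build();

EarlyStoppingTrainer trainer = new EarlyStoppingTrainer(esConf, netConf, trainIter);
EarlyStoppingResult<MultiLayerNetwork> result = trainer.fit();

System.out.println("Termination reason: " + result.getTerminationReason());
System.out.println("Best model epoch:   " + result.getBestModelEpoch());
MultiLayerNetwork bestModel = result.getBestModel();
```

For a ComputationGraph, EarlyStoppingGraphTrainer plays the same role as EarlyStoppingTrainer.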
Modifier and Type | Interface and Description
---|---
interface | EarlyStoppingListener<T extends Model> - Listener interface for conducting early stopping training.
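A sketch of a custom listener. The three callbacks shown (onStart, onEpoch, onCompletion) follow the DL4J interface, but verify the exact signatures against the Javadoc for your version:

```java
import org.deeplearning4j.earlystopping.EarlyStoppingConfiguration;
import org.deeplearning4j.earlystopping.EarlyStoppingResult;
import org.deeplearning4j.earlystopping.listener.EarlyStoppingListener;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class LoggingESListener implements EarlyStoppingListener<MultiLayerNetwork> {

    @Override
    public void onStart(EarlyStoppingConfiguration<MultiLayerNetwork> esConfig,
                        MultiLayerNetwork net) {
        System.out.println("Early stopping training started");
    }

    @Override
    public void onEpoch(int epochNum, double score,
                        EarlyStoppingConfiguration<MultiLayerNetwork> esConfig,
                        MultiLayerNetwork net) {
        // Called after each epoch has been scored
        System.out.println("Epoch " + epochNum + ": score " + score);
    }

    @Override
    public void onCompletion(EarlyStoppingResult<MultiLayerNetwork> esResult) {
        System.out.println("Done: " + esResult.getTerminationReason());
    }
}
```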
Modifier and Type | Class and Description
---|---
class | InMemoryModelSaver<T extends Model> - Saves the best (and latest) models for early stopping training to memory, for later retrieval. Note: assumes that the network is cloneable via the .clone() method.
Modifier and Type | Interface and Description
---|---
interface | ScoreCalculator<T extends Model> - Interface used to calculate a score for a neural network.
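A custom implementation might score a network by its average loss on a held-out iterator, which is essentially what the built-in DataSetLossCalculator does. A minimal sketch, assuming the interface exposes a single calculateScore(T) method as listed in this version:

```java
import org.deeplearning4j.earlystopping.scorecalc.ScoreCalculator;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class HeldOutLossCalculator implements ScoreCalculator<MultiLayerNetwork> {

    private final DataSetIterator heldOut;

    public HeldOutLossCalculator(DataSetIterator heldOut) {
        this.heldOut = heldOut;
    }

    @Override
    public double calculateScore(MultiLayerNetwork network) {
        // Average the network's loss over every batch of the held-out set;
        // lower scores are treated as better by the early stopping trainer.
        heldOut.reset();
        double total = 0.0;
        int batches = 0;
        while (heldOut.hasNext()) {
            total += network.score(heldOut.next());
            batches++;
        }
        return batches == 0 ? 0.0 : total / batches;
    }
}
```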
Modifier and Type | Class and Description
---|---
class | BaseEarlyStoppingTrainer<T extends Model> - Base/abstract class for conducting early stopping training locally (on a single machine). Can be used to train a MultiLayerNetwork or a ComputationGraph via early stopping.
interface | IEarlyStoppingTrainer<T extends Model> - Interface for early stopping trainers.
Modifier and Type | Field and Description
---|---
protected T | BaseEarlyStoppingTrainer.model
Modifier and Type | Interface and Description
---|---
interface | Classifier - A classifier, for supervised learning.
interface | Layer - Interface for a layer of a neural network.
Modifier and Type | Interface and Description
---|---
interface | IOutputLayer - Interface for output layers (those that calculate gradients with respect to a labels array).
interface | RecurrentLayer - Interface for recurrent network layers.
Modifier and Type | Class and Description
---|---
class | ComputationGraph - A ComputationGraph network is a neural network with an arbitrary (directed acyclic graph) connection structure.
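A minimal construction sketch using the graph builder; layer names and sizes are illustrative:

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
        .graphBuilder()
        .addInputs("input")                       // one input array
        .addLayer("dense", new DenseLayer.Builder()
                .nIn(784).nOut(100).activation(Activation.RELU).build(), "input")
        .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(100).nOut(10).activation(Activation.SOFTMAX).build(), "dense")
        .setOutputs("out")
        .build();

ComputationGraph graph = new ComputationGraph(conf);
graph.init();   // allocate parameters before use
```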
Modifier and Type | Class and Description
---|---
class | AbstractLayer<LayerConfT extends Layer> - A layer with input and output, but no parameters or gradients.
class | ActivationLayer - Activation layer: applies an activation function to the input, and the corresponding derivative to epsilon during backpropagation.
class | BaseLayer<LayerConfT extends BaseLayer> - A layer with parameters.
class | BaseOutputLayer<LayerConfT extends BaseOutputLayer> - Base class for output layers supporting a range of objective (loss) functions.
class | BasePretrainNetwork<LayerConfT extends BasePretrainNetwork> - Base class for any neural network used as a layer in a deep network.
class | DropoutLayer - Layer that applies dropout to its input activations.
class | FrozenLayer - For transfer learning: a frozen layer wraps another DL4J layer, preventing its parameters from being modified during training.
class | LossLayer - A flexible output "layer" that applies a loss function to its input, without MLP logic.
class | OutputLayer - Output layer supporting a range of objective (loss) functions.
Modifier and Type | Class and Description
---|---
class | Convolution1DLayer - 1D (temporal) convolutional layer.
class | ConvolutionLayer - Convolution layer.
class | ZeroPaddingLayer - Zero padding layer for convolutional neural networks.
Modifier and Type | Class and Description
---|---
class | Subsampling1DLayer - 1D (temporal) subsampling layer.
class | SubsamplingLayer - Subsampling layer.
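These implementation classes are configured via their counterparts in org.deeplearning4j.nn.conf.layers. A typical convolution plus subsampling stack, with illustrative sizes for a 28x28 single-channel input:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        // 5x5 convolution over a single-channel input, producing 20 feature maps
        .layer(0, new ConvolutionLayer.Builder(5, 5)
                .nIn(1).nOut(20).activation(Activation.RELU).build())
        // 2x2 max pooling, stride 2
        .layer(1, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                .kernelSize(2, 2).stride(2, 2).build())
        .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nOut(10).activation(Activation.SOFTMAX).build())
        .setInputType(InputType.convolutionalFlat(28, 28, 1))  // e.g. MNIST
        .build();
```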
Modifier and Type | Class and Description
---|---
class | AutoEncoder - Autoencoder.
Modifier and Type | Class and Description
---|---
class | DenseLayer - Standard fully connected feed-forward layer.
Modifier and Type | Class and Description
---|---
class | EmbeddingLayer - Embedding layer: a feed-forward layer that expects a single integer per example as input (a class number, in the range 0 to numClass-1).
Modifier and Type | Class and Description
---|---
class | RBM - Restricted Boltzmann Machine.
Modifier and Type | Class and Description
---|---
class | BatchNormalization - Batch normalization layer.
class | LocalResponseNormalization - Local response normalization: normalizes activations across adjacent channels ("brightness normalization"), as used in networks such as AlexNet.
Modifier and Type | Class and Description
---|---
class | GlobalPoolingLayer - Global pooling layer: used to do pooling over time for RNNs, and 2D pooling for CNNs. Supports the following PoolingTypes: SUM, AVG, MAX, PNORM. Can also handle mask arrays when dealing with variable-length inputs.
Modifier and Type | Class and Description
---|---
class | BaseRecurrentLayer<LayerConfT extends BaseLayer>
class | GravesBidirectionalLSTM - Bidirectional LSTM layer implementation. Read the RNN tutorial first: http://deeplearning4j.org/usingrnns.html
class | GravesLSTM - LSTM layer implementation.
class | LSTM - LSTM layer implementation.
class | RnnOutputLayer - Recurrent neural network output layer. Handles calculation of gradients etc. for various objective functions. Functionally the same as OutputLayer, but handles output and label reshaping automatically. Input and output activations are the same as for other RNN layers: 3 dimensions with shape [miniBatchSize,nIn,timeSeriesLength] and [miniBatchSize,nOut,timeSeriesLength] respectively.
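As with the convolutional classes, configuration goes through the matching classes in org.deeplearning4j.nn.conf.layers. A minimal sequence-classification sketch, with illustrative sizes (50 inputs per time step, 6 output classes):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new GravesLSTM.Builder()
                .nIn(50).nOut(200).activation(Activation.TANH).build())
        // RnnOutputLayer reshapes the 3D activations/labels internally
        .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(200).nOut(6).activation(Activation.SOFTMAX).build())
        .build();
```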
Modifier and Type | Class and Description
---|---
class | CenterLossOutputLayer - Center loss is similar to triplet loss, except that it enforces intraclass consistency and doesn't require feed-forward of multiple examples.
Modifier and Type | Class and Description
---|---
class | VariationalAutoencoder - Variational autoencoder layer.
Modifier and Type | Class and Description
---|---
class | MultiLayerNetwork - A neural network with multiple layers in a stack, usually ending in an output layer.
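End to end, configuration, initialization, and training look like this (a minimal sketch; trainIter is an assumed DataSetIterator):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(12345)
        .list()
        .layer(0, new DenseLayer.Builder().nIn(784).nOut(100)
                .activation(Activation.RELU).build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(100).nOut(10).activation(Activation.SOFTMAX).build())
        .build();

MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();          // allocate parameters before use
net.fit(trainIter);  // one pass (epoch) over the training iterator
```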
Modifier and Type | Class and Description
---|---
class | BaseMultiLayerUpdater<T extends Model> - Core functionality for applying updaters to both MultiLayerNetwork and ComputationGraph.
Modifier and Type | Field and Description
---|---
protected T | BaseMultiLayerUpdater.network
Modifier and Type | Method and Description
---|---
static Updater | UpdaterCreator.getUpdater(Model layer)
Modifier and Type | Method and Description
---|---
Solver.Builder | Solver.Builder.model(Model model)
Modifier and Type | Method and Description
---|---
void | IterationListener.iterationDone(Model model, int iteration) - Event listener, called for each iteration.
void | TrainingListener.onBackwardPass(Model model) - Called once per iteration (backward pass) after gradients have been calculated and updated; gradients are available via gradient().
void | TrainingListener.onEpochEnd(Model model) - Called once at the end of each epoch, when using methods such as MultiLayerNetwork.fit(DataSetIterator), ComputationGraph.fit(DataSetIterator) or ComputationGraph.fit(MultiDataSetIterator).
void | TrainingListener.onEpochStart(Model model) - Called once at the start of each epoch, when using methods such as MultiLayerNetwork.fit(DataSetIterator), ComputationGraph.fit(DataSetIterator) or ComputationGraph.fit(MultiDataSetIterator).
void | TrainingListener.onForwardPass(Model model, List<org.nd4j.linalg.api.ndarray.INDArray> activations) - Called once per iteration (forward pass) with the activations (usually for a MultiLayerNetwork), only at training time.
void | TrainingListener.onForwardPass(Model model, Map<String,org.nd4j.linalg.api.ndarray.INDArray> activations) - Called once per iteration (forward pass) with the activations (usually for a ComputationGraph), only at training time.
void | TrainingListener.onGradientCalculation(Model model) - Called once per iteration (backward pass) before the gradients are updated; gradients are available via gradient().
void | ConvexOptimizer.updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize) - Update the gradient according to the configuration, such as AdaGrad, momentum, and sparsity.
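In practice, listeners are attached to a network rather than invoked directly; for example, with the ScoreIterationListener listed in the next table (net and trainIter as in the sketches above):

```java
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;

// Print the network's score every 10 iterations during fit()
net.setListeners(new ScoreIterationListener(10));
net.fit(trainIter);
```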
Modifier and Type | Method and Description
---|---
protected void | EvaluativeListener.invokeListener(Model model)
void | PerformanceListener.iterationDone(Model model, int iteration)
void | ScoreIterationListener.iterationDone(Model model, int iteration)
void | EvaluativeListener.iterationDone(Model model, int iteration) - Event listener, called for each iteration.
void | SleepyTrainingListener.iterationDone(Model model, int iteration)
void | ComposableIterationListener.iterationDone(Model model, int iteration)
void | ParamAndGradientIterationListener.iterationDone(Model model, int iteration)
void | TimeIterationListener.iterationDone(Model model, int iteration)
void | CollectScoresIterationListener.iterationDone(Model model, int iteration)
void | EvaluativeListener.onBackwardPass(Model model)
void | SleepyTrainingListener.onBackwardPass(Model model)
void | EvaluativeListener.onEpochEnd(Model model)
void | SleepyTrainingListener.onEpochEnd(Model model)
void | EvaluativeListener.onEpochStart(Model model)
void | SleepyTrainingListener.onEpochStart(Model model)
void | EvaluativeListener.onForwardPass(Model model, List<org.nd4j.linalg.api.ndarray.INDArray> activations)
void | SleepyTrainingListener.onForwardPass(Model model, List<org.nd4j.linalg.api.ndarray.INDArray> activations)
void | EvaluativeListener.onForwardPass(Model model, Map<String,org.nd4j.linalg.api.ndarray.INDArray> activations)
void | SleepyTrainingListener.onForwardPass(Model model, Map<String,org.nd4j.linalg.api.ndarray.INDArray> activations)
void | EvaluativeListener.onGradientCalculation(Model model)
void | SleepyTrainingListener.onGradientCalculation(Model model)
Modifier and Type | Method and Description
---|---
void | ModelSavingCallback.call(EvaluativeListener listener, Model model, long invocationsCount, IEvaluation[] evaluations)
void | EvaluationCallback.call(EvaluativeListener listener, Model model, long invocationsCount, IEvaluation[] evaluations)
protected void | ModelSavingCallback.save(Model model, String filename) - Saves the model to the given filename.
Modifier and Type | Field and Description
---|---
protected Model | BaseOptimizer.model
Modifier and Type | Method and Description
---|---
static int | BaseOptimizer.getIterationCount(Model model)
static void | BaseOptimizer.incrementIterationCount(Model model, int incrementBy)
void | BaseOptimizer.updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize)
Modifier and Type | Method and Description
---|---
static int | EncodedGradientsAccumulator.getOptimalBufferSize(Model model, int numWorkers, int queueSize)
Modifier and Type | Method and Description
---|---
static org.nd4j.linalg.heartbeat.reports.Task | ModelSerializer.taskByModel(Model model)
static void | ModelSerializer.writeModel(Model model, File file, boolean saveUpdater) - Write a model to a file.
static void | ModelSerializer.writeModel(Model model, OutputStream stream, boolean saveUpdater) - Write a model to an output stream.
static void | ModelSerializer.writeModel(Model model, String path, boolean saveUpdater) - Write a model to a file path.
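A typical save/restore round trip; ModelSerializer.restoreMultiLayerNetwork is part of the same utility class, and saveUpdater=true also persists the updater state so training can resume later (net as in the sketches above):

```java
import java.io.File;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;

File f = new File("model.zip");
ModelSerializer.writeModel(net, f, true);   // true: also save the updater state

// Later / elsewhere: restore the network (and updater) from disk
MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(f);
```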