Modifier and Type | Class and Description
---|---
class | EarlyStoppingConfiguration<T extends Model> - Early stopping configuration: specifies the configuration options for running training with early stopping. Users need to specify: (a) an EarlyStoppingModelSaver, i.e., how models will be saved (to disk, to memory, etc.; default: in memory); and (b) termination conditions, of which at least one must be specified: (i) iteration termination conditions, evaluated once per minibatch, and (ii) epoch termination conditions, evaluated once per epoch.
static class | EarlyStoppingConfiguration.Builder<T extends Model>
interface | EarlyStoppingModelSaver<T extends Model> - Interface for saving MultiLayerNetworks learned during early stopping, and retrieving them again later.
class | EarlyStoppingResult<T extends Model> - Contains the results of early stopping training, such as why training was terminated and the score vs. epoch.
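A minimal sketch of building such a configuration, assuming a MultiLayerNetwork target and a hypothetical DataSetIterator named testIterator used for scoring:

```java
import java.util.concurrent.TimeUnit;

import org.deeplearning4j.earlystopping.EarlyStoppingConfiguration;
import org.deeplearning4j.earlystopping.saver.InMemoryModelSaver;
import org.deeplearning4j.earlystopping.scorecalc.DataSetLossCalculator;
import org.deeplearning4j.earlystopping.termination.MaxEpochsTerminationCondition;
import org.deeplearning4j.earlystopping.termination.MaxTimeIterationTerminationCondition;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

// testIterator: a DataSetIterator over the validation set (placeholder)
EarlyStoppingConfiguration<MultiLayerNetwork> esConf =
        new EarlyStoppingConfiguration.Builder<MultiLayerNetwork>()
                // Epoch termination condition: stop after at most 30 epochs
                .epochTerminationConditions(new MaxEpochsTerminationCondition(30))
                // Iteration termination condition: stop after at most 20 minutes
                .iterationTerminationConditions(
                        new MaxTimeIterationTerminationCondition(20, TimeUnit.MINUTES))
                // Score candidate models by average loss on the held-out data
                .scoreCalculator(new DataSetLossCalculator(testIterator, true))
                .evaluateEveryNEpochs(1)
                // Keep the best (and latest) model in memory
                .modelSaver(new InMemoryModelSaver<MultiLayerNetwork>())
                .build();
```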
Modifier and Type | Interface and Description
---|---
interface | EarlyStoppingListener<T extends Model> - Listener interface for conducting early stopping training.

Modifier and Type | Class and Description
---|---
class | InMemoryModelSaver<T extends Model> - Saves the best (and latest) models for early stopping training to memory, for later retrieval. Note: assumes the network is cloneable via its clone() method.

Modifier and Type | Interface and Description
---|---
interface | ScoreCalculator<T extends Model> - Interface used to calculate a score for a neural network.
In the method signatures below, INDArray abbreviates org.nd4j.linalg.api.ndarray.INDArray.

Modifier and Type | Method and Description
---|---
protected INDArray[] | VAEReconErrorScoreCalculator.output(Model network, INDArray[] input, INDArray[] fMask, INDArray[] lMask)
protected INDArray[] | VAEReconProbScoreCalculator.output(Model network, INDArray[] input, INDArray[] fMask, INDArray[] lMask)
protected INDArray[] | DataSetLossCalculator.output(Model network, INDArray[] input, INDArray[] fMask, INDArray[] lMask)
protected INDArray[] | AutoencoderScoreCalculator.output(Model network, INDArray[] input, INDArray[] fMask, INDArray[] lMask)
protected INDArray | VAEReconErrorScoreCalculator.output(Model net, INDArray input, INDArray fMask, INDArray lMask)
protected INDArray | VAEReconProbScoreCalculator.output(Model network, INDArray input, INDArray fMask, INDArray lMask)
protected INDArray | DataSetLossCalculator.output(Model network, INDArray input, INDArray fMask, INDArray lMask)
protected INDArray | AutoencoderScoreCalculator.output(Model net, INDArray input, INDArray fMask, INDArray lMask)
protected double | VAEReconErrorScoreCalculator.scoreMinibatch(Model network, INDArray[] features, INDArray[] labels, INDArray[] fMask, INDArray[] lMask, INDArray[] output)
protected double | VAEReconProbScoreCalculator.scoreMinibatch(Model network, INDArray[] features, INDArray[] labels, INDArray[] fMask, INDArray[] lMask, INDArray[] output)
protected double | DataSetLossCalculator.scoreMinibatch(Model network, INDArray[] features, INDArray[] labels, INDArray[] fMask, INDArray[] lMask, INDArray[] output)
protected double | AutoencoderScoreCalculator.scoreMinibatch(Model network, INDArray[] features, INDArray[] labels, INDArray[] fMask, INDArray[] lMask, INDArray[] output)
protected double | VAEReconErrorScoreCalculator.scoreMinibatch(Model network, INDArray features, INDArray labels, INDArray fMask, INDArray lMask, INDArray output)
protected double | VAEReconProbScoreCalculator.scoreMinibatch(Model net, INDArray features, INDArray labels, INDArray fMask, INDArray lMask, INDArray output)
protected double | AutoencoderScoreCalculator.scoreMinibatch(Model network, INDArray features, INDArray labels, INDArray fMask, INDArray lMask, INDArray output)
Modifier and Type | Class and Description
---|---
class | BaseIEvaluationScoreCalculator<T extends Model,U extends IEvaluation> - Base score function based on an IEvaluation instance.
class | BaseScoreCalculator<T extends Model>

Modifier and Type | Class and Description
---|---
class | BaseEarlyStoppingTrainer<T extends Model> - Base/abstract class for conducting early stopping training locally (on a single machine). Can be used to train a MultiLayerNetwork or a ComputationGraph via early stopping.
interface | IEarlyStoppingTrainer<T extends Model> - Interface for early stopping trainers.
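Continuing the sketch from above, training can then be run locally with EarlyStoppingTrainer; netConf (a MultiLayerConfiguration) and trainIterator (a DataSetIterator) are assumed placeholders:

```java
import org.deeplearning4j.earlystopping.EarlyStoppingResult;
import org.deeplearning4j.earlystopping.trainer.EarlyStoppingTrainer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

// esConf: the EarlyStoppingConfiguration built earlier
EarlyStoppingTrainer trainer = new EarlyStoppingTrainer(esConf, netConf, trainIterator);

// Run early stopping training and inspect the result
EarlyStoppingResult<MultiLayerNetwork> result = trainer.fit();
System.out.println("Termination reason:  " + result.getTerminationReason());
System.out.println("Termination details: " + result.getTerminationDetails());
System.out.println("Best epoch:          " + result.getBestModelEpoch());
System.out.println("Best score:          " + result.getBestModelScore());

// Retrieve the best model found during training
MultiLayerNetwork bestModel = result.getBestModel();
```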
Modifier and Type | Field and Description
---|---
protected T | BaseEarlyStoppingTrainer.model

Modifier and Type | Method and Description
---|---
protected void | BaseEarlyStoppingTrainer.triggerEpochListeners(boolean epochStart, Model model, int epochNum)
Modifier and Type | Interface and Description
---|---
interface | Classifier - A classifier (for supervised learning).
interface | Layer - Interface for a layer of a neural network.

Modifier and Type | Interface and Description
---|---
interface | IOutputLayer - Interface for output layers (those that calculate gradients with respect to a labels array).
interface | RecurrentLayer
Modifier and Type | Class and Description
---|---
class | ComputationGraph - A ComputationGraph network is a neural network with an arbitrary (directed acyclic graph) connection structure.
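A sketch of configuring and initializing a small graph via the graphBuilder() API; the vertex names and layer sizes are illustrative:

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
        .graphBuilder()
        .addInputs("input")
        // Each layer is a named vertex, wired to the named inputs it consumes
        .addLayer("dense", new DenseLayer.Builder().nIn(784).nOut(100)
                .activation(Activation.RELU).build(), "input")
        .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(100).nOut(10).activation(Activation.SOFTMAX).build(), "dense")
        .setOutputs("out")
        .build();

ComputationGraph graph = new ComputationGraph(conf);
graph.init();
```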
Modifier and Type | Class and Description
---|---
class | AbstractLayer<LayerConfT extends Layer> - A layer with input and output, but no parameters or gradients.
class | ActivationLayer - Activation layer: applies an activation function to the input, and the corresponding derivative to epsilon during backprop.
class | BaseLayer<LayerConfT extends BaseLayer> - A layer with parameters.
class | BaseOutputLayer<LayerConfT extends BaseOutputLayer> - Output layer with different objective co-occurrences for different objectives.
class | BasePretrainNetwork<LayerConfT extends BasePretrainNetwork> - Baseline class for any neural network used as a layer in a deep network.
class | DropoutLayer
class | FrozenLayer - For purposes of transfer learning: a frozen layer wraps another DL4J layer within it.
class | FrozenLayerWithBackprop - Freezes the parameters of the layer it wraps, but allows backpropagation to continue.
class | LossLayer - A flexible output "layer" that applies a loss function to its input, without MLP logic.
class | OutputLayer - Output layer with different objective co-occurrences for different objectives.
Modifier and Type | Class and Description
---|---
class | CnnLossLayer - Convolutional neural network loss layer. Handles the calculation of gradients etc. for various objective functions. Note: CnnLossLayer does not have any parameters.
class | Convolution1DLayer - 1D (temporal) convolutional layer.
class | Convolution3DLayer - 3D convolution layer implementation.
class | ConvolutionLayer - Convolution layer.
class | Cropping1DLayer - Zero cropping layer for 1D convolutional neural networks.
class | Cropping2DLayer - Zero cropping layer for 2D convolutional neural networks.
class | Cropping3DLayer - Cropping layer for 3D convolutional neural networks.
class | Deconvolution2DLayer - 2D deconvolution layer implementation.
class | DepthwiseConvolution2DLayer - 2D depth-wise convolution layer implementation.
class | SeparableConvolution2DLayer - 2D separable convolution layer implementation. Separable convolutions split a regular convolution operation into two simpler operations, which are usually computationally more efficient.
class | SpaceToBatch - Space-to-batch utility layer for convolutional input types.
class | SpaceToDepth - Space-to-channels utility layer for convolutional input types.
class | ZeroPadding1DLayer - Zero padding 1D layer for convolutional neural networks.
class | ZeroPadding3DLayer - Zero padding 3D layer for convolutional neural networks.
class | ZeroPaddingLayer - Zero padding layer for convolutional neural networks.

Modifier and Type | Class and Description
---|---
class | Subsampling1DLayer - 1D (temporal) subsampling layer.
class | Subsampling3DLayer - 3D subsampling layer, used for downsampling a 3D convolution.
class | SubsamplingLayer - Subsampling layer.
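These implementation classes are created by the framework from the builder classes of the same names in org.deeplearning4j.nn.conf.layers. A sketch of a convolution plus max-pooling block; the kernel sizes and channel counts are illustrative:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;

ConvolutionLayer conv = new ConvolutionLayer.Builder(5, 5)   // 5x5 kernel
        .nIn(1)          // input channels (e.g., grayscale images)
        .nOut(20)        // output channels (feature maps)
        .stride(1, 1)
        .build();

SubsamplingLayer pool = new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
        .kernelSize(2, 2)   // 2x2 max pooling
        .stride(2, 2)       // halves the spatial dimensions
        .build();
```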
Modifier and Type | Class and Description
---|---
class | Upsampling1D - 1D upsampling layer.
class | Upsampling2D - 2D upsampling layer.
class | Upsampling3D - 3D upsampling layer.

Modifier and Type | Class and Description
---|---
class | AutoEncoder - Autoencoder.

Modifier and Type | Class and Description
---|---
class | DenseLayer

Modifier and Type | Class and Description
---|---
class | ElementWiseMultiplicationLayer - Elementwise multiplication layer with weights: implements out = activationFn(input .* w + b), where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. Note that the input and output sizes of this layer are the same.

Modifier and Type | Class and Description
---|---
class | EmbeddingLayer - Embedding layer: a feed-forward layer that expects a single integer per example as input (a class number, in the range 0 to numClass - 1).
class | EmbeddingSequenceLayer - Embedding layer for sequences: a feed-forward layer that expects a fixed-length number (inputLength) of integer indices per example as input, each in the range 0 to numClasses - 1.
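A sketch using the corresponding configuration class in org.deeplearning4j.nn.conf.layers; vocabSize and embeddingDim are illustrative placeholders:

```java
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;

int vocabSize = 10_000;   // number of distinct indices (inputs range over 0 to vocabSize - 1)
int embeddingDim = 128;   // size of the learned embedding vectors

EmbeddingLayer embedding = new EmbeddingLayer.Builder()
        .nIn(vocabSize)       // each input example is a single integer index in [0, nIn)
        .nOut(embeddingDim)   // mapped to a dense vector of this length
        .build();
```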
Modifier and Type | Class and Description
---|---
class | BatchNormalization - Batch normalization layer.
class | LocalResponseNormalization - Local response normalization: a deep neural net normalization approach that normalizes activations between layers ("brightness normalization"); used for nets like AlexNet.

Modifier and Type | Class and Description
---|---
class | Yolo2OutputLayer - Output (loss) layer for the YOLOv2 object detection model, based on the papers YOLO9000: Better, Faster, Stronger - Redmon & Farhadi (2016), https://arxiv.org/abs/1612.08242, and You Only Look Once: Unified, Real-Time Object Detection - Redmon et al.

Modifier and Type | Class and Description
---|---
class | OCNNOutputLayer - Layer implementation for the one-class neural network output layer; see the corresponding OCNNOutputLayer configuration class for details.

Modifier and Type | Class and Description
---|---
class | GlobalPoolingLayer - Global pooling layer: used for pooling over time for RNNs, and 2D pooling for CNNs. Supports the following PoolingTypes: SUM, AVG, MAX, PNORM. Can also handle mask arrays when dealing with variable-length inputs.
Modifier and Type | Class and Description
---|---
class | BaseRecurrentLayer<LayerConfT extends BaseLayer>
class | BidirectionalLayer - Bidirectional is a "wrapper" layer: it wraps any uni-directional RNN layer to make it bidirectional. Multiple modes are supported; these specify how the activations from the forward and backward RNN networks should be combined.
class | GravesBidirectionalLSTM - Bidirectional LSTM layer implementation. See first the RNN tutorial: http://deeplearning4j.org/usingrnns.html
class | GravesLSTM - LSTM layer implementation.
class | LastTimeStepLayer - LastTimeStep is a "wrapper" layer: it wraps any RNN layer, extracts the last time step during the forward pass, and returns it as a row vector (per example).
class | LSTM - LSTM layer implementation.
class | MaskZeroLayer - Masks timesteps with 0 activation.
class | RnnLossLayer - Recurrent neural network loss layer. Handles the calculation of gradients etc. for various objective functions. Note: unlike RnnOutputLayer, RnnLossLayer does not have any parameters, i.e., there is no time-distributed dense component.
class | RnnOutputLayer - Recurrent neural network output layer. Handles the calculation of gradients etc. for various objective functions. Functionally the same as OutputLayer, but handles output and label reshaping automatically. Input and output activations are the same as for other RNN layers: 3 dimensions, with shapes [miniBatchSize,nIn,timeSeriesLength] and [miniBatchSize,nOut,timeSeriesLength] respectively.
class | SimpleRnn - Simple RNN, aka the "vanilla" RNN: the simplest type of recurrent neural network layer.
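A sketch combining the configuration counterparts of two of these layers (LSTM and RnnOutputLayer, from org.deeplearning4j.nn.conf.layers) into a small sequence classifier; the sizes (50 inputs, 200 LSTM units, 5 classes) are illustrative:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration rnnConf = new NeuralNetConfiguration.Builder()
        .list()
        // Input: 3d activations with shape [miniBatchSize, nIn, timeSeriesLength]
        .layer(0, new LSTM.Builder().nIn(50).nOut(200)
                .activation(Activation.TANH).build())
        // Output: per-time-step class probabilities, reshaped automatically
        .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(200).nOut(5).activation(Activation.SOFTMAX).build())
        .build();
```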
Modifier and Type | Class and Description
---|---
class | SameDiffLayer

Modifier and Type | Class and Description
---|---
class | CenterLossOutputLayer - Center loss is similar to triplet loss, except that it enforces intra-class consistency and doesn't require a feed-forward of multiple examples.

Modifier and Type | Class and Description
---|---
class | MaskLayer - Applies the mask array to the forward-pass activations and backward-pass gradients passing through this layer.

Modifier and Type | Class and Description
---|---
class | VariationalAutoencoder - Variational autoencoder layer.

Modifier and Type | Class and Description
---|---
class | BaseWrapperLayer - Abstract wrapper layer.

Modifier and Type | Class and Description
---|---
class | MultiLayerNetwork - MultiLayerNetwork is a neural network with multiple layers in a stack, and usually an output layer.
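A minimal sketch of constructing and initializing a MultiLayerNetwork from a stacked configuration; the layer sizes are illustrative:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(123)   // fixed RNG seed for reproducible weight initialization
        .list()
        .layer(0, new DenseLayer.Builder().nIn(784).nOut(100)
                .activation(Activation.RELU).build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(100).nOut(10).activation(Activation.SOFTMAX).build())
        .build();

MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();                      // allocate and initialize parameters
// net.fit(trainIterator);       // then train, e.g. with a DataSetIterator (placeholder)
```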
Modifier and Type | Class and Description
---|---
class | BaseMultiLayerUpdater<T extends Model> - BaseMultiLayerUpdater: core functionality for applying updaters to a MultiLayerNetwork or ComputationGraph.

Modifier and Type | Field and Description
---|---
protected T | BaseMultiLayerUpdater.network

Modifier and Type | Method and Description
---|---
static Updater | UpdaterCreator.getUpdater(Model layer)

Modifier and Type | Method and Description
---|---
Solver.Builder | Solver.Builder.model(Model model)
Modifier and Type | Method and Description
---|---
void | TrainingListener.iterationDone(Model model, int iteration, int epoch) - Event listener, called for each iteration.
abstract void | IterationListener.iterationDone(Model model, int iteration, int epoch) - Deprecated. Event listener, called for each iteration.
void | BaseTrainingListener.iterationDone(Model model, int iteration, int epoch)
void | TrainingListener.onBackwardPass(Model model) - Called once per iteration (backward pass) after the gradients have been calculated and updated. Gradients are available via gradient().
void | BaseTrainingListener.onBackwardPass(Model model)
void | TrainingListener.onEpochEnd(Model model) - Called once at the end of each epoch, when using methods such as MultiLayerNetwork.fit(DataSetIterator), ComputationGraph.fit(DataSetIterator) or ComputationGraph.fit(MultiDataSetIterator).
void | BaseTrainingListener.onEpochEnd(Model model)
void | TrainingListener.onEpochStart(Model model) - Called once at the start of each epoch, when using methods such as MultiLayerNetwork.fit(DataSetIterator), ComputationGraph.fit(DataSetIterator) or ComputationGraph.fit(MultiDataSetIterator).
void | BaseTrainingListener.onEpochStart(Model model)
void | TrainingListener.onForwardPass(Model model, List<INDArray> activations) - Called once per iteration (forward pass) for the activations (usually for a MultiLayerNetwork); only at training time.
void | BaseTrainingListener.onForwardPass(Model model, List<INDArray> activations)
void | TrainingListener.onForwardPass(Model model, Map<String,INDArray> activations) - Called once per iteration (forward pass) for the activations (usually for a ComputationGraph); only at training time.
void | BaseTrainingListener.onForwardPass(Model model, Map<String,INDArray> activations)
void | TrainingListener.onGradientCalculation(Model model) - Called once per iteration (backward pass) before the gradients are updated. Gradients are available via gradient().
void | BaseTrainingListener.onGradientCalculation(Model model)
void | ConvexOptimizer.updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize, LayerWorkspaceMgr workspaceMgr) - Updates the gradient according to the configuration, such as AdaGrad, momentum, and sparsity.
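BaseTrainingListener provides no-op implementations of all of these callbacks, so a custom listener only needs to override the events it cares about. A sketch:

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.optimize.api.BaseTrainingListener;

public class ScoreLoggingListener extends BaseTrainingListener {
    @Override
    public void iterationDone(Model model, int iteration, int epoch) {
        // Log the current score every 100 iterations
        if (iteration % 100 == 0) {
            System.out.println("epoch " + epoch + ", iteration " + iteration
                    + ", score " + model.score());
        }
    }

    @Override
    public void onEpochEnd(Model model) {
        System.out.println("epoch finished, score " + model.score());
    }
}
```

Attach it like any other listener, e.g. net.setListeners(new ScoreLoggingListener()).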
Modifier and Type | Method and Description
---|---
protected void | EvaluativeListener.invokeListener(Model model)
void | ParamAndGradientIterationListener.iterationDone(Model model, int iteration, int epoch)
void | ScoreIterationListener.iterationDone(Model model, int iteration, int epoch)
void | CollectScoresIterationListener.iterationDone(Model model, int iteration, int epoch)
void | CollectScoresListener.iterationDone(Model model, int iteration, int epoch)
void | ComposableIterationListener.iterationDone(Model model, int iteration, int epoch)
void | PerformanceListener.iterationDone(Model model, int iteration, int epoch)
void | EvaluativeListener.iterationDone(Model model, int iteration, int epoch) - Event listener, called for each iteration.
void | SleepyTrainingListener.iterationDone(Model model, int iteration, int epoch)
void | TimeIterationListener.iterationDone(Model model, int iteration, int epoch)
void | EvaluativeListener.onBackwardPass(Model model)
void | SleepyTrainingListener.onBackwardPass(Model model)
void | EvaluativeListener.onEpochEnd(Model model)
void | SleepyTrainingListener.onEpochEnd(Model model)
void | EvaluativeListener.onEpochStart(Model model)
void | SleepyTrainingListener.onEpochStart(Model model)
void | EvaluativeListener.onForwardPass(Model model, List<INDArray> activations)
void | SleepyTrainingListener.onForwardPass(Model model, List<INDArray> activations)
void | EvaluativeListener.onForwardPass(Model model, Map<String,INDArray> activations)
void | SleepyTrainingListener.onForwardPass(Model model, Map<String,INDArray> activations)
void | EvaluativeListener.onGradientCalculation(Model model)
void | SleepyTrainingListener.onGradientCalculation(Model model)
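These built-in listeners are attached to a network in the same way as custom ones; for example, assuming net is an initialized MultiLayerNetwork:

```java
import org.deeplearning4j.optimize.listeners.PerformanceListener;
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;

net.setListeners(new ScoreIterationListener(10),    // log the score every 10 iterations
                 new PerformanceListener(10));      // log throughput stats every 10 iterations
```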
Modifier and Type | Method and Description
---|---
void | ModelSavingCallback.call(EvaluativeListener listener, Model model, long invocationsCount, IEvaluation[] evaluations)
void | EvaluationCallback.call(EvaluativeListener listener, Model model, long invocationsCount, IEvaluation[] evaluations)
protected void | ModelSavingCallback.save(Model model, String filename) - Saves the model.

Modifier and Type | Method and Description
---|---
protected static int | CheckpointListener.getEpoch(Model model)
protected static int | CheckpointListener.getIter(Model model)
protected static String | CheckpointListener.getModelType(Model model)
void | CheckpointListener.iterationDone(Model model, int iteration, int epoch)
void | CheckpointListener.onEpochEnd(Model model)
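CheckpointListener is configured through its Builder; a sketch that, assuming these builder options, writes a checkpoint every epoch and keeps only the three most recent (the directory path is illustrative):

```java
import java.io.File;
import org.deeplearning4j.optimize.listeners.CheckpointListener;

CheckpointListener checkpoints = new CheckpointListener.Builder(new File("checkpoints/"))
        .saveEveryNEpochs(1)   // write a checkpoint at the end of every epoch
        .keepLast(3)           // retain only the 3 most recent checkpoint files
        .build();

net.setListeners(checkpoints);
```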
Modifier and Type | Field and Description
---|---
protected Model | BaseOptimizer.model

Modifier and Type | Method and Description
---|---
static void | BaseOptimizer.applyConstraints(Model model)
static int | BaseOptimizer.getEpochCount(Model model)
static int | BaseOptimizer.getIterationCount(Model model)
static void | BaseOptimizer.incrementIterationCount(Model model, int incrementBy)
void | BaseOptimizer.updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize, LayerWorkspaceMgr workspaceMgr)

Modifier and Type | Method and Description
---|---
static int | EncodedGradientsAccumulator.getOptimalBufferSize(Model model, int numWorkers, int queueSize)
Modifier and Type | Method and Description
---|---
static org.nd4j.linalg.heartbeat.reports.Task | ModelSerializer.taskByModel(Model model)
static void | ModelSerializer.writeModel(Model model, File file, boolean saveUpdater) - Writes a model to a file.
static void | ModelSerializer.writeModel(Model model, File file, boolean saveUpdater, org.nd4j.linalg.dataset.api.preprocessor.DataNormalization dataNormalization) - Writes a model to a file, together with a data normalizer.
static void | ModelSerializer.writeModel(Model model, OutputStream stream, boolean saveUpdater) - Writes a model to an output stream.
static void | ModelSerializer.writeModel(Model model, OutputStream stream, boolean saveUpdater, org.nd4j.linalg.dataset.api.preprocessor.DataNormalization dataNormalization) - Writes a model to an output stream, together with a data normalizer.
static void | ModelSerializer.writeModel(Model model, String path, boolean saveUpdater) - Writes a model to the given file path.