Modifier and Type | Class and Description |
---|---|
class | BaseOutputLayer<LayerConfT extends BaseOutputLayer> - Output layer supporting different objective (loss) functions. |
class | BasePretrainNetwork<LayerConfT extends BasePretrainNetwork> - Base class for any neural network used as a layer in a deep network. |
class | DropoutLayer - Dropout layer: applies dropout to its input activations. |
class | LossLayer - A flexible output "layer" that applies a loss function to its input, without any MLP logic. |
class | OutputLayer - Output layer supporting different objective (loss) functions. |
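The classes in this table are the layer implementations; in a typical Deeplearning4j workflow a network is assembled from the matching builder classes under org.deeplearning4j.nn.conf.layers. A minimal sketch, assuming the 0.x-era builder API, of a network whose only layer is a softmax OutputLayer (layer sizes are illustrative):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class OutputLayerSketch {
    public static void main(String[] args) {
        // A single softmax output layer over 10 classes, trained with
        // negative log likelihood (sizes here are illustrative).
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(12345)
                .list()
                .layer(0, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(784)
                        .nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
    }
}
```

DropoutLayer and LossLayer are configured in the same style through their own builders.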
Modifier and Type | Class and Description |
---|---|
class | Convolution1DLayer - 1D (temporal) convolutional layer. |
class | ConvolutionLayer - Convolution layer. |
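A rough sketch of how the 2D convolution layer is usually configured, assuming the ConvolutionLayer builder in org.deeplearning4j.nn.conf.layers (kernel size, channel counts, and stride are illustrative):

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.nd4j.linalg.activations.Activation;

public class ConvolutionLayerSketch {
    public static ConvolutionLayer conv() {
        // 5x5 convolution over one input channel, producing 20 feature maps
        return new ConvolutionLayer.Builder(5, 5)
                .nIn(1)          // input channels
                .nOut(20)        // output feature maps
                .stride(1, 1)
                .activation(Activation.RELU)
                .build();
    }
}
```

Convolution1DLayer has an analogous builder for temporal (1D) input.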
Modifier and Type | Class and Description |
---|---|
class | AutoEncoder - Autoencoder. |
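A sketch of a denoising autoencoder layer, assuming the AutoEncoder builder in org.deeplearning4j.nn.conf.layers exposes a corruptionLevel setting (all values illustrative):

```java
import org.deeplearning4j.nn.conf.layers.AutoEncoder;
import org.nd4j.linalg.activations.Activation;

public class AutoEncoderSketch {
    public static AutoEncoder autoEncoder() {
        // Compress 784 inputs to 250 hidden units; corrupt 30% of the input
        // during layerwise pretraining (denoising autoencoder).
        return new AutoEncoder.Builder()
                .nIn(784)
                .nOut(250)
                .corruptionLevel(0.3)   // assumed builder setting for input corruption
                .activation(Activation.SIGMOID)
                .build();
    }
}
```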
Modifier and Type | Class and Description |
---|---|
class | DenseLayer - Fully connected (dense) feed-forward layer. |
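A minimal sketch of a fully connected hidden layer via the DenseLayer builder (sizes illustrative):

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.activations.Activation;

public class DenseLayerSketch {
    public static DenseLayer hidden() {
        // Fully connected layer: 784 inputs -> 256 ReLU units
        return new DenseLayer.Builder()
                .nIn(784)
                .nOut(256)
                .activation(Activation.RELU)
                .build();
    }
}
```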
Modifier and Type | Class and Description |
---|---|
class | EmbeddingLayer - Embedding layer: feed-forward layer that expects a single integer per example as input (a class number in the range 0 to numClass-1). |
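A sketch of an embedding layer that maps integer class indices to dense vectors, assuming the EmbeddingLayer builder (vocabulary size and embedding dimension are illustrative):

```java
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;
import org.nd4j.linalg.activations.Activation;

public class EmbeddingLayerSketch {
    public static EmbeddingLayer embedding() {
        // Maps an integer index in [0, 9999] to a 128-dimensional vector;
        // equivalent to a DenseLayer fed one-hot input, but much cheaper.
        return new EmbeddingLayer.Builder()
                .nIn(10000)   // number of classes (vocabulary size)
                .nOut(128)    // embedding dimension
                .activation(Activation.IDENTITY)
                .build();
    }
}
```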
Modifier and Type | Class and Description |
---|---|
class | RBM - Restricted Boltzmann Machine. |
Modifier and Type | Class and Description |
---|---|
class | BatchNormalization - Batch normalization layer. |
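A sketch of a batch normalization layer via the BatchNormalization builder; the layer size can often be inferred from the surrounding layers, so the explicit value here is illustrative:

```java
import org.deeplearning4j.nn.conf.layers.BatchNormalization;

public class BatchNormalizationSketch {
    public static BatchNormalization batchNorm() {
        // Normalizes 256 incoming activations per mini-batch
        return new BatchNormalization.Builder()
                .nOut(256)
                .build();
    }
}
```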
Modifier and Type | Class and Description |
---|---|
class | BaseRecurrentLayer<LayerConfT extends BaseLayer> - Base class for recurrent layers. |
class | GravesBidirectionalLSTM - Bidirectional LSTM layer implementation (see the RNN tutorial first: http://deeplearning4j.org/usingrnns.html). |
class | GravesLSTM - LSTM layer implementation. |
class | LSTM - LSTM layer implementation. |
class | RnnOutputLayer - Recurrent Neural Network Output Layer. Handles gradient calculation for various objective functions. Functionally the same as OutputLayer, but handles output and label reshaping automatically. Input and output activations have the same shape as other RNN layers: 3 dimensions, [miniBatchSize,nIn,timeSeriesLength] and [miniBatchSize,nOut,timeSeriesLength] respectively. |
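A sketch of a small recurrent network pairing GravesLSTM with RnnOutputLayer, assuming the 0.x builder API; as noted above, activations are 3D with shape [miniBatchSize,nIn,timeSeriesLength] (layer sizes illustrative):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class RnnSketch {
    public static void main(String[] args) {
        // LSTM layer followed by an RNN output layer; RnnOutputLayer handles
        // the reshaping of outputs and labels automatically.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(12345)
                .list()
                .layer(0, new GravesLSTM.Builder()
                        .nIn(50)
                        .nOut(100)
                        .activation(Activation.TANH)
                        .build())
                .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(100)
                        .nOut(50)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
    }
}
```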
Modifier and Type | Method and Description |
---|---|
static FwdPassReturn | LSTMHelpers.activateHelper(BaseLayer layer, NeuralNetConfiguration conf, org.nd4j.linalg.activations.IActivation gateActivationFn, org.nd4j.linalg.api.ndarray.INDArray input, org.nd4j.linalg.api.ndarray.INDArray recurrentWeights, org.nd4j.linalg.api.ndarray.INDArray originalInputWeights, org.nd4j.linalg.api.ndarray.INDArray biases, boolean training, org.nd4j.linalg.api.ndarray.INDArray originalPrevOutputActivations, org.nd4j.linalg.api.ndarray.INDArray originalPrevMemCellState, boolean forBackprop, boolean forwards, String inputWeightKey, org.nd4j.linalg.api.ndarray.INDArray maskArray, boolean hasPeepholeConnections, LSTMHelper helper, CacheMode cacheMode) - Returns a FwdPassReturn object with activations/INDArrays. |
Modifier and Type | Class and Description |
---|---|
class | CenterLossOutputLayer - Center loss is similar to triplet loss except that it enforces intraclass consistency and doesn't require feed forward of multiple examples. |
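A sketch of a center-loss output layer, assuming the CenterLossOutputLayer builder exposes alpha (center update rate) and lambda (weight of the center-loss term), as in the DL4J center-loss examples (values illustrative):

```java
import org.deeplearning4j.nn.conf.layers.CenterLossOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class CenterLossSketch {
    public static CenterLossOutputLayer centerLoss() {
        // Softmax classification plus a center-loss term that pulls each
        // example's embedding towards its class center.
        return new CenterLossOutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .nIn(128)
                .nOut(10)
                .activation(Activation.SOFTMAX)
                .alpha(0.1)      // assumed: how quickly class centers are updated
                .lambda(0.003)   // assumed: strength of the center-loss penalty
                .build();
    }
}
```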