Package org.deeplearning4j.nn.conf.preprocessor

Class and Description
BaseInputPreProcessor |
Cnn3DToFeedForwardPreProcessor
A preprocessor to allow 3D CNN and standard feed-forward network layers to be used together.
For example, CNN3D -> DenseLayer. This does two things: (a) reshapes 5d activations out of the CNN layer (with shape [numExamples, numChannels, inputDepth, inputHeight, inputWidth]) into 2d activations (with shape [numExamples, inputDepth*inputHeight*inputWidth*numChannels]) for use in a feed-forward layer; (b) reshapes 2d epsilons (weights*deltas) out of the feed-forward layer (with shape [numExamples, inputDepth*inputHeight*inputWidth*numChannels]) into 5d epsilons (with shape [numExamples, numChannels, inputDepth, inputHeight, inputWidth]) suitable to feed into CNN layers. Note: numChannels is equivalent to featureMaps referenced in different literature.
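A shape-only illustration of this contract, as a minimal ND4J sketch with hypothetical sizes (the preprocessor performs the equivalent reshape inside a network, plus the inverse reshape for epsilons on the backward pass):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Hypothetical sizes: 8 examples, 2 channels, 4x10x10 volumes
INDArray cnn3dActivations = Nd4j.rand(new int[]{8, 2, 4, 10, 10});    // [numExamples, numChannels, inputDepth, inputHeight, inputWidth]
INDArray feedForwardIn = cnn3dActivations.reshape(8, 2 * 4 * 10 * 10); // [numExamples, inputDepth*inputHeight*inputWidth*numChannels]
```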
CnnToFeedForwardPreProcessor
A preprocessor to allow CNN and standard feed-forward network layers to be used together.
For example, CNN -> DenseLayer. This does two things: (a) reshapes 4d activations out of the CNN layer (with shape [numExamples, numChannels, inputHeight, inputWidth]) into 2d activations (with shape [numExamples, inputHeight*inputWidth*numChannels]) for use in a feed-forward layer; (b) reshapes 2d epsilons (weights*deltas) out of the feed-forward layer (with shape [numExamples, inputHeight*inputWidth*numChannels]) into 4d epsilons (with shape [numExamples, numChannels, inputHeight, inputWidth]) suitable to feed into CNN layers. Note: numChannels is equivalent to channels or featureMaps referenced in different literature.
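A hedged configuration sketch of the CNN -> DenseLayer case (all layer sizes here are hypothetical; the (inputHeight, inputWidth, numChannels) constructor arguments must match the convolution layer's output shape):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.preprocessor.CnnToFeedForwardPreProcessor;

// Suppose the convolution layer emits activations of shape [numExamples, 20, 12, 12]
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new ConvolutionLayer.Builder(5, 5).nIn(1).nOut(20).build())
        .layer(1, new DenseLayer.Builder().nIn(12 * 12 * 20).nOut(100).build())
        // Flattens [numExamples, 20, 12, 12] -> [numExamples, 12*12*20]
        .inputPreProcessor(1, new CnnToFeedForwardPreProcessor(12, 12, 20))
        .build();
```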
CnnToRnnPreProcessor
A preprocessor to allow CNN and RNN layers to be used together.
For example, ConvolutionLayer -> GravesLSTM. Functionally equivalent to combining CnnToFeedForwardPreProcessor + FeedForwardToRnnPreProcessor. Specifically, this does two things: (a) reshapes 4d activations out of the CNN layer (with shape [timeSeriesLength*miniBatchSize, numChannels, inputHeight, inputWidth]) into 3d (time series) activations (with shape [miniBatchSize, inputHeight*inputWidth*numChannels, timeSeriesLength]) for use in RNN layers; (b) reshapes 3d epsilons (weights*deltas) out of the RNN layer (with shape [miniBatchSize, inputHeight*inputWidth*numChannels, timeSeriesLength]) into 4d epsilons (with shape [miniBatchSize*timeSeriesLength, numChannels, inputHeight, inputWidth]) suitable to feed into CNN layers.
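A hedged sketch of the ConvolutionLayer -> GravesLSTM case (the 12x12 spatial output and all layer widths are hypothetical; the LSTM's nIn must equal inputHeight*inputWidth*numChannels):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.preprocessor.CnnToRnnPreProcessor;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new ConvolutionLayer.Builder(5, 5).nIn(3).nOut(16).build())
        // [timeSeriesLength*miniBatchSize, 16, 12, 12] -> [miniBatchSize, 12*12*16, timeSeriesLength]
        .layer(1, new GravesLSTM.Builder().nIn(12 * 12 * 16).nOut(64).build())
        .inputPreProcessor(1, new CnnToRnnPreProcessor(12, 12, 16))
        .build();
```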
ComposableInputPreProcessor
Composable input pre-processor: applies several InputPreProcessors in sequence.
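A hedged sketch of composing preprocessors via the varargs constructor. Pairing a UnitVarianceProcessor with a CnnToFeedForwardPreProcessor here is a hypothetical combination, and the no-argument UnitVarianceProcessor constructor and all sizes are assumptions:

```java
import org.deeplearning4j.nn.conf.InputPreProcessor;
import org.deeplearning4j.nn.conf.preprocessor.CnnToFeedForwardPreProcessor;
import org.deeplearning4j.nn.conf.preprocessor.ComposableInputPreProcessor;
import org.deeplearning4j.nn.conf.preprocessor.UnitVarianceProcessor;

// Each component's preProcess step runs in the given order on the forward
// pass; the corresponding backprop steps are applied for the epsilons.
InputPreProcessor chain = new ComposableInputPreProcessor(
        new UnitVarianceProcessor(),                   // assumed no-arg constructor
        new CnnToFeedForwardPreProcessor(12, 12, 20)); // hypothetical sizes
```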
FeedForwardToCnn3DPreProcessor
A preprocessor to allow 3D CNN and standard feed-forward network layers to be used together.
For example, DenseLayer -> Convolution3D. This does two things: (a) reshapes 2d activations out of the feed-forward layer (with shape [numExamples, inputDepth*inputHeight*inputWidth*numChannels]) into 5d activations (with shape [numExamples, numChannels, inputDepth, inputHeight, inputWidth]) suitable to feed into CNN layers; (b) reshapes 5d epsilons from the 3D CNN layer (with shape [numExamples, numChannels, inputDepth, inputHeight, inputWidth]) into 2d epsilons (with shape [numExamples, inputDepth*inputHeight*inputWidth*numChannels]) for use in the feed-forward layer.
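The inverse shape contract of Cnn3DToFeedForwardPreProcessor, again as a minimal ND4J sketch with hypothetical sizes:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Hypothetical sizes: dense output of width 2*4*10*10 = 800 per example
INDArray denseOut = Nd4j.rand(new int[]{8, 800});      // [numExamples, inputDepth*inputHeight*inputWidth*numChannels]
INDArray cnn3dIn = denseOut.reshape(8, 2, 4, 10, 10);  // [numExamples, numChannels, inputDepth, inputHeight, inputWidth]
```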
FeedForwardToCnnPreProcessor
A preprocessor to allow CNN and standard feed-forward network layers to be used together.
For example, DenseLayer -> CNN. This does two things: (a) reshapes 2d activations out of the feed-forward layer (with shape [numExamples, inputHeight*inputWidth*numChannels]) into 4d activations (with shape [numExamples, numChannels, inputHeight, inputWidth]) suitable to feed into CNN layers; (b) reshapes 4d epsilons (weights*deltas) from the CNN layer (with shape [numExamples, numChannels, inputHeight, inputWidth]) into 2d epsilons (with shape [numExamples, inputHeight*inputWidth*numChannels]) for use in the feed-forward layer. Note: numChannels is equivalent to channels or featureMaps referenced in different literature.
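A hedged sketch of the DenseLayer -> CNN case (the 28x28 single-channel image sizes are hypothetical; the dense layer's nOut must equal inputHeight*inputWidth*numChannels):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.preprocessor.FeedForwardToCnnPreProcessor;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new DenseLayer.Builder().nIn(784).nOut(784).build())
        // [numExamples, 784] -> [numExamples, 1, 28, 28]
        .layer(1, new ConvolutionLayer.Builder(5, 5).nIn(1).nOut(20).build())
        .inputPreProcessor(1, new FeedForwardToCnnPreProcessor(28, 28, 1))
        .build();
```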
FeedForwardToRnnPreProcessor
A preprocessor to allow RNN and feed-forward network layers to be used together.
For example, DenseLayer -> GravesLSTM. This does two things: (a) reshapes 2d activations out of the feed-forward layer (with shape [miniBatchSize*timeSeriesLength, layerSize]) into 3d activations (with shape [miniBatchSize, layerSize, timeSeriesLength]) suitable to feed into RNN layers; (b) reshapes 3d epsilons (weights*deltas) from the RNN layer (with shape [miniBatchSize, layerSize, timeSeriesLength]) into 2d epsilons (with shape [miniBatchSize*timeSeriesLength, layerSize]) for use in the feed-forward layer.
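A hedged sketch of the DenseLayer -> GravesLSTM case (layer widths are hypothetical; this preprocessor is shape-only, so its constructor takes no arguments):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.preprocessor.FeedForwardToRnnPreProcessor;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new DenseLayer.Builder().nIn(100).nOut(100).build())
        // [miniBatchSize*timeSeriesLength, 100] -> [miniBatchSize, 100, timeSeriesLength]
        .layer(1, new GravesLSTM.Builder().nIn(100).nOut(64).build())
        .inputPreProcessor(1, new FeedForwardToRnnPreProcessor())
        .build();
```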
RnnToCnnPreProcessor
A preprocessor to allow RNN and CNN layers to be used together.
For example, time series (video) input -> ConvolutionLayer, or conceivably GravesLSTM -> ConvolutionLayer. Functionally equivalent to combining RnnToFeedForwardPreProcessor + FeedForwardToCnnPreProcessor. Specifically, this does two things: (a) reshapes 3d activations out of the RNN layer (with shape [miniBatchSize, numChannels*inputHeight*inputWidth, timeSeriesLength]) into 4d (CNN) activations (with shape [miniBatchSize*timeSeriesLength, numChannels, inputHeight, inputWidth]); (b) reshapes 4d epsilons (weights*deltas) out of the CNN layer (with shape [miniBatchSize*timeSeriesLength, numChannels, inputHeight, inputWidth]) into 3d epsilons (with shape [miniBatchSize, numChannels*inputHeight*inputWidth, timeSeriesLength]) suitable to feed into RNN layers.
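A hedged sketch of the GravesLSTM -> ConvolutionLayer case (the 3-channel 28x28 frame sizes are hypothetical; the RNN's nOut must equal numChannels*inputHeight*inputWidth):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.preprocessor.RnnToCnnPreProcessor;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new GravesLSTM.Builder().nIn(200).nOut(3 * 28 * 28).build())
        // [miniBatchSize, 3*28*28, timeSeriesLength] -> [miniBatchSize*timeSeriesLength, 3, 28, 28]
        .layer(1, new ConvolutionLayer.Builder(5, 5).nIn(3).nOut(16).build())
        .inputPreProcessor(1, new RnnToCnnPreProcessor(28, 28, 3))
        .build();
```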
RnnToFeedForwardPreProcessor
A preprocessor to allow RNN and feed-forward network layers to be used together.
For example, GravesLSTM -> OutputLayer or GravesLSTM -> DenseLayer. This does two things: (a) reshapes 3d activations out of the RNN layer (with shape [miniBatchSize, layerSize, timeSeriesLength]) into 2d activations (with shape [miniBatchSize*timeSeriesLength, layerSize]) suitable for use in feed-forward layers; (b) reshapes 2d epsilons (weights*deltas) from the feed-forward layer (with shape [miniBatchSize*timeSeriesLength, layerSize]) into 3d epsilons (with shape [miniBatchSize, layerSize, timeSeriesLength]) for use in the RNN layer.
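A hedged sketch of the GravesLSTM -> DenseLayer case (layer widths are hypothetical; like its counterpart, this preprocessor's constructor takes no arguments):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.preprocessor.RnnToFeedForwardPreProcessor;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new GravesLSTM.Builder().nIn(50).nOut(64).build())
        // [miniBatchSize, 64, timeSeriesLength] -> [miniBatchSize*timeSeriesLength, 64]
        .layer(1, new DenseLayer.Builder().nIn(64).nOut(32).build())
        .inputPreProcessor(1, new RnnToFeedForwardPreProcessor())
        .build();
```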
UnitVarianceProcessor
Unit variance operation.