A base class for implementing Deeplearning4j layers using SameDiff. These layers are not scoring/output layers; that is, they should be used only as intermediate layers in a network.
To implement an output layer, extend SameDiffOutputLayer instead. Note also that if multiple inputs are required, it is possible to implement a vertex instead: see SameDiffVertex.
To implement a Deeplearning4j layer using SameDiff, extend this class.
There are 4 required methods:
- defineLayer: Define the forward pass for the layer
- defineParameters: Define the layer's parameters in a way suitable for DL4J
- initializeParameters: Set the initial parameter values for the layer, if required
- getOutputType: Determine the type of output/activations for the layer (without actually executing its forward pass)
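As a sketch of how the four required methods fit together, the following minimal dense-style layer defines out = tanh(in * W + b). This is illustrative only: the class and field names (MinimalSameDiffDense, nIn, nOut) are assumptions, exact method signatures vary slightly across DL4J versions, and real code would typically delegate weight initialization to the configured WeightInit rather than the hand-rolled init shown here.

```java
import java.util.Map;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.samediff.SDLayerParams;
import org.deeplearning4j.nn.conf.layers.samediff.SameDiffLayer;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Hypothetical minimal layer: a fully connected layer with tanh activation.
public class MinimalSameDiffDense extends SameDiffLayer {

    private long nIn;   // illustrative fields, set via the layer's builder in real code
    private long nOut;

    public MinimalSameDiffDense(long nIn, long nOut) {
        this.nIn = nIn;
        this.nOut = nOut;
    }

    // Required method 1: define the forward pass as a SameDiff graph
    @Override
    public SDVariable defineLayer(SameDiff sd, SDVariable layerInput,
                                  Map<String, SDVariable> paramTable, SDVariable mask) {
        SDVariable w = paramTable.get("W");
        SDVariable b = paramTable.get("b");
        return sd.math().tanh(layerInput.mmul(w).add(b));
    }

    // Required method 2: declare parameter names and shapes so DL4J can allocate them
    @Override
    public void defineParameters(SDLayerParams params) {
        params.addWeightParam("W", nIn, nOut);
        params.addBiasParam("b", 1, nOut);
    }

    // Required method 3: write initial values into the pre-allocated parameter arrays
    @Override
    public void initializeParameters(Map<String, INDArray> params) {
        params.get("b").assign(0.0);
        // Simple scaled-Gaussian init for illustration only
        params.get("W").assign(Nd4j.randn(params.get("W").shape()).muli(Math.sqrt(2.0 / nIn)));
    }

    // Required method 4: report the output activations type without running the layer
    @Override
    public InputType getOutputType(int layerIndex, InputType inputType) {
        return InputType.feedForward(nOut);
    }
}
```

Once defined, such a layer can be used in a NeuralNetConfiguration list like any built-in DL4J layer.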
Furthermore, there are 3 optional methods:
- setNIn(InputType inputType, boolean override): If implemented, sets the number of inputs to the layer during network initialization
- getPreProcessorForInputType: Returns the preprocessor that should be added (if any) for the given input type
- applyGlobalConfigToLayer: Applies any global configuration options (weight init, activation functions, etc.) to the layer's configuration
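Two of the optional methods might be sketched as follows, continuing the hypothetical dense layer above. Again, exact signatures and helper classes can differ between DL4J versions; the nIn and activation fields are assumptions of this sketch.

```java
// Illustrative overrides of two optional methods for a hypothetical dense-style layer.

// Infer nIn from the previous layer's output type during network initialization,
// so users need not specify it explicitly.
@Override
public void setNIn(InputType inputType, boolean override) {
    if (nIn <= 0 || override) {
        InputType.InputTypeFeedForward ff = (InputType.InputTypeFeedForward) inputType;
        this.nIn = ff.getSize();
    }
}

// Inherit global configuration (here, the activation function) when this layer
// did not set its own value.
@Override
public void applyGlobalConfigToLayer(NeuralNetConfiguration.Builder globalConfig) {
    if (this.activation == null) {
        this.activation = SameDiffLayerUtils.fromIActivation(globalConfig.getActivationFn());
    }
}
```

getPreProcessorForInputType follows the same pattern: override it only when the layer needs an InputPreProcessor (for example, to flatten convolutional activations) for a particular input type.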