Package org.nd4j.linalg.activations
Interface IActivation
-
- All Superinterfaces:
Serializable
- All Known Implementing Classes:
ActivationCube
ActivationELU
ActivationGELU
ActivationHardSigmoid
ActivationHardTanH
ActivationIdentity
ActivationLReLU
ActivationMish
ActivationPReLU
ActivationRationalTanh
ActivationRectifiedTanh
ActivationReLU
ActivationReLU6
ActivationRReLU
ActivationSELU
ActivationSigmoid
ActivationSoftmax
ActivationSoftPlus
ActivationSoftSign
ActivationSwish
ActivationTanH
ActivationThresholdedReLU
BaseActivationFunction
public interface IActivation extends Serializable
-
-
Method Summary
All Methods  Instance Methods  Abstract Methods

Modifier and Type  Method  Description

Pair<INDArray,INDArray>
backprop(INDArray in, INDArray epsilon)
Backpropagate the errors through the activation function, given input z and epsilon dL/da.
Returns 2 INDArrays:
(a) the gradient dL/dz, calculated from dL/da, and
(b) the parameter gradients dL/dW, where W denotes the weights in the activation function.

INDArray
getActivation(INDArray in, boolean training)
Carry out the activation function on the input array (usually known as 'preOut' or 'z'). Implementations must overwrite "in", transform it in place, and return "in". Can support separate behaviour during testing.

int
numParams(int inputSize)
-
-
-
Method Detail
-
getActivation
INDArray getActivation(INDArray in, boolean training)
Carry out the activation function on the input array (usually known as 'preOut' or 'z'). Implementations must overwrite "in", transform it in place, and return "in". Can support separate behaviour during testing.
Parameters:
in - input array
training - true when training
Returns:
transformed activation
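A minimal sketch of the getActivation contract, using a plain double[] in place of an INDArray (to stay self-contained without the ND4J dependency) and ReLU as the example function. The class and method names here are hypothetical illustrations, not part of the ND4J API; the point is the in-place contract: the input array is overwritten and the same reference is returned.

```java
import java.util.Arrays;

// Hypothetical sketch of the IActivation.getActivation contract,
// with double[] standing in for INDArray and ReLU as the example.
public class ReluSketch {

    // Overwrite "in" with max(0, in) elementwise, transform in place,
    // and return the same reference, mirroring the documented contract.
    static double[] getActivation(double[] in, boolean training) {
        for (int i = 0; i < in.length; i++) {
            in[i] = Math.max(0.0, in[i]);
        }
        return in; // same array reference: the transform was in place
    }

    public static void main(String[] args) {
        double[] z = {-1.5, 0.0, 2.0};
        double[] a = getActivation(z, false);
        System.out.println(a == z);            // true: in-place contract
        System.out.println(Arrays.toString(a)); // [0.0, 0.0, 2.0]
    }
}
```

Note that because the transform is in place, callers that still need the pre-activation values (e.g. for backprop) must duplicate the array before calling this method.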
-
backprop
Pair<INDArray,INDArray> backprop(INDArray in, INDArray epsilon)
Backpropagate the errors through the activation function, given input z and epsilon dL/da.
Returns 2 INDArrays:
(a) The gradient dL/dz, calculated from dL/da, and
(b) The parameter gradients dL/dW, where W denotes the weights in the activation function. For activation functions with no trainable parameters, this will be null.
Parameters:
in - Input, before applying the activation function (z, or 'preOut')
epsilon - Gradient to be backpropagated: dL/da, where L is the loss function
Returns:
dL/dz and dL/dW, for weights W (null if the activation function has no weights)
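A minimal sketch of the backprop contract for a parameter-free activation, again using double[] in place of INDArray and Map.Entry in place of ND4J's Pair (both substitutions are assumptions made to keep the example self-contained). For ReLU, dL/dz = dL/da · da/dz, where da/dz is 1 where z > 0 and 0 elsewhere; the second element of the pair is null because ReLU has no trainable parameters.

```java
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Arrays;
import java.util.Map;

// Hypothetical sketch of the IActivation.backprop contract for ReLU,
// with double[] standing in for INDArray and Map.Entry for Pair.
public class ReluBackpropSketch {

    // in:      pre-activation input z ('preOut')
    // epsilon: dL/da, the gradient flowing back from the loss
    static Map.Entry<double[], double[]> backprop(double[] in, double[] epsilon) {
        double[] dLdz = new double[in.length];
        for (int i = 0; i < in.length; i++) {
            // Chain rule, elementwise: da/dz is 1 where z > 0, else 0.
            dLdz[i] = in[i] > 0.0 ? epsilon[i] : 0.0;
        }
        // Second element is the weight gradient dL/dW: null here,
        // since ReLU has no trainable parameters.
        return new SimpleImmutableEntry<>(dLdz, null);
    }

    public static void main(String[] args) {
        Map.Entry<double[], double[]> grads =
                backprop(new double[]{-1.0, 2.0}, new double[]{0.5, 0.5});
        System.out.println(Arrays.toString(grads.getKey())); // [0.0, 0.5]
        System.out.println(grads.getValue());                // null
    }
}
```

A parametric activation such as ActivationPReLU would instead return a non-null second element holding the gradients of its learnable weights.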
-
numParams
int numParams(int inputSize)
-
-