Package org.nd4j.linalg.activations.impl

Class ActivationLReLU

java.lang.Object
    org.nd4j.linalg.activations.BaseActivationFunction
        org.nd4j.linalg.activations.impl.ActivationLReLU

All Implemented Interfaces:
    Serializable, IActivation
public class ActivationLReLU
extends BaseActivationFunction

Leaky ReLU: f(x) = max(0, x) + alpha * min(0, x), where alpha defaults to 0.01.

See Also:
    Serialized Form
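A minimal usage sketch of the class (the class name LReLUExample and the sample values are illustrative, not part of the API):

    import org.nd4j.linalg.activations.impl.ActivationLReLU;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class LReLUExample {
        public static void main(String[] args) {
            // Default alpha (0.01); use new ActivationLReLU(0.3) for a custom slope
            ActivationLReLU lrelu = new ActivationLReLU();

            // f(x) = max(0, x) + alpha * min(0, x): negative inputs are scaled by alpha
            INDArray z = Nd4j.create(new double[] {-2.0, -0.5, 0.0, 0.5, 2.0});
            INDArray a = lrelu.getActivation(z, false);

            System.out.println(a); // [-0.02, -0.005, 0.0, 0.005, 2.0]
        }
    }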
Field Summary

Fields
Modifier and Type    Field            Description
static double        DEFAULT_ALPHA
Constructor Summary

Constructors
Constructor                      Description
ActivationLReLU()
ActivationLReLU(double alpha)
Method Summary
All Methods Instance Methods Concrete Methods Modifier and Type Method Description Pair<INDArray,INDArray>
backprop(INDArray in, INDArray epsilon)
Backpropagate the errors through the activation function, given input z and epsilon dL/da.
Returns 2 INDArrays:
(a) The gradient dL/dz, calculated from dL/da, and
(b) The parameter gradients dL/dW, where w is the weights in the activation function.INDArray
getActivation(INDArray in, boolean training)
Carry out activation function on the input array (usually known as 'preOut' or 'z') Implementations must overwrite "in", transform in place and return "in" Can support separate behaviour during testString
toString()
Methods inherited from class org.nd4j.linalg.activations.BaseActivationFunction
assertShape, numParams
Field Detail

DEFAULT_ALPHA

public static final double DEFAULT_ALPHA

See Also:
    Constant Field Values
Method Detail

getActivation

public INDArray getActivation(INDArray in, boolean training)

Description copied from interface: IActivation
Carry out the activation function on the input array (usually known as 'preOut' or 'z'). Implementations must overwrite "in", transform it in place and return "in". Can support separate behaviour during test.

Parameters:
    in - input array
    training - true when training

Returns:
    transformed activation
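A short sketch of the in-place contract described above, assuming the same imports as the earlier example (values are illustrative):

    INDArray z = Nd4j.create(new double[] {-1.0, 1.0});
    INDArray out = new ActivationLReLU().getActivation(z, false);

    // The returned array is the input array itself, transformed in place
    System.out.println(out == z); // true
    System.out.println(z);        // [-0.01, 1.0]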
backprop

public Pair<INDArray,INDArray> backprop(INDArray in, INDArray epsilon)

Description copied from interface: IActivation
Backpropagate the errors through the activation function, given input z and epsilon dL/da.
Returns 2 INDArrays:
(a) the gradient dL/dz, calculated from dL/da, and
(b) the parameter gradients dL/dW, where w is the weights in the activation function. For activation functions with no gradients, this will be null.

Parameters:
    in - input, before applying the activation function (z, or 'preOut')
    epsilon - gradient to be backpropagated: dL/da, where L is the loss function

Returns:
    dL/dz and dL/dW, for weights w (null if activation function has no weights)
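A sketch of calling backprop, assuming the same imports as above plus ND4J's Pair utility (its package varies across versions, e.g. org.nd4j.common.primitives.Pair in recent releases); the values are illustrative:

    ActivationLReLU lrelu = new ActivationLReLU(0.1);

    INDArray z   = Nd4j.create(new double[] {-2.0, 3.0}); // preOut
    INDArray eps = Nd4j.create(new double[] {1.0, 1.0});  // dL/da

    Pair<INDArray, INDArray> grads = lrelu.backprop(z, eps);

    // dL/dz = dL/da * f'(z), where f'(z) = alpha for z < 0 and 1 for z > 0
    System.out.println(grads.getFirst());  // [0.1, 1.0]
    System.out.println(grads.getSecond()); // null: LReLU has no trainable parameters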