public class Dropout extends Object implements IDropout
new Dropout(x)
will keep an input activation with probability x, and set it to 0 with probability 1-x.
Note 1: As with all IDropout instances, dropout is applied at training time only, and is automatically disabled at
test time (for evaluation, etc.).
Note 2: Care should be taken when setting low retain probability values, as too much information may be
lost with aggressive (very low) dropout values.
Note 3: Frequently, dropout is not applied to (or has a higher retain probability for) the input (first) layer.
Dropout is also often not applied to output layers.
Note 4: Implementation detail (most users can ignore): DL4J uses inverted dropout, as described here:
http://cs231n.github.io/neural-networks-2/
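For intuition, the following is a minimal sketch of the inverted dropout scheme mentioned in Note 4 (plain Java, purely illustrative; this is not DL4J's internal implementation):

```java
import java.util.Random;

/** Minimal illustration of inverted dropout; not DL4J's internal implementation. */
public class InvertedDropoutSketch {
    public static void main(String[] args) {
        double p = 0.5;                               // retain probability, as in new Dropout(0.5)
        double[] activations = {1.0, 2.0, 3.0, 4.0};
        Random rng = new Random(12345);

        // Training time: drop each activation with probability 1-p, and scale the
        // survivors by 1/p so the expected value of each activation is unchanged.
        // Because of this scaling, no adjustment is needed at test time:
        // activations simply pass through unmodified.
        double[] out = new double[activations.length];
        for (int i = 0; i < activations.length; i++) {
            out[i] = (rng.nextDouble() < p) ? activations[i] / p : 0.0;
        }
    }
}
```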
Modifier and Type | Field and Description |
---|---|
protected boolean | helperAllowFallback: When using CuDNN and an error is encountered, should fallback to the non-CuDNN implementation be allowed? If set to false, an exception in CuDNN will be propagated back to the user. |
Modifier | Constructor and Description |
---|---|
| Dropout(double activationRetainProbability) |
protected | Dropout(double activationRetainProbability, ISchedule activationRetainProbabilitySchedule) |
| Dropout(ISchedule activationRetainProbabilitySchedule) |
Modifier and Type | Method and Description |
---|---|
INDArray | applyDropout(INDArray inputActivations, INDArray output, int iteration, int epoch, LayerWorkspaceMgr workspaceMgr) |
INDArray | backprop(INDArray gradAtOutput, INDArray gradAtInput, int iteration, int epoch): Perform backprop. |
void | clear(): Clear the internal state (for example, the dropout mask), if any is present. |
Dropout | clone() |
Dropout | helperAllowFallback(boolean allowFallback): When using a helper (CuDNN or MKLDNN in some cases) and an error is encountered, should fallback to the non-helper implementation be allowed? If set to false, an exception in the helper will be propagated back to the user. |
protected void | initializeHelper(org.nd4j.linalg.api.buffer.DataType dataType): Initialize the CuDNN dropout helper, if possible. |
protected boolean helperAllowFallback
public Dropout(double activationRetainProbability)
activationRetainProbability - Probability of retaining an activation - see Dropout javadoc

public Dropout(ISchedule activationRetainProbabilitySchedule)
activationRetainProbabilitySchedule - Schedule for the probability of retaining an activation - see Dropout javadoc

protected Dropout(double activationRetainProbability, ISchedule activationRetainProbabilitySchedule)
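For example, a Dropout instance can be constructed with either a fixed or a scheduled retain probability (a sketch using ND4J's MapSchedule and ScheduleType; the instance is then typically passed to a layer or network configuration via the dropOut(...) builder method):

```java
import org.deeplearning4j.nn.conf.dropout.Dropout;
import org.nd4j.linalg.schedule.MapSchedule;
import org.nd4j.linalg.schedule.ScheduleType;

public class DropoutConstructionExample {
    public static void main(String[] args) {
        // Fixed retain probability: keep each activation with probability 0.8
        Dropout fixed = new Dropout(0.8);

        // Scheduled retain probability: 0.9 for epochs 0-9, then 0.7 from epoch 10 on
        Dropout scheduled = new Dropout(
                new MapSchedule.Builder(ScheduleType.EPOCH)
                        .add(0, 0.9)
                        .add(10, 0.7)
                        .build());
    }
}
```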
public Dropout helperAllowFallback(boolean allowFallback)
allowFallback - Whether fallback to the non-helper implementation should be used

protected void initializeHelper(org.nd4j.linalg.api.buffer.DataType dataType)
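For example, helper fallback can be disabled so that any CuDNN error is propagated immediately (the setter returns the Dropout instance, so it can be chained onto the constructor):

```java
import org.deeplearning4j.nn.conf.dropout.Dropout;

// Propagate CuDNN/helper errors to the user instead of silently falling
// back to the built-in implementation
Dropout dropout = new Dropout(0.5).helperAllowFallback(false);
```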
public INDArray applyDropout(INDArray inputActivations, INDArray output, int iteration, int epoch, LayerWorkspaceMgr workspaceMgr)
Specified by: applyDropout in interface IDropout
inputActivations - Input activations array
output - The result array (same as inputActivations for in-place ops) for the post-dropout activations
iteration - Current iteration number
epoch - Current epoch number
workspaceMgr - Workspace manager, if any storage is required (use ArrayType.INPUT)

public INDArray backprop(INDArray gradAtOutput, INDArray gradAtInput, int iteration, int epoch)
Description copied from interface: IDropout
Perform backprop.
Specified by: backprop in interface IDropout
gradAtOutput - Gradients at the output of the dropout op - i.e., dL/dOut
gradAtInput - Gradients at the input of the dropout op - i.e., dL/dIn. Use the same array as gradAtOutput to apply the backprop gradient in-place
iteration - Current iteration
epoch - Current epoch

public void clear()
Description copied from interface: IDropout
Clear the internal state (for example, the dropout mask), if any is present.
Specified by: clear in interface IDropout
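These methods are normally invoked internally by DL4J's layers during training, but the contract can be illustrated with direct calls. A minimal sketch, assuming LayerWorkspaceMgr.noWorkspaces() (DL4J's factory for workspace-free execution):

```java
import org.deeplearning4j.nn.conf.dropout.Dropout;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class DropoutForwardBackwardSketch {
    public static void main(String[] args) {
        Dropout dropout = new Dropout(0.5);

        // Forward pass: apply dropout to some activations (iteration 0, epoch 0).
        // A copy is passed as the output array; reuse inputActivations for in-place ops.
        INDArray activations = Nd4j.rand(4, 5);
        INDArray dropped = dropout.applyDropout(
                activations, activations.dup(), 0, 0, LayerWorkspaceMgr.noWorkspaces());

        // Backward pass: mask the upstream gradients dL/dOut. Passing the same
        // array for gradAtInput applies the backprop gradient in-place, as
        // described above.
        INDArray gradAtOutput = Nd4j.rand(4, 5);
        INDArray gradAtInput = dropout.backprop(gradAtOutput, gradAtOutput, 0, 0);

        // Release the stored dropout mask once the pass is complete
        dropout.clear();

        System.out.println(dropped);
        System.out.println(gradAtInput);
    }
}
```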