public class Dropout extends Object implements IDropout
new Dropout(x)
will keep an input activation with probability x, and set it to 0 with probability 1-x.
Note 1: As with all IDropout instances, dropout is applied at training time only, and is automatically disabled at
test time (for evaluation, etc.).
Note 2: Care should be taken when setting low retain-probability values, as too much information may be
lost with aggressive (very low) retain probabilities.
Note 3: Frequently, dropout is not applied to (or has a higher retain probability for) the input (first) layer.
Dropout is also often not applied to output layers.
Note 4: Implementation detail (most users can ignore): DL4J uses inverted dropout, as described here:
http://cs231n.github.io/neural-networks-2/
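Note 4 refers to inverted dropout, where kept activations are scaled by 1/p at training time so that no rescaling is needed at test time. A minimal plain-Java sketch of the technique (not DL4J's actual implementation, which operates on INDArrays) might look like:

```java
import java.util.Random;

class InvertedDropoutSketch {
    // Inverted dropout: keep each activation with probability p, zero it
    // otherwise, and scale kept values by 1/p so the expected activation is
    // unchanged and no rescaling is needed at test time.
    static double[] applyDropout(double[] activations, double p, Random rng) {
        double[] out = new double[activations.length];
        for (int i = 0; i < activations.length; i++) {
            out[i] = rng.nextDouble() < p ? activations[i] / p : 0.0;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] acts = {1.0, 2.0, 3.0, 4.0};
        double[] dropped = applyDropout(acts, 0.5, new Random(42));
        for (double d : dropped) {
            System.out.print(d + " ");
        }
        System.out.println();
    }
}
```

With p = 0.5, roughly half the activations are zeroed and the survivors are doubled, keeping the expected sum of activations the same as without dropout.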
Modifier | Constructor and Description
---|---
 | Dropout(double activationRetainProbability)
protected | Dropout(double activationRetainProbability, ISchedule activationRetainProbabilitySchedule)
 | Dropout(ISchedule activationRetainProbabilitySchedule)
Modifier and Type | Method and Description
---|---
INDArray | applyDropout(INDArray inputActivations, INDArray output, int iteration, int epoch, LayerWorkspaceMgr workspaceMgr)
INDArray | backprop(INDArray gradAtOutput, INDArray gradAtInput, int iteration, int epoch) Perform backprop.
void | clear() Clear the internal state (for example, dropout mask) if any is present
Dropout | clone()
protected void | initializeHelper() Initialize the CuDNN dropout helper, if possible
public Dropout(double activationRetainProbability)
Parameters:
activationRetainProbability - Probability of retaining an activation - see Dropout javadoc

public Dropout(ISchedule activationRetainProbabilitySchedule)
Parameters:
activationRetainProbabilitySchedule - Schedule for probability of retaining an activation - see Dropout javadoc

protected Dropout(double activationRetainProbability, ISchedule activationRetainProbabilitySchedule)
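The ISchedule constructor lets the retain probability vary by epoch rather than stay fixed. To illustrate the idea without depending on DL4J's ISchedule implementations, here is a hypothetical step schedule in plain Java (the method name and parameters are inventions for this sketch):

```java
class RetainScheduleSketch {
    // Hypothetical step schedule: the retain probability starts at initialP
    // and is multiplied by decayRate once every 'step' epochs, mimicking the
    // idea behind passing an ISchedule to the Dropout constructor.
    static double retainProbability(double initialP, double decayRate, int step, int epoch) {
        return initialP * Math.pow(decayRate, epoch / step);
    }

    public static void main(String[] args) {
        for (int epoch = 0; epoch < 6; epoch++) {
            System.out.printf("epoch %d: p = %.3f%n", epoch,
                    retainProbability(0.9, 0.5, 2, epoch));
        }
    }
}
```

A schedule like this can be useful when a network tolerates little dropout early in training but benefits from stronger regularization later.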
protected void initializeHelper()
public INDArray applyDropout(INDArray inputActivations, INDArray output, int iteration, int epoch, LayerWorkspaceMgr workspaceMgr)
Specified by:
applyDropout in interface IDropout
Parameters:
inputActivations - Input activations array
output - The result array (same as inputArray for in-place ops) for the post-dropout activations
iteration - Current iteration number
epoch - Current epoch number
workspaceMgr - Workspace manager, if any storage is required (use ArrayType.INPUT)

public INDArray backprop(INDArray gradAtOutput, INDArray gradAtInput, int iteration, int epoch)
Specified by:
backprop in interface IDropout
Parameters:
gradAtOutput - Gradients at the output of the dropout op - i.e., dL/dOut
gradAtInput - Gradients at the input of the dropout op - i.e., dL/dIn. Use the same array as gradAtOutput to apply the backprop gradient in-place
iteration - Current iteration
epoch - Current epoch

public void clear()
Specified by:
clear in interface IDropout
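During backprop, dropout simply multiplies the output gradient element-wise by the same mask (including the 1/p scale for inverted dropout) that was used in the forward pass, since that mask is each element's local derivative. A plain-Java sketch of the idea, again not DL4J's actual INDArray-based implementation:

```java
import java.util.Random;

class DropoutBackpropSketch {
    // Forward pass records the mask (0 or 1/p per element); backprop reuses
    // it, since dOut/dIn for each element is exactly that mask value.
    static double[] makeMask(int n, double p, Random rng) {
        double[] mask = new double[n];
        for (int i = 0; i < n; i++) {
            mask[i] = rng.nextDouble() < p ? 1.0 / p : 0.0;
        }
        return mask;
    }

    static double[] backprop(double[] gradAtOutput, double[] mask) {
        double[] gradAtInput = new double[gradAtOutput.length];
        for (int i = 0; i < gradAtOutput.length; i++) {
            gradAtInput[i] = gradAtOutput[i] * mask[i];
        }
        return gradAtInput;
    }
}
```

Storing the mask between the forward and backward passes is the "internal state" that clear() above is responsible for discarding.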
Copyright © 2018. All rights reserved.