By default, the computation in a FloatLayer is re-evaluated every time
the FloatLayer is used by another operation.
This behavior is very inefficient if there are diamond dependencies in a neural network.
It is wise to use CumulativeFloatLayers instead of FloatLayers in such a neural network.
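The diamond-dependency problem above can be illustrated with a minimal sketch in plain Scala. This is not the DeepLearning.scala API: `sharedLayer`, `withoutCaching`, and `withCaching` are hypothetical names, and the caching is modeled with a simple `lazy val` to mimic the one-time evaluation a CumulativeFloatLayer provides.

```scala
// A minimal sketch (not the DeepLearning.scala API) of why a node shared by
// two downstream operations (a "diamond") is re-evaluated without caching.
object DiamondDemo {
  var evaluations = 0

  // A "layer" modeled as a thunk returning a Float.
  def sharedLayer(): Float = {
    evaluations += 1 // count how many times the shared computation runs
    2.0f
  }

  // Two operations both depend on the shared layer: it runs twice.
  def withoutCaching(): Float = sharedLayer() + sharedLayer()

  // Caching the result, as a CumulativeFloatLayer would, runs it once.
  def withCaching(): Float = {
    lazy val cached = sharedLayer()
    cached + cached
  }
}
```

Calling `DiamondDemo.withoutCaching()` increments `evaluations` twice, while `DiamondDemo.withCaching()` increments it only once, even though both return the same value.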
A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Float.
Author:
杨博 (Yang Bo)