Class RMSProp

java.lang.Object
smile.deep.optimizer.RMSProp
All Implemented Interfaces:
Serializable, Optimizer

public class RMSProp extends Object implements Optimizer
RMSProp optimizer with adaptive learning rate. RMSProp keeps a moving average of squared gradients and uses it to normalize the gradient. This normalization balances the effective step size: it shrinks the step for large gradients to avoid exploding updates and grows it for small gradients to avoid vanishing ones.
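Concretely, for each weight with gradient g, RMSProp maintains a running average v of squared gradients, discounted by rho, and divides the raw step by the square root of v. A minimal standalone sketch of that update rule (plain Java arrays for illustration; this is not the class's internal code):

      // One RMSProp step over a raw parameter vector w with gradient g.
      // v carries the moving average of squared gradients across calls.
      public static void rmspropStep(double[] w, double[] g, double[] v,
                                     double learningRate, double rho, double epsilon) {
          for (int i = 0; i < w.length; i++) {
              // Discounted moving average of the squared gradient.
              v[i] = rho * v[i] + (1.0 - rho) * g[i] * g[i];
              // Normalized step: large v shrinks the step, small v grows it.
              w[i] -= learningRate * g[i] / (Math.sqrt(v[i]) + epsilon);
          }
      }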
  • Constructor Details

    • RMSProp

      public RMSProp()
      Constructor.
    • RMSProp

      public RMSProp(smile.math.TimeFunction learningRate)
      Constructor.
      Parameters:
      learningRate - the learning rate.
    • RMSProp

      public RMSProp(smile.math.TimeFunction learningRate, double rho, double epsilon)
      Constructor.
      Parameters:
      learningRate - the learning rate.
      rho - the discounting factor for the moving average of squared gradients.
      epsilon - a small constant for numerical stability.
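      For example, a sketch of constructing the optimizer at each arity. TimeFunction.constant is assumed to be the constant-schedule factory in smile.math, and the values rho = 0.9 and epsilon = 1e-7 are illustrative choices, not documented defaults:

        import smile.deep.optimizer.RMSProp;
        import smile.math.TimeFunction;

        public class RMSPropExample {
            public static void main(String[] args) {
                RMSProp byDefault = new RMSProp();  // library defaults
                // TimeFunction.constant: assumed constant-schedule factory in smile.math.
                RMSProp fixedRate = new RMSProp(TimeFunction.constant(0.001));
                RMSProp custom = new RMSProp(TimeFunction.constant(0.001), 0.9, 1e-7);
                System.out.println(custom);  // toString reports the configuration
            }
        }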
  • Method Details

    • toString

      public String toString()
      Overrides:
      toString in class Object
    • update

      public void update(Layer layer, int m, int t)
      Description copied from interface: Optimizer
      Updates a layer.
      Specified by:
      update in interface Optimizer
      Parameters:
      layer - a neural network layer.
      m - the size of the mini-batch.
      t - the time step, i.e. the number of training iterations so far.
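      In a typical training loop the optimizer is applied once per mini-batch, after backpropagation has accumulated the gradients inside the layer. A sketch of that calling pattern follows; the Layer import path and the elided forward/backward pass are assumptions about the surrounding code, not part of this API:

        import smile.deep.optimizer.Optimizer;
        import smile.base.mlp.Layer;  // assumed import path for the Layer type

        // One optimizer step per mini-batch.
        public static void step(Optimizer optimizer, Layer layer, int batchSize, int t) {
            // ... forward pass and backpropagation over the mini-batch go here,
            // accumulating the weight gradients inside the layer ...
            optimizer.update(layer, batchSize, t);  // t = training iterations so far
        }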