com.etsy.conjecture.scalding.train
Aggressiveness parameter for the passive aggressive classifier.
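As an illustration of what the aggressiveness parameter controls, here is a minimal sketch of one passive-aggressive (PA-I) update for a binary linear classifier. The function name and signature are hypothetical, not Conjecture's API; the aggressiveness value caps the step size of each corrective update.

```python
import numpy as np

def pa_update(w, x, y, aggressiveness):
    """One PA-I update; y in {-1, +1}. Hypothetical names for illustration."""
    loss = max(0.0, 1.0 - y * np.dot(w, x))        # hinge loss on this example
    if loss == 0.0:
        return w                                    # passive: margin satisfied, no change
    tau = min(aggressiveness, loss / np.dot(x, x))  # aggressiveness caps the step size
    return w + tau * y * x                          # aggressive: correct the mistake
```

With a small aggressiveness the model changes slowly on mistakes; with a large one each mistake is corrected almost exactly.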
Size of the minibatch for mini-batch training; defaults to 1, which is plain SGD.
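A sketch of what a mini-batch step computes, assuming squared loss for concreteness (function name and loss choice are illustrative, not Conjecture's implementation): the per-example gradients are averaged over the batch, and a batch of size 1 reduces to plain SGD.

```python
import numpy as np

def minibatch_sgd_step(w, X, y, learning_rate):
    """One mini-batch step on squared loss: average the per-example
    gradients over the batch, then take a single gradient step."""
    preds = X @ w
    grad = X.T @ (preds - y) / len(y)   # mean gradient over the minibatch
    return w - learning_rate * grad
```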
A fudge factor so that an "epoch" for the purpose of learning rate computation can span more than one example; in that case the effective epoch is {# examples seen} / examples_per_epoch.
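The effect is just a rescaling of the epoch counter used for learning-rate decay; a minimal sketch (hypothetical name):

```python
def effective_epoch(examples_seen, examples_per_epoch):
    """Epoch counter used for learning-rate decay: advances by
    1/examples_per_epoch per example instead of 1 per example."""
    return examples_seen / examples_per_epoch
```

For example, with examples_per_epoch = 1000, seeing 500 examples advances the epoch to 0.5, so the learning rate decays half as fast as it would with one example per epoch.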
Base of the exponential learning rate (e.g., 0.99^{# examples seen}).
Learning rate parameters for FTRL.
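For context, a minimal per-coordinate FTRL-Proximal sketch in the style of McMahan et al.; the class and parameter names (alpha, beta, l1, l2) are hypothetical and chosen for illustration, not taken from Conjecture. The alpha/beta pair sets the per-coordinate learning rate alpha / (beta + sqrt(sum of squared gradients)).

```python
import math

class FTRLProximal:
    """Per-coordinate FTRL-Proximal sketch (hypothetical names)."""

    def __init__(self, dim, alpha=0.1, beta=1.0, l1=1.0, l2=0.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = [0.0] * dim   # accumulated adjusted gradients
        self.n = [0.0] * dim   # accumulated squared gradients

    def weight(self, i):
        # Closed-form proximal step: L1 drives small coordinates to exactly 0
        if abs(self.z[i]) <= self.l1:
            return 0.0
        sign = -1.0 if self.z[i] < 0 else 1.0
        return -(self.z[i] - sign * self.l1) / (
            (self.beta + math.sqrt(self.n[i])) / self.alpha + self.l2)

    def update(self, i, g):
        # FTRL bookkeeping for coordinate i given gradient g
        sigma = (math.sqrt(self.n[i] + g * g) - math.sqrt(self.n[i])) / self.alpha
        self.z[i] += g - sigma * self.weight(i)
        self.n[i] += g * g
```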
Weight on a Gaussian prior on the parameters, similar to ridge (L2) regularization.
Initial learning rate used for SGD learning.
Number of iterations for sequential gradient descent.
Weight on Laplace regularization: a Laplace prior on the parameters, sparsity-inducing à la lasso (L1).
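The two priors above amount to adding penalty terms to the loss gradient; a sketch under that assumption (function and parameter names are illustrative): the Gaussian prior contributes a ridge-style term that shrinks weights toward zero, while the Laplace prior contributes a lasso-style subgradient that pushes small weights to exactly zero.

```python
import numpy as np

def regularized_gradient(w, grad_loss, gaussian_weight, laplace_weight):
    """Loss gradient plus the prior terms:
    gaussian_weight * w        -- ridge (L2) shrinkage
    laplace_weight * sign(w)   -- lasso (L1) sparsity pressure"""
    return grad_loss + gaussian_weight * w + laplace_weight * np.sign(w)
```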
What type of linear model should be used? Options are:
Choose an optimizer to use.
What kind of learning rate schedule / regularization should we use?
Options:
Aggressiveness of gradient truncation updates, i.e., how much shrinkage is applied to the model's parameters.
Period of gradient truncation updates.
Threshold for applying gradient truncation updates; parameter values smaller than this in magnitude are truncated to zero.
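The three truncation parameters above interact as in this sketch of a truncated-gradient step (hypothetical names; a simplification of the technique from Langford et al.'s truncated gradient): every `period` steps, each weight is shrunk toward zero by the aggressiveness, and anything left below the threshold is zeroed out.

```python
import numpy as np

def truncate(w, step, period, aggressiveness, threshold):
    """Apply gradient truncation: runs only every `period` steps,
    shrinks weights toward zero by `aggressiveness`, then zeroes
    out weights smaller than `threshold` in magnitude."""
    if step % period != 0:
        return w
    shrunk = np.sign(w) * np.maximum(np.abs(w) - aggressiveness, 0.0)
    shrunk[np.abs(shrunk) < threshold] = 0.0
    return shrunk
```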
Whether to use the exponential learning rate. If not chosen, the learning rate decays like 1.0 / epoch.
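The two schedules can be sketched as follows; the exact 1/epoch formula below (initial_rate / (1 + epoch)) is an assumed concretization of "like 1.0 / epoch", and all names are hypothetical rather than Conjecture's.

```python
def learning_rate(initial_rate, examples_seen, examples_per_epoch,
                  use_exponential, exponential_base):
    """Exponential schedule: base^{# examples seen}.
    Default schedule: decay like 1 / epoch, with fractional epochs."""
    if use_exponential:
        return initial_rate * exponential_base ** examples_seen
    epoch = examples_seen / examples_per_epoch
    return initial_rate / (1.0 + epoch)
```

The exponential schedule decays per example and can shrink much faster; the 1/epoch schedule decays more slowly and is tempered further by a large examples_per_epoch.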