Ftrl

class Ftrl(learningRate: Float, l1RegularizationStrength: Float, l2RegularizationStrength: Float, learningRatePower: Float, l2ShrinkageRegularizationStrength: Float, initialAccumulatorValue: Float, clipGradient: ClipGradientAction) : Optimizer

Optimizer that implements the FTRL algorithm.

Updates the variable according to the following formula:

accum_new = accum + grad^2
linear <- linear + grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * variable
quadratic = 1 / (accum_new^(lr_power) * lr) + 2 * l2
variable <- (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0
accum <- accum_new

Here lr = learningRate, lr_power = learningRatePower, l1 = l1RegularizationStrength and l2 = l2RegularizationStrength.

This version supports both online L2 regularization (the L2 penalty described in the original FTRL paper) and shrinkage-type L2 regularization (the addition of an L2 penalty to the loss function).

When shrinkage is enabled, the gradient is replaced with gradient_with_shrinkage; see the documentation for the l2ShrinkageRegularizationStrength parameter for details.
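
For illustration, a minimal scalar sketch of a single FTRL step in Kotlin (FtrlState, its fields, and the parameter defaults are hypothetical names for this sketch, not the library's internals):

import kotlin.math.abs
import kotlin.math.pow
import kotlin.math.sign

// Tracks one weight together with its FTRL accumulators.
class FtrlState(var variable: Float, initialAccumulatorValue: Float = 0.0f) {
    var accum = initialAccumulatorValue // sum of squared gradients
    var linear = 0.0f                   // the z term of the FTRL paper

    fun step(
        grad: Float,
        lr: Float = 0.001f,
        lrPower: Float = -0.5f,
        l1: Float = 0.0f,
        l2: Float = 0.0f,
        l2Shrinkage: Float = 0.0f
    ) {
        // Shrinkage-type L2 modifies only the gradient used in the linear term.
        val gradWithShrinkage = grad + 2f * l2Shrinkage * variable
        val accumNew = accum + grad * grad
        val sigma = (accumNew.pow(-lrPower) - accum.pow(-lrPower)) / lr
        linear += gradWithShrinkage - sigma * variable
        val quadratic = 1.0f / (accumNew.pow(lrPower) * lr) + 2f * l2
        // Online L1 drives the weight to exactly zero when |linear| <= l1.
        variable = if (abs(linear) > l1) (sign(linear) * l1 - linear) / quadratic else 0.0f
        accum = accumNew
    }
}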

NOTE: This optimizer works on CPU only. It has a known bug on GPU: gradient values become NaN (see https://github.com/tensorflow/tensorflow/issues/26256).

It is recommended to leave the parameters of this optimizer at their default values.

Constructors

Ftrl
fun Ftrl(learningRate: Float = 0.001f, l1RegularizationStrength: Float = 0.0f, l2RegularizationStrength: Float = 0.0f, learningRatePower: Float = -0.5f, l2ShrinkageRegularizationStrength: Float = 0.0f, initialAccumulatorValue: Float = 0.0f, clipGradient: ClipGradientAction = NoClipGradient())
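
A typical usage might look as follows (the model below is illustrative only, and the import paths follow recent KotlinDL releases, so they may differ between versions):

import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.layer.core.Dense
import org.jetbrains.kotlinx.dl.api.core.layer.core.Input
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Ftrl

fun main() {
    val model = Sequential.of(
        Input(784),
        Dense(128),
        Dense(10)
    )

    // Default hyperparameters are recommended; only learningRate is set here.
    model.compile(
        optimizer = Ftrl(learningRate = 0.001f),
        loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
        metric = Metrics.ACCURACY
    )
}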

Properties

clipGradient
val clipGradient: ClipGradientAction

The gradient clipping strategy, as a subclass of ClipGradientAction.
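
For example, to clip each gradient value elementwise before the update is applied (this assumes ClipGradientByValue is available alongside NoClipGradient in the optimizer package):

import org.jetbrains.kotlinx.dl.api.core.optimizer.ClipGradientByValue
import org.jetbrains.kotlinx.dl.api.core.optimizer.Ftrl

// Restrict every gradient component to [-1.0, 1.0] before the FTRL update.
val optimizer = Ftrl(clipGradient = ClipGradientByValue(1.0f))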

optimizerName
open override val optimizerName: String

Returns the optimizer name.