Adam

class Adam(learningRate: Float, beta1: Float, beta2: Float, epsilon: Float, useNesterov: Boolean, clipGradient: ClipGradientAction) : Optimizer

Adam optimizer.

Updates the variable according to the following formula:

lr_t := learning_rate * sqrt(1 - beta_2^t) / (1 - beta_1^t)
m_t := beta_1 * m_{t-1} + (1 - beta_1) * g
v_t := beta_2 * v_{t-1} + (1 - beta_2) * g * g
variable := variable - lr_t * m_t / (sqrt(v_t) + epsilon)

It is recommended to leave the parameters of this optimizer at their default values.
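For reference, a single Adam step can be traced directly from the formula above. The helper below is only a sketch of the math for one scalar parameter (adamStep is a hypothetical name, not part of the library's API):

import kotlin.math.pow
import kotlin.math.sqrt

// One Adam update step for a single scalar parameter, following the formula above.
// Returns the updated variable together with the new moment estimates m_t and v_t.
fun adamStep(
    variable: Float, grad: Float,
    m: Float, v: Float, t: Int,
    learningRate: Float = 0.001f,
    beta1: Float = 0.9f,
    beta2: Float = 0.999f,
    epsilon: Float = 1e-7f
): Triple<Float, Float, Float> {
    val lrT = learningRate * sqrt(1 - beta2.pow(t)) / (1 - beta1.pow(t)) // bias-corrected learning rate
    val mT = beta1 * m + (1 - beta1) * grad                              // first moment estimate
    val vT = beta2 * v + (1 - beta2) * grad * grad                       // second moment estimate
    val newVariable = variable - lrT * mT / (sqrt(vT) + epsilon)         // parameter update
    return Triple(newVariable, mT, vT)
}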

Constructors

Adam
fun Adam(learningRate: Float = 0.001f, beta1: Float = 0.9f, beta2: Float = 0.999f, epsilon: Float = 1e-07f, useNesterov: Boolean = false, clipGradient: ClipGradientAction = NoClipGradient())
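A minimal construction sketch, assuming the KotlinDL package layout org.jetbrains.kotlinx.dl.api.core.optimizer and the ClipGradientByValue strategy from the same package:

import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam
import org.jetbrains.kotlinx.dl.api.core.optimizer.ClipGradientByValue

// Adam with all hyperparameters at their recommended default values.
val defaultAdam = Adam()

// Adam with a smaller learning rate and gradients clipped by value.
val tunedAdam = Adam(
    learningRate = 0.0005f,
    clipGradient = ClipGradientByValue(1.0f)
)

Either instance can then be passed as the optimizer argument when compiling a model.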

Properties

clipGradient
val clipGradient: ClipGradientAction

Strategy for gradient clipping, provided as a subclass of ClipGradientAction.

optimizerName
open override val optimizerName: String

Returns the optimizer name.