SGD
Stochastic gradient descent optimizer.
NOTE: This is not an equivalent of keras.sgd; it is a pure SGD that performs a simple 'variable' update by subtracting 'alpha' * 'delta' (the learning rate times the gradient) from the variable.
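The update rule itself is just an element-wise subtraction. The following is a minimal Kotlin sketch of that rule (not the library's implementation; the function and parameter names are illustrative only):

// Plain SGD step: each entry of the variable is updated by subtracting
// alpha (the learning rate) times delta (the gradient) from it.
fun sgdStep(variable: FloatArray, gradient: FloatArray, learningRate: Float = 0.2f) {
    for (i in variable.indices) {
        variable[i] -= learningRate * gradient[i]
    }
}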
Constructors
SGD
fun SGD(learningRate: Float = 0.2f, clipGradient: ClipGradientAction = NoClipGradient())
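A typical way to construct the optimizer (a usage sketch; the import path is assumed from KotlinDL's package layout and is not stated on this page):

import org.jetbrains.kotlinx.dl.api.core.optimizer.SGD

// SGD with the default learning rate of 0.2 and no gradient clipping.
val defaultSgd = SGD()

// SGD with an explicit learning rate.
val tunedSgd = SGD(learningRate = 0.05f)

The resulting instance is typically passed as the optimizer argument when compiling a model.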
Properties
clipGradient
Strategy for gradient clipping, expressed as a subclass of ClipGradientAction.
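For example, gradients can be clipped to a fixed range before the update is applied (a sketch; ClipGradientByValue is assumed here to be one of the available ClipGradientAction subclasses):

import org.jetbrains.kotlinx.dl.api.core.optimizer.ClipGradientByValue
import org.jetbrains.kotlinx.dl.api.core.optimizer.SGD

// Clip each gradient value to [-1.0, 1.0] before applying the SGD update.
val clippedSgd = SGD(learningRate = 0.1f, clipGradient = ClipGradientByValue(1.0f))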
optimizerName
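Name of the optimizer.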