ELU

class ELU(alpha: Float, name: String) : AbstractActivationLayer

Exponential Linear Unit (ELU) activation function.

It follows:

f(x) = x,                    if x > 0
f(x) = alpha * (exp(x) - 1), if x <= 0

In contrast to ReLU, ELU produces negative values, which push the mean of the activations closer to zero; this enables faster learning, as it brings the gradient closer to the natural gradient.
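
For reference, the piecewise definition above translates directly into plain Kotlin. This is a standalone sketch of the math only, independent of the layer API:

import kotlin.math.exp

// Plain-Kotlin transcription of the ELU definition above.
fun elu(x: Float, alpha: Float = 1.0f): Float =
    if (x > 0f) x else alpha * (exp(x) - 1f)

fun main() {
    println(elu(2.0f))   // 2.0: positive inputs pass through unchanged
    println(elu(-1.0f))  // ~ -0.632, i.e. alpha * (e^-1 - 1)
    println(elu(-10.0f)) // ~ -1.0: saturates towards -alpha
}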

Since

0.3

Constructors

ELU
fun ELU(alpha: Float = 1.0f, name: String = "")

Creates an ELU object.
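
A minimal construction sketch; the import path below is assumed from KotlinDL's package layout and may differ between versions:

import org.jetbrains.kotlinx.dl.api.core.layer.activation.ELU

// Default: alpha = 1.0f and an empty name.
val default = ELU()
// Custom saturation level and an explicit layer name.
val custom = ELU(alpha = 0.5f, name = "elu_custom")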

Functions

build
open override fun build(tf: Ops, kGraph: KGraph, inputShape: Shape)

Extend this function to define variables in the layer.

buildFromInboundLayers
fun buildFromInboundLayers(tf: Ops, kGraph: KGraph)

Extend this function to define variables in the layer.

computeOutputShape
open override fun computeOutputShape(inputShape: Shape): Shape

Computes the output shape, based on the inputShape and the layer type.
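
Since ELU is an element-wise activation, the output shape matches the input shape. A small sketch, assuming the org.tensorflow.Shape API that KotlinDL builds on:

import org.tensorflow.Shape

val elu = ELU()
// -1 marks an unknown (batch) dimension; activation layers preserve shape.
val outputShape = elu.computeOutputShape(Shape.make(-1, 64))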

computeOutputShapeFromInboundLayers
open fun computeOutputShapeFromInboundLayers(): TensorShape

Computes the output shape, based on the input shapes of the inbound layers.

forward
open override fun forward(tf: Ops, input: Operand<Float>): Operand<Float>

Applies the activation function to the input to produce the output.

open override fun forward(tf: Ops, input: Operand<Float>, isTraining: Operand<Boolean>, numberOfLosses: Operand<Float>?): Operand<Float>

Builds the main layer input transformation with tf; the behavior depends on the layer type.

open fun forward(tf: Ops, input: List<Operand<Float>>, isTraining: Operand<Boolean>, numberOfLosses: Operand<Float>?): Operand<Float>

Builds the main layer input transformation with tf; the behavior depends on the layer type.

invoke
operator fun invoke(vararg layers: Layer): Layer

An important part of the functional API: it takes layers as input and saves them to the inboundLayers of the given layer.
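
A sketch of the functional-API wiring this enables; Input and Dense, and all import paths, are assumed from KotlinDL and are illustrative only:

import org.jetbrains.kotlinx.dl.api.core.layer.core.Dense
import org.jetbrains.kotlinx.dl.api.core.layer.core.Input
import org.jetbrains.kotlinx.dl.api.core.layer.activation.ELU

// invoke(...) records `dense` as an inbound layer of the ELU node,
// so the layer graph can later be traversed to build a model.
val input = Input(784)
val dense = Dense(64)(input)
val activated = ELU(alpha = 1.0f)(dense)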

toString
open override fun toString(): String

Properties

alpha
val alpha: Float = 1.0f

Hyperparameter that controls the value to which an ELU saturates for negative net inputs. Should be > 0.
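
To see the saturation level directly, evaluate the negative branch at a strongly negative input; this standalone sketch shows that the limit is -alpha:

import kotlin.math.exp

fun main() {
    for (alpha in listOf(0.5f, 1.0f, 2.0f)) {
        // For strongly negative x, alpha * (exp(x) - 1) approaches -alpha.
        val saturated = alpha * (exp(-20.0f) - 1f)
        println("alpha=$alpha -> elu(-20.0) = $saturated")
    }
}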

hasActivation
open override val hasActivation: Boolean

Returns true if the layer has an internal activation function.

inboundLayers
var inboundLayers: MutableList<Layer>

Returns inbound layers.

isTrainable
var isTrainable: Boolean = true

True if the layer's weights can be changed during training. If false, the layer's weights are frozen and cannot be changed during training.

name
var name: String

outboundLayers
var outboundLayers: MutableList<Layer>

Returns outbound layers.

outputShape
lateinit var outputShape: TensorShape

Output data tensor shape.

paramCount
open override val paramCount: Int

Returns the number of parameters in this layer.

parentModel
var parentModel: TrainableModel? = null

Model where this layer is used.

weights
open override var weights: Map<String, Array<*>>

Layer's weights.