yadll.activations

Activation

Activation functions

get_activation(activator) Return the activation function described by an activator object
linear(x) Linear activation function
sigmoid(x) Sigmoid function
ultra_fast_sigmoid(x) Ultra-fast sigmoid function returning an approximation of the standard sigmoid
tanh(x) Tanh activation function
softmax(x) Softmax activation function
softplus(x) Softplus activation function \(\varphi(x) = \log(1 + \exp(x))\)
relu(x[, alpha]) Rectified linear unit activation function
elu(x[, alpha]) Compute the element-wise exponential linear activation function.

Detailed description

yadll.activations.get_activation(activator)[source]

Return the activation function described by an activator object

Parameters:

activator : activator

An activator is either an activation function, a tuple (activation function, dict of args), the name of an activation function as a str, or a tuple (name of function, dict of args).

Examples: activator = tanh, activator = (elu, {'alpha': 0.5}), activator = 'tanh' or activator = ('elu', {'alpha': 0.5})

Returns:

an activation function
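
The snippet below is a minimal sketch of the four accepted activator forms, assuming Theano and yadll are importable; whether the returned callable already binds the extra arguments or expects them at call time is an assumption based on the description above.

    import theano
    import theano.tensor as T
    from yadll.activations import get_activation, tanh, elu

    x = T.matrix('x')

    act1 = get_activation(tanh)                     # an activation function
    act2 = get_activation((elu, {'alpha': 0.5}))    # (function, dict of args)
    act3 = get_activation('tanh')                   # name of the function as a str
    act4 = get_activation(('elu', {'alpha': 0.5}))  # (name, dict of args)

    # Each activator resolves to a callable usable on a symbolic tensor.
    f = theano.function([x], act1(x))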

yadll.activations.linear(x)[source]

Linear activation function \(\varphi(x) = x\)

Parameters:

x : symbolic tensor

Tensor to compute the activation function for.

Returns:

symbolic tensor

The output of the identity function applied to the activation x.

yadll.activations.sigmoid(x)[source]

Sigmoid function \(\varphi(x) = \frac{1}{1 + \exp(-x)}\)

Parameters:

x : symbolic tensor

Tensor to compute the activation function for.

Returns:

symbolic tensor of value in [0, 1]

The output of the sigmoid function applied to the activation x.
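
A short usage sketch, assuming Theano and NumPy are available: compile the symbolic sigmoid expression into a callable and check that the outputs lie in [0, 1].

    import numpy as np
    import theano
    import theano.tensor as T
    from yadll.activations import sigmoid

    x = T.matrix('x')
    f = theano.function([x], sigmoid(x))

    data = np.asarray([[-4.0, 0.0, 4.0]], dtype=theano.config.floatX)
    out = f(data)                           # approximately [[0.018, 0.5, 0.982]]
    assert ((out >= 0) & (out <= 1)).all()  # values stay in [0, 1]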

yadll.activations.ultra_fast_sigmoid(x)[source]

Ultra-fast sigmoid function returning an approximation of the standard sigmoid \(\varphi(x) = \frac{1}{1 + \exp(-x)}\)

Parameters:

x : symbolic tensor

Tensor to compute the activation function for.

Returns:

symbolic tensor of value in [0, 1]

The output of the approximated sigmoid function applied to the activation x.

Notes

Use the Theano flag optimizer_including=local_ultra_fast_sigmoid to use ultra_fast_sigmoid systematically instead of sigmoid.
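
A hedged sketch of the note above: THEANO_FLAGS is one standard way to pass Theano flags, and it only takes effect if set before the first import of theano. With the flag active, the graph optimizer may substitute the ultra-fast approximation for every sigmoid in the compiled function.

    import os
    # Set the flag before importing theano; setdefault leaves any existing
    # THEANO_FLAGS value untouched.
    os.environ.setdefault('THEANO_FLAGS',
                          'optimizer_including=local_ultra_fast_sigmoid')

    import theano
    import theano.tensor as T
    from yadll.activations import sigmoid

    x = T.vector('x')
    # Written with the plain sigmoid; the optimizer substitutes the ultra-fast
    # version when the flag above is active.
    f = theano.function([x], sigmoid(x))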

yadll.activations.tanh(x)[source]

Tanh activation function \(\varphi(x) = \tanh(x)\)

Parameters:

x : symbolic tensor

Tensor to compute the activation function for.

Returns:

symbolic tensor of value in [-1, 1]

The output of the tanh function applied to the activation x.

yadll.activations.softmax(x)[source]

Softmax activation function \(\varphi(\mathbf{x})_j = \frac{\exp(x_j)}{\sum_{k=1}^{K} \exp(x_k)}\)

where \(K\) is the total number of neurons in the layer. This activation function gets applied row-wise.

Parameters:

x : symbolic tensor

Tensor to compute the activation function for.

Returns:

symbolic tensor where each row sums to 1 and each value is in [0, 1]

The output of the softmax function applied to the activation x.
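
A small sketch, assuming Theano and NumPy are available, illustrating the row-wise behaviour: every row of the output sums to 1.

    import numpy as np
    import theano
    import theano.tensor as T
    from yadll.activations import softmax

    x = T.matrix('x')
    f = theano.function([x], softmax(x))

    scores = np.asarray([[1.0, 2.0, 3.0],
                         [0.0, 0.0, 0.0]], dtype=theano.config.floatX)
    probs = f(scores)
    assert np.allclose(probs.sum(axis=1), 1.0)  # each row is a distribution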

yadll.activations.softplus(x)[source]

Softplus activation function \(\varphi(x) = \log(1 + \exp(x))\)

Parameters:

x : symbolic tensor

Tensor to compute the activation function for.

Returns:

symbolic tensor

The output of the softplus function applied to the activation x.
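
A quick numeric check, assuming Theano and NumPy are available, that the output matches log(1 + exp(x)) and acts as a smooth version of the rectifier.

    import numpy as np
    import theano
    import theano.tensor as T
    from yadll.activations import softplus

    x = T.vector('x')
    f = theano.function([x], softplus(x))

    v = np.asarray([-2.0, 0.0, 2.0], dtype=theano.config.floatX)
    out = f(v)
    # softplus(0) = log(2); large positive inputs approach the identity.
    assert np.allclose(out, np.log1p(np.exp(v)), atol=1e-4)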

yadll.activations.relu(x, alpha=0)[source]

Rectified linear unit activation function \(\varphi(x) = \max(x, \alpha x)\)

Parameters:

x : symbolic tensor

Tensor to compute the activation function for.

alpha : scalar or tensor, optional

Slope for negative input, usually between 0 and 1. The default value of 0 will lead to the standard rectifier, 1 will lead to a linear activation function, and any value in between will give a leaky rectifier. A shared variable (broadcastable against x) will result in a parameterized rectifier with learnable slope(s).

Returns:

symbolic tensor

Element-wise rectifier applied to the activation x.

Notes

This is numerically equivalent to T.switch(x > 0, x, alpha * x) (or T.maximum(x, alpha * x) for alpha < 1), but uses a faster formulation or an optimized Op, so its use is encouraged.
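
A sketch of the alpha parameter, assuming Theano and NumPy are available: alpha=0 gives the standard rectifier, alpha=0.1 a leaky rectifier, and a shared variable a parametrized rectifier whose slope can be learned.

    import numpy as np
    import theano
    import theano.tensor as T
    from yadll.activations import relu

    x = T.vector('x')
    standard = theano.function([x], relu(x))          # max(x, 0)
    leaky = theano.function([x], relu(x, alpha=0.1))  # max(x, 0.1 * x)

    v = np.asarray([-2.0, 3.0], dtype=theano.config.floatX)
    print(standard(v))  # approximately [ 0.  3.]
    print(leaky(v))     # approximately [-0.2  3.]

    # Parametrized rectifier: alpha as a learnable shared variable.
    alpha = theano.shared(np.float32(0.25), name='alpha')
    prelu = theano.function([x], relu(x, alpha=alpha))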

References

[R55] Xavier Glorot, Antoine Bordes and Yoshua Bengio (2011): Deep sparse rectifier neural networks. AISTATS. http://jmlr.org/proceedings/papers/v15/glorot11a/glorot11a.pdf

yadll.activations.elu(x, alpha=1)[source]

Compute the element-wise exponential linear activation function \(\varphi(x) = x\) if \(x > 0\), else \(\varphi(x) = \alpha (\exp(x) - 1)\).

Parameters:

x : symbolic tensor

Tensor to compute the activation function for.

alpha : scalar, optional

Scale for the negative part of the activation. Defaults to 1.

Returns:

symbolic tensor

Element-wise exponential linear activation function applied to x.
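
A usage sketch, assuming Theano and NumPy are available: the activation is the identity for positive inputs and alpha * (exp(x) - 1) for negative inputs, saturating towards -alpha.

    import numpy as np
    import theano
    import theano.tensor as T
    from yadll.activations import elu

    x = T.vector('x')
    f = theano.function([x], elu(x))             # default alpha = 1
    g = theano.function([x], elu(x, alpha=0.5))

    v = np.asarray([-10.0, -1.0, 0.0, 2.0], dtype=theano.config.floatX)
    print(f(v))  # approximately [-1.0, -0.632, 0.0, 2.0]
    print(g(v))  # approximately [-0.5, -0.316, 0.0, 2.0]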

References

[R77] Djork-Arne Clevert, Thomas Unterthiner, Sepp Hochreiter (2015): Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). http://arxiv.org/abs/1511.07289