Monday, December 11, 2017

Rectified Linear Unit Activation Function In Deep Learning


The Sigmoid Activation function produces an S-shaped curve, and its output values always lie between 0 and 1. The Sigmoid function maps large negative inputs to values close to 0 and large positive inputs to values close to 1.

But the Sigmoid Activation function has a major drawback: its gradient is close to 0 when the output is near 0 or 1, which means that during backpropagation the weight values barely change and the network stops learning. In other words, the Sigmoid Activation function suffers from the vanishing gradient problem when it saturates near 0 and 1.
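As a quick sketch of this behaviour, the Sigmoid function and its gradient can be written in NumPy as follows (the helper names sigmoid and sigmoid_derivative are just illustrative, not from any particular library):

import numpy as np

def sigmoid(x):
    # Squashes any real-valued input into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # Gradient of sigmoid: s * (1 - s), largest at x = 0
    s = sigmoid(x)
    return s * (1.0 - s)

# The gradient is at most 0.25 and shrinks towards 0 for large |x|,
# which is the vanishing gradient problem described above.
print(sigmoid_derivative(0.0))    # 0.25
print(sigmoid_derivative(10.0))   # roughly 0.000045
print(sigmoid_derivative(-10.0))  # roughly 0.000045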

Below is the image of the Sigmoid curve:



To get rid of the above issues, we can use the Rectified Linear Unit Activation Function, which is also known as RELU. The RELU function has a range between 0 and infinity. Hence the Sigmoid Activation function can be used to predict values between 0 and 1, whereas RELU can be used to model any positive real number. The best part of RELU is that as the input X increases, the output keeps increasing with it, so the gradient does not vanish for positive inputs the way it does with Sigmoid.

RELU can be defined with the simple mathematical notation below:
RELU(x) = MAX (0, x)
The function says that if the input value is 0 or less than 0, RELU returns 0; otherwise it returns the input value x.
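As a minimal sketch, RELU and its gradient could look like this in NumPy (the names relu and relu_derivative are chosen here only for illustration):

import numpy as np

def relu(x):
    # RELU(x) = MAX(0, x): negative inputs become 0, positive inputs pass through unchanged
    return np.maximum(0, x)

def relu_derivative(x):
    # Gradient is 0 for negative inputs and 1 for positive inputs,
    # so it does not shrink as x grows
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))             # [0.  0.  0.  0.5 2. ]
print(relu_derivative(x))  # [0. 0. 0. 1. 1.]

Notice that for every positive input the gradient stays at 1, which is why RELU avoids the saturation problem of Sigmoid on the positive side.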

Below is the image of the RELU curve:
