activations.py
import numpy as np
def ReLU(input):
    '''
    The simplest one, and still quite popular, even alongside its variations (LeakyReLU).
    Returns 0 for everything equal to or below 0; otherwise, returns the input unchanged.
    Its derivative is even simpler: 0 for everything equal to or below 0, and 1 otherwise.
    '''
    relu = np.maximum(input, 0)
    drelu = np.ones(input.shape) * (input > 0)  # elementwise derivative: 1 where input > 0, else 0
    return relu, drelu
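
# Illustrative sketch (an assumption about how this module would be consumed, not
# part of the original file): because every activation here returns a
# (value, derivative) pair, the chain rule through an activation is just an
# elementwise product with the gradient arriving from the next layer.
def backprop_through_activation(activation, pre_activation, upstream_grad):
    '''Hypothetical helper: apply the chain rule through any activation in this module.'''
    value, derivative = activation(pre_activation)
    return value, upstream_grad * derivative
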
def Sigmoid(input):
    '''
    This one is used especially to generate outputs for binary classification problems. It returns a value between 0 and 1, a probability,
    but each output probability is independent of the others.
    Because of that independence, I believe one could use Sigmoid in place of Softmax without much trouble: skip the one-hot encoding
    and treat each output as its own probability.
    '''
    sig = 1/(1+np.exp(-input))
    dsig = sig * (1 - sig)
    return sig, dsig
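
# Hedged sketch (an assumption, not original code): since Sigmoid scores each
# output independently, a multi-label prediction can simply threshold each
# probability on its own, with no one-hot encoding involved.
def sigmoid_multilabel_predict(logits, threshold=0.5):
    '''Hypothetical helper: independent per-class decisions from sigmoid outputs.'''
    probs, _ = Sigmoid(logits)
    return probs >= threshold
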
def Softmax(input):
    '''
    This one is used especially to generate outputs for multi-class classification. It produces an array whose elements sum to 1, so
    remember to use one-hot encoded targets. The probabilities are coupled: if one element gets a higher probability, the others get lower ones.
    Also appreciated in Reinforcement Learning.
    '''
    input = input - np.max(input)  # subtract the max for numerical stability before exponentiating
    soft = (np.exp(input)/np.sum(np.exp(input), axis=0))
    dsoft = soft * (1-soft)  # only the diagonal of the softmax Jacobian; see the full-Jacobian sketch below
    return soft, dsoft
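
# Hedged sketch (not in the original file): the exact derivative of softmax is the
# Jacobian matrix diag(s) - s s^T; the elementwise s * (1 - s) kept above is just
# its diagonal. In practice the full Jacobian is rarely materialised, because
# softmax combined with cross-entropy loss reduces the gradient to
# (softmax_output - one_hot_target).
def softmax_jacobian(probabilities):
    '''Full Jacobian of softmax for a 1-D probability vector.'''
    s = probabilities.reshape(-1, 1)
    return np.diagflat(s) - s @ s.T
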
def Tanh(input):
    '''
    Classic and appreciated in GANs.
    '''
    tanh = np.tanh(input)
    dtanh = 1-(tanh**2)
    return tanh, dtanh
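
# Minimal self-check (illustrative, assumed usage, not part of the original file):
# compare each analytic derivative against a central finite difference, and
# contrast Sigmoid's independent probabilities with Softmax's normalised ones.
if __name__ == "__main__":
    x = np.array([-2.0, -0.5, 0.3, 0.7, 3.0])
    eps = 1e-5
    for fn in (ReLU, Sigmoid, Tanh):
        _, analytic = fn(x)
        numeric = (fn(x + eps)[0] - fn(x - eps)[0]) / (2 * eps)
        print(fn.__name__, "derivative matches finite difference:",
              np.allclose(analytic, numeric, atol=1e-4))
    sig_probs, _ = Sigmoid(x)
    soft_probs, _ = Softmax(x)
    print("Sigmoid sum:", sig_probs.sum())   # generally not 1
    print("Softmax sum:", soft_probs.sum())  # 1 (up to floating point)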