Softmax
[Interactive demo: softmax outputs P(y=1), P(y=2), P(y=3) for the input values -0.5, 0.5, 0.]
\begin{aligned} \sigma(\mathbf x)_i &= \frac{e^{x_i}}{\sum_{j=1}^K e^{x_j}} \quad \text{for } i = 1, \ldots, K, \\ \mathbf x &= (x_1, \ldots, x_K) \in \mathbb R^K, \\ \sigma(\mathbf x) &= \begin{bmatrix} \sigma(\mathbf x)_1 \\ \vdots \\ \sigma(\mathbf x)_K \end{bmatrix} \in (0, 1)^K. \end{aligned}
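Softmax maps a vector of K real values to a probability distribution: every output lies in (0, 1) and the outputs sum to 1. The sketch below is a minimal NumPy implementation of the formula above (not code from this page); subtracting the maximum before exponentiating is a standard numerical-stability step that cancels in the ratio, and the example input (-0.5, 0.5, 0) reuses the values from the demo above.

```python
import numpy as np

def softmax(x):
    """Softmax over a 1-D array: sigma(x)_i = exp(x_i) / sum_j exp(x_j)."""
    x = np.asarray(x, dtype=float)
    # Subtracting max(x) leaves the result unchanged but prevents overflow.
    z = np.exp(x - x.max())
    return z / z.sum()

# Three-class example matching the demo inputs above.
probs = softmax([-0.5, 0.5, 0.0])
print(probs)        # approximately [0.186, 0.506, 0.307]
print(probs.sum())  # 1.0
```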