The softmax primitive performs softmax along a particular axis on data with arbitrary dimensions. All other axes are treated as independent (batch).
In general form, the operation is defined by the following formulas:
\[ dst(\overline{ou}, c, \overline{in}) = \frac {e^{src(\overline{ou}, c, \overline{in}) - \nu(\overline{ou}, \overline{in})}} { \sum\limits_{ic} e^{src(\overline{ou}, ic, \overline{in}) - \nu(\overline{ou}, \overline{in})} }, \]
where \(\nu\) is a shift introduced for numerical accuracy (it keeps the exponentials from overflowing) and is defined as:
\[ \nu(\overline{ou}, \overline{in}) = \max\limits_{ic} src(\overline{ou}, ic, \overline{in}) \]
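For concreteness, below is a minimal C++ reference sketch of this computation (not oneDNN code; the function name, signature, and dense row-major layout are assumptions for illustration). The outer/channels/inner loop structure mirrors \(\overline{ou}\), \(c\), and \(\overline{in}\) in the formulas, and \(\nu\) appears as the per-slice maximum subtracted before exponentiation.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Reference softmax over one axis of a dense, row-major tensor.
// `outer` is the product of the dimensions before the axis, `channels` is
// the size of the softmax axis, and `inner` is the product of the
// dimensions after it -- the ou-bar, c, and in-bar of the formulas above.
void softmax_ref(const float *src, float *dst, std::size_t outer,
                 std::size_t channels, std::size_t inner) {
    for (std::size_t ou = 0; ou < outer; ++ou)
        for (std::size_t in = 0; in < inner; ++in) {
            const std::size_t base = ou * channels * inner + in;
            // nu: the per-(ou, in) maximum, subtracted for numerical safety.
            float nu = src[base];
            for (std::size_t ic = 1; ic < channels; ++ic)
                nu = std::max(nu, src[base + ic * inner]);
            // Denominator: sum of shifted exponentials along the axis.
            float denom = 0.f;
            for (std::size_t ic = 0; ic < channels; ++ic)
                denom += std::exp(src[base + ic * inner] - nu);
            for (std::size_t c = 0; c < channels; ++c)
                dst[base + c * inner]
                        = std::exp(src[base + c * inner] - nu) / denom;
        }
}
```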
There is no difference between the dnnl_forward_training and dnnl_forward_inference propagation kinds.
The backward propagation computes \(diff\_src(\overline{ou}, c, \overline{in})\) based on \(diff\_dst(\overline{ou}, c, \overline{in})\) and \(dst(\overline{ou}, c, \overline{in})\).
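The text states the dependencies but not the expression itself; for completeness, applying the chain rule to the forward formula yields the standard softmax gradient (a derivation, not a quote from the library reference):

\[ diff\_src(\overline{ou}, c, \overline{in}) = dst(\overline{ou}, c, \overline{in}) \cdot \left( diff\_dst(\overline{ou}, c, \overline{in}) - \sum\limits_{ic} diff\_dst(\overline{ou}, ic, \overline{in}) \cdot dst(\overline{ou}, ic, \overline{in}) \right) \]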
src can be used as both input and output for forward propagation, and diff_dst can be used as both input and output for backward propagation. In case of in-place operation, the original data will be overwritten.

The softmax primitive doesn't support any post-ops or attributes.
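To illustrate the in-place behavior described above, the hypothetical softmax_ref sketch from earlier can be passed the same buffer for source and destination, because each channel slice is fully read (for \(\nu\) and the denominator) before it is overwritten:

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> data = {1.f, 2.f, 3.f, 4.f, 5.f, 6.f};
    // View data as a 2x3 tensor and apply softmax over the last axis,
    // in place: outer = 2, channels = 3, inner = 1.
    softmax_ref(data.data(), data.data(), 2, 3, 1);
    for (float v : data) std::printf("%.4f ", v); // each row now sums to 1
    std::printf("\n");
}
```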
The softmax primitive supports the following combinations of data types:
Propagation | Source / Destination
---|---
forward / backward | f32
forward | f16
The softmax primitive works with arbitrary data tensors. There is no special meaning associated with any logical dimensions. However, the softmax axis is typically referred to as channels (hence the use of \(c\) in the formulas above).
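As a concrete reading of this: for a \(2 \times 3 \times 4 \times 5\) tensor with the softmax axis set to 1, the formulas treat the data as \(2 \cdot 20 = 40\) independent vectors of length 3. A small helper (hypothetical, matching the softmax_ref sketch above) shows how any dense shape collapses onto the outer/channels/inner triple:

```cpp
#include <cstddef>
#include <vector>

// Collapse a dense, row-major shape around `axis` into the
// (outer, channels, inner) triple used by softmax_ref above.
struct AxisSplit { std::size_t outer, channels, inner; };

AxisSplit split_at_axis(const std::vector<std::size_t> &dims,
                        std::size_t axis) {
    AxisSplit s{1, dims[axis], 1};
    for (std::size_t i = 0; i < axis; ++i) s.outer *= dims[i];
    for (std::size_t i = axis + 1; i < dims.size(); ++i) s.inner *= dims[i];
    return s;
}

// split_at_axis({2, 3, 4, 5}, 1) yields {outer = 2, channels = 3, inner = 20}.
```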