tfsnippet.mathops

Package for neural network math operations.

This package contains advanced math operations for training and evaluating neural networks. Most of the operations accept an ops module as their first argument.

You may pass tfsnippet.mathops.npyops or tfsnippet.mathops.tfops as the ops argument to obtain the NumPy or TensorFlow version of an operation.
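
For example, the same softmax operation (documented below) can be evaluated with either backend. This is only a usage sketch: it assumes the TensorFlow 1.x graph-and-session API and that the NumPy backend returns plain ndarrays.

    import numpy as np
    import tensorflow as tf
    from tfsnippet.mathops import npyops, tfops, softmax

    logits = np.array([[1.0, 2.0, 3.0]])

    # NumPy backend: computes the result immediately.
    probs_np = softmax(npyops, logits)

    # TensorFlow backend: builds a graph node, evaluated in a session.
    probs_op = softmax(tfops, tf.constant(logits))
    with tf.Session() as sess:
        probs_tf = sess.run(probs_op)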

tfsnippet.mathops.inception_score(ops, logits=None, probs=None, reduce_dims=None, clip_eps=1e-07)

Compute the Inception score (“Improved Techniques for Training GANs”, Salimans, T. et al., 2016) from given softmax logits or probs.

\[\begin{split}\begin{align*} \text{Inception score} &= \exp\left\{ \operatorname{\mathbb{E}}_{x}\left[ \operatorname{D}_{KL}\left(p(y|x) \,\big\|\, p(y)\right)\right] \right\} \\ p(y) &= \operatorname{\mathbb{E}}_{x}\left[p(y|x)\right] \end{align*}\end{split}\]
Parameters:
  • ops (tfops or npyops) – The math operations module.
  • logits – The softmax logits for \(p(y|x)\). The last dimension will be treated as the softmax dimension.
  • probs – The softmax probs for \(p(y|x)\). The last dimension will be treated as the softmax dimension. Ignored if logits is specified.
  • reduce_dims – If specified, only these dimensions will be treated as the data dimensions and reduced when computing the Inception score. If not specified, all dimensions except the softmax dimension will be treated as the data dimensions. (default None)
  • clip_eps – The epsilon value for clipping probs, in order to avoid numerical issues. (default 1e-7)
Returns:

The computed Inception score, with the data dimensions reduced.
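
A rough usage sketch with the NumPy backend (the probability values below are synthetic, purely for illustration):

    import numpy as np
    from tfsnippet.mathops import npyops, inception_score

    # Synthetic p(y|x) for 5 samples over 10 classes; rows sum to one.
    probs = np.random.dirichlet(np.ones(10), size=5)

    # The last dimension is the softmax dimension; the remaining (sample)
    # dimension is reduced, yielding a scalar score.
    score = inception_score(npyops, probs=probs)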

tfsnippet.mathops.softmax_logits_kld(ops, p_logits, q_logits, keepdims=False)

Compute the KL-divergence between two softmax categorical distributions via logits. The last dimension of p and q is treated as the softmax dimension and is reduced when computing the KL-divergence.

\[\operatorname{D}_{KL}(p(y)\|q(y)) = \sum_y p(y) \left(\log p(y) - \log q(y)\right)\]
Parameters:
  • ops (npyops or tfops) – The math operations module.
  • p_logits – Logits of softmax categorical \(p(y)\).
  • q_logits – Logits of softmax categorical \(q(y)\).
  • keepdims (bool) – Whether or not to keep the reduced dimension. (default False)
Returns:

The computed KL-divergence between the two softmax categorical distributions.
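
A minimal usage sketch with the NumPy backend:

    import numpy as np
    from tfsnippet.mathops import npyops, softmax_logits_kld

    p_logits = np.array([[1.0, 2.0, 3.0]])
    q_logits = np.array([[3.0, 2.0, 1.0]])

    # KL(p || q), reduced over the last (softmax) dimension.
    kld = softmax_logits_kld(npyops, p_logits, q_logits)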

tfsnippet.mathops.softmax_probs_kld(ops, p_probs, q_probs, keepdims=False, clip_eps=1e-07)

Compute the KL-divergence between two softmax categorical distributions via probs. The last dimension of p and q is treated as the softmax dimension and is reduced when computing the KL-divergence.

\[\operatorname{D}_{KL}(p(y)\|q(y)) = \sum_y p(y) \left(\log p(y) - \log q(y)\right)\]
Parameters:
  • ops (npyops or tfops) – The math operations module.
  • p_probs – Probabilities of softmax categorical \(p(y)\).
  • q_probs – Probabilities of softmax categorical \(q(y)\).
  • keepdims (bool) – Whether or not to keep the reduced dimension. (default False)
  • clip_eps – The epsilon value for clipping p_probs and q_probs, in order to avoid numerical issues. (default 1e-7)
Returns:

The computed KL-divergence between the two softmax categorical distributions.
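
A minimal usage sketch with the NumPy backend, feeding probabilities rather than logits:

    import numpy as np
    from tfsnippet.mathops import npyops, softmax_probs_kld

    p_probs = np.array([[0.2, 0.3, 0.5]])
    q_probs = np.array([[0.5, 0.3, 0.2]])

    # The probabilities are clipped with clip_eps before logs are taken.
    kld = softmax_probs_kld(npyops, p_probs, q_probs, keepdims=True)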

tfsnippet.mathops.log_sum_exp(ops, x, axis=None, keepdims=False)

Compute \(\log \sum_{k=1}^K \exp(x_k)\).

\[\begin{split}\begin{align*} \log \sum_{k=1}^K \exp(x_k) &= \log \left[\exp(x_{max}) \sum_{k=1}^K \exp(x_k - x_{max})\right] \\ &= x_{max} + \log \sum_{k=1}^K \exp(x_k - x_{max}) \\ x_{max} &= \max x_k \end{align*}\end{split}\]
Parameters:
  • ops (npyops or tfops) – The math operations module.
  • x – The input x.
  • axis – The dimension(s) to sum over. If None (default), all dimensions will be summed.
  • keepdims (bool) – Whether or not to keep the summed dimensions. (default False)
Returns:

The computed value.
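
A minimal sketch with the NumPy backend, showing why the max-subtraction trick above matters (the naive formula is included only for comparison):

    import numpy as np
    from tfsnippet.mathops import npyops, log_sum_exp

    x = np.array([[1000.0, 1000.5, 999.0]])

    # Stable: the maximum is factored out before exponentiating.
    stable = log_sum_exp(npyops, x, axis=-1)

    # Naive: np.exp(1000.0) overflows to inf, so the result is inf.
    naive = np.log(np.sum(np.exp(x), axis=-1))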

tfsnippet.mathops.log_mean_exp(ops, x, axis=None, keepdims=False)

Compute \(\log \frac{1}{K} \sum_{k=1}^K \exp(x_k)\).

\[\begin{split}\begin{align*} \log \frac{1}{K} \sum_{k=1}^K \exp(x_k) &= \log \left[\exp(x_{max}) \frac{1}{K} \sum_{k=1}^K \exp(x_k - x_{max})\right] \\ &= x_{max} + \log \frac{1}{K} \sum_{k=1}^K \exp(x_k - x_{max}) \\ x_{max} &= \max x_k \end{align*}\end{split}\]
Parameters:
  • ops (npyops or tfops) – The math operations module.
  • x – The input x.
  • axis – The dimension(s) to average over. If None (default), all dimensions will be averaged.
  • keepdims (bool) – Whether or not to keep the averaged dimensions. (default False)
Returns:

The computed value.
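
A minimal usage sketch with the NumPy backend (the log-values are arbitrary):

    import numpy as np
    from tfsnippet.mathops import npyops, log_mean_exp

    log_values = np.array([[-100.0, -101.0, -102.0],
                           [-50.0, -50.5, -49.5]])

    # Equivalent to log_sum_exp(x) - log(K) along the reduced axis,
    # computed stably by subtracting the per-row maximum first.
    avg = log_mean_exp(npyops, log_values, axis=-1)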

tfsnippet.mathops.softmax(ops, logits)

Compute softmax from logits \(\alpha_k\). Note the corresponding multinomial distribution is defined as \(\pi_k = \exp(\alpha_k) / \sum_{i=1}^K \exp(\alpha_i)\).

\[\mathop{\text{softmax}}(\alpha)_k = \frac{\exp\left(\alpha_k\right)} {\sum_{i=1}^K \exp\left(\alpha_i\right)} = \frac{\exp\left(\alpha_k - \alpha_{max}\right)} {\sum_{i=1}^K \exp\left(\alpha_i - \alpha_{max}\right)}\]
Parameters:
  • ops (npyops or tfops) – The math operations module.
  • logits – The un-normalized logits \(\alpha_k\) of \(p(x)\). The last dimension will be treated as the softmax dimension.
Returns:

The softmax outputs.
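
A minimal usage sketch with the NumPy backend:

    import numpy as np
    from tfsnippet.mathops import npyops, softmax

    logits = np.array([[1.0, 2.0, 3.0],
                       [0.0, 0.0, 0.0]])

    # Normalizes along the last dimension; each row sums to one,
    # and the all-zero row maps to the uniform distribution.
    probs = softmax(npyops, logits)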

tfsnippet.mathops.log_softmax(ops, logits)

Compute log-softmax from logits \(\alpha_k\). Note the corresponding multinomial distribution is defined as \(\pi_k = \exp(\alpha_k) / \sum_{i=1}^K \exp(\alpha_i)\).

\[\log\mathop{\text{softmax}}(\alpha)_k = \log \frac{\exp\left(\alpha_k\right)}{\sum_{i=1}^K \exp\left(\alpha_i\right)} = \alpha_k - \log \sum_{i=1}^K \exp\left(\alpha_i\right)\]
Parameters:
  • ops (npyops or tfops) – The math operations module.
  • logits – The un-normalized logits \(\alpha_k\) of \(p(x)\). The last dimension will be treated as the softmax dimension.
Returns:

The log-softmax outputs.
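
A minimal usage sketch with the NumPy backend, assuming it returns plain ndarrays so the two results can be compared directly:

    import numpy as np
    from tfsnippet.mathops import npyops, softmax, log_softmax

    logits = np.array([[1.0, 2.0, 3.0]])

    # Computed directly from the logits, so it remains stable even when
    # some softmax probabilities underflow toward zero.
    log_probs = log_softmax(npyops, logits)

    # Agrees with taking the log of the softmax output.
    assert np.allclose(log_probs, np.log(softmax(npyops, logits)))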