DiscretizedLogistic

class tfsnippet.DiscretizedLogistic(mean, log_scale, bin_size, min_val=None, max_val=None, dtype=tf.float32, biased_edges=True, discretize_given=True, discretize_sample=True, epsilon=1e-07)

Bases: tfsnippet.distributions.base.Distribution

Discretized logistic distribution (Kingma et al., 2016).

For a discrete value x whose possible values are equally spaced:

p(x) = sigmoid((x - mean + bin_size * 0.5) / scale) -
    sigmoid((x - mean - bin_size * 0.5) / scale)

where scale = exp(log_scale), and bin_size is the interval between two adjacent possible values of x.

min_val and max_val specify the minimum and maximum possible values of x. They constrain the generated samples, and if biased_edges is True, then:

p(x_min) = sigmoid((x_min - mean + bin_size * 0.5) / scale)
p(x_max) = 1 - sigmoid((x_max - mean - bin_size * 0.5) / scale)
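
For example, a minimal construction sketch (assuming a TensorFlow 1.x graph workflow; the shapes, the 8-bit pixel setup, and the zero-valued parameters are illustrative only, since mean and log_scale would normally come from a model):

    import tensorflow as tf
    from tfsnippet import DiscretizedLogistic

    # Illustrative parameters: batch_shape == [32, 784], 8-bit pixels, bin_size 1.
    mean = tf.zeros([32, 784])
    log_scale = tf.zeros([32, 784])   # scale == exp(log_scale)
    d = DiscretizedLogistic(
        mean=mean, log_scale=log_scale, bin_size=1.,
        min_val=0., max_val=255., biased_edges=True)

    x = tf.placeholder(tf.float32, [32, 784])   # observed pixel values
    log_p = d.log_prob(x, group_ndims=1)        # shape [32]: log-density per image
    y = d.sample()                              # shape [32, 784], discretized samples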

Attributes Summary

base_distribution Get the base distribution of this distribution.
batch_shape Get the batch shape of the samples.
biased_edges Whether or not to use biased density for edge values.
bin_size Get the bin size.
discretize_given Whether or not to discretize given in log_prob() and prob().
discretize_sample Whether or not to discretize the generated samples in sample().
dtype Get the data type of samples.
is_continuous Whether or not the distribution is continuous.
is_reparameterized Whether or not the distribution is re-parameterized.
log_scale Get the log-scale.
max_val Get the maximum value.
mean Get the mean.
min_val Get the minimum value.
value_ndims Get the number of value dimensions in samples.

Methods Summary

batch_ndims_to_value(ndims) Convert the last few batch_ndims into value_ndims.
expand_value_ndims(ndims) Convert the last few batch_ndims into value_ndims.
get_batch_shape() Get the static batch shape of the samples.
log_prob(given[, group_ndims, name]) Compute the log-densities of x against the distribution.
prob(given[, group_ndims, name]) Compute the densities of x against the distribution.
sample([n_samples, group_ndims, …]) Generate samples from the distribution.

Attributes Documentation

base_distribution

Get the base distribution of this distribution.

For distributions other than tfsnippet.BatchToValueDistribution, this property should return the distribution itself.

Returns:The base distribution.
Return type:Distribution
batch_shape

Get the batch shape of the samples.

Returns:The batch shape as tensor.
Return type:tf.Tensor
biased_edges

Whether or not to use biased density for edge values.

bin_size

Get the bin size.

discretize_given

Whether or not to discretize given in log_prob() and prob().

discretize_sample

Whether or not to discretize the generated samples in sample().

dtype

Get the data type of samples.

Returns:Data type of the samples.
Return type:tf.DType
is_continuous

Whether or not the distribution is continuous.

Returns:A boolean indicating whether it is continuous.
Return type:bool
is_reparameterized

Whether or not the distribution is re-parameterized.

The re-parameterization trick was proposed in “Auto-Encoding Variational Bayes” (Kingma and Welling), allowing gradients to be propagated back through the samples. Note that re-parameterization can be disabled by specifying is_reparameterized = False as an argument of sample().

Returns:A boolean indicating whether it is re-parameterized.
Return type:bool
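
Continuing the sketch from the class description (names are illustrative), the per-call switch mentioned above can be used defensively:

    # Gradients flow through samples only if the distribution reports
    # is_reparameterized == True; the flag can also be turned off per call.
    if d.is_reparameterized:
        y_no_grad = d.sample(is_reparameterized=False)   # cut the gradient path explicitly
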
log_scale

Get the log-scale.

max_val

Get the maximum value.

mean

Get the mean.

min_val

Get the minimum value.

value_ndims

Get the number of value dimensions in samples.

Returns:The number of value dimensions in samples.
Return type:int

Methods Documentation

batch_ndims_to_value(ndims)

Convert the last few batch_ndims into value_ndims.

For a particular Distribution, the dimensions of the samples and of their log-probability should satisfy:

samples.ndims - distribution.value_ndims == log_prob.ndims

We denote samples.ndims - distribution.value_ndims by batch_ndims. This method thus wraps the current distribution and converts the last few batch_ndims into value_ndims.

Parameters:ndims (int) – The last few batch_ndims to be converted into value_ndims. Must be non-negative.
Returns:The converted distribution.
Return type:Distribution
expand_value_ndims(ndims)

Convert the last few batch_ndims into value_ndims.

For a particular Distribution, the dimensions of the samples and of their log-probability should satisfy:

samples.ndims - distribution.value_ndims == log_prob.ndims

We denote samples.ndims - distribution.value_ndims by batch_ndims. This method thus wraps the current distribution and converts the last few batch_ndims into value_ndims.

Parameters:ndims (int) – The last few batch_ndims to be converted into value_ndims. Must be non-negative.
Returns:The converted distribution.
Return type:Distribution
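
Continuing the sketch from the class description, and assuming the distribution starts with value_ndims == 0 (one scalar event per tensor element), converting one batch dimension makes log_prob() sum over it:

    d2 = d.expand_value_ndims(1)   # value_ndims: 0 -> 1; batch_shape becomes [32]
    log_p_img = d2.log_prob(x)     # shape [32]; should match d.log_prob(x, group_ndims=1)
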
get_batch_shape()

Get the static batch shape of the samples.

Returns:The batch shape.
Return type:tf.TensorShape
log_prob(given, group_ndims=0, name=None)

Compute the log-densities of x against the distribution.

Parameters:
  • given (Tensor) – The samples to be tested.
  • group_ndims (int or tf.Tensor) – If specified, the last group_ndims dimensions of the log-densities will be summed up. (default 0)
  • name – TensorFlow name scope of the graph nodes. (default “log_prob”).
Returns:The log-densities of given.
Return type:tf.Tensor
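
As a shape illustration with the sketch from the class description (batch_shape == [32, 784], scalar events assumed):

    log_p_elem = d.log_prob(x)                  # shape [32, 784]: one log-density per element
    log_p_img  = d.log_prob(x, group_ndims=1)   # shape [32]: last dimension summed out
    log_p_all  = d.log_prob(x, group_ndims=2)   # shape []: both dimensions summed out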

prob(given, group_ndims=0, name=None)

Compute the densities of x against the distribution.

Parameters:
  • given (Tensor) – The samples to be tested.
  • group_ndims (int or tf.Tensor) – If specified, the last group_ndims dimensions of the log-densities will be summed up. (default 0)
  • name – TensorFlow name scope of the graph nodes. (default “prob”).
Returns:The densities of given.
Return type:tf.Tensor

sample(n_samples=None, group_ndims=0, is_reparameterized=None, compute_density=None, name=None)

Generate samples from the distribution.

Parameters:
  • n_samples (int or tf.Tensor or None) – A 0-D int32 Tensor or None. How many independent samples to draw from the distribution. The samples will have shape [n_samples] + batch_shape + value_shape, or batch_shape + value_shape if n_samples is None.
  • group_ndims (int or tf.Tensor) – Number of dimensions at the end of [n_samples] + batch_shape to be considered as the events group. This will affect the behavior of log_prob() and prob(). (default 0)
  • is_reparameterized (bool) – If True, raises RuntimeError if the distribution is not re-parameterized. If False, disable re-parameterization even if the distribution is re-parameterized. (default None, following the setting of distribution)
  • compute_density (bool) – Whether or not to immediately compute the log-density for the samples. (default None, determined by the distribution class itself)
  • name – TensorFlow name scope of the graph nodes. (default “sample”).
Returns:The samples as StochasticTensor.
Return type:tfsnippet.stochastic.StochasticTensor
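
A shape-oriented sketch, continuing the example from the class description (batch_shape == [32, 784], empty value_shape assumed; the log_prob() accessor on the returned StochasticTensor is also an assumption here):

    y0 = d.sample()               # shape [32, 784]: no sampling axis when n_samples is None
    y1 = d.sample(n_samples=16)   # shape [16, 32, 784]
    y2 = d.sample(n_samples=16, group_ndims=1, compute_density=True)
    # With compute_density=True the log-density is computed immediately;
    # y2.log_prob() should then have shape [16, 32] (last dimension summed out).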