act_norm

tfsnippet.layers.act_norm(*args, **kwargs)

ActNorm proposed by (Kingma & Dhariwal, 2018).
Examples:

    import functools
    import tensorflow as tf
    import tfsnippet as spt

    # apply act_norm on a dense layer
    x = spt.layers.dense(x, units,
                         activation_fn=tf.nn.relu,
                         normalizer_fn=functools.partial(
                             spt.layers.act_norm, initializing=initializing))

    # apply act_norm on a conv2d layer
    x = spt.layers.conv2d(x, out_channels, (3, 3),
                          channels_last=channels_last,
                          activation_fn=tf.nn.relu,
                          normalizer_fn=functools.partial(
                              spt.layers.act_norm,
                              axis=-1 if channels_last else -3,
                              value_ndims=3,
                              initializing=initializing))
Parameters:
- input (Tensor) – The input tensor.
- axis (int or Iterable[int]) – The axis (or axes) along which to apply ActNorm. Dimensions not in axis will be averaged out when computing the mean of activations. Defaults to -1, the last dimension. All items of axis must be covered by value_ndims.
- initializing (bool) – Whether or not to use the input x to initialize the layer parameters (default True).
- scale_type – One of {"exp", "linear"}. If "exp", y = (x + bias) * tf.exp(log_scale). If "linear", y = (x + bias) * scale. Default is "exp". (See the illustrative sketch at the end of this section.)
- bias_regularizer – The regularizer for bias.
- bias_constraint – The constraint for bias.
- log_scale_regularizer – The regularizer for log_scale.
- log_scale_constraint – The constraint for log_scale.
- scale_regularizer – The regularizer for scale.
- scale_constraint – The constraint for scale.
- trainable (bool) – Whether or not the variables are trainable.
- epsilon – Small float to avoid dividing by zero or taking logarithm of zero.
- name (str) – Default name of the variable scope. Will be uniquified. If not specified, one will be generated according to the class name.
- scope (str) – The name of the variable scope.
Returns: The output after the ActNorm has been applied.
Return type: tf.Tensor
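
Below is a minimal NumPy sketch (not tfsnippet's implementation) of the affine transform documented above. It illustrates the data-dependent initialization implied by initializing=True, where bias and scale are chosen from a mini-batch so that the initial outputs have roughly zero mean and unit variance along the reduced dimensions (Kingma & Dhariwal, 2018), together with the two scale_type formulas. The array shapes, the 1e-6 stabilizer, and all variable names are assumptions made purely for illustration.

    import numpy as np

    # Conceptual sketch of the documented transform; not the library implementation.
    rng = np.random.RandomState(0)
    x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))   # [batch, features], axis=-1

    # Data-dependent initialization (the `initializing=True` pass): pick bias and
    # scale so that the initial outputs have ~zero mean and ~unit variance.
    bias = -x.mean(axis=0)
    scale = 1.0 / (x.std(axis=0) + 1e-6)               # small epsilon avoids division by zero
    log_scale = np.log(scale)                          # stored when scale_type="exp"

    # Forward transform, matching the documented formulas:
    y_exp = (x + bias) * np.exp(log_scale)             # scale_type="exp"
    y_linear = (x + bias) * scale                      # scale_type="linear"

    print(y_exp.mean(axis=0).round(5))                 # approximately all zeros
    print(y_exp.std(axis=0).round(5))                  # approximately all ones

In the actual layer, bias and log_scale (or scale) are trainable variables, and passes with initializing=False skip the data-dependent initialization.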