Generates starting points for the Halton sequence procedure.
def _get_indices(num_results, sequence_indices, dtype, name=None): """Generates starting points for the Halton sequence procedure. The k'th element of the sequence is generated starting from a positive integer which must be distinct for each `k`. It is conventional to choose the starting point as `k` itself (or `k+1` if k is zero based). This function generates the starting integers for the required elements and reshapes the result for later use. Args: num_results: Positive scalar `Tensor` of dtype int32. The number of samples to generate. If this parameter is supplied, then `sequence_indices` should be None. sequence_indices: `Tensor` of dtype int32 and rank 1. The entries index into the Halton sequence starting with 0 and hence, must be whole numbers. For example, sequence_indices=[0, 5, 6] will produce the first, sixth and seventh elements of the sequence. If this parameter is not None then `n` must be None. dtype: The dtype of the sample. One of `float32` or `float64`. Default is `float32`. name: Python `str` name which describes ops created by this function. Returns: indices: `Tensor` of dtype `dtype` and shape = `[n, 1, 1]`. """ with tf.compat.v1.name_scope(name, '_get_indices', [num_results, sequence_indices]): if sequence_indices is None: num_results = tf.cast(num_results, dtype=dtype) sequence_indices = tf.range(num_results, dtype=dtype) else: sequence_indices = tf.cast(sequence_indices, dtype) # Shift the indices so they are 1 based. indices = sequence_indices + 1 # Reshape to make space for the event dimension and the place value # coefficients. return tf.reshape(indices, [-1, 1, 1])
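The reshape above is easier to see outside of TF. A minimal NumPy sketch (a hypothetical helper, not the library function) of the same convention: the k-th sequence element starts from the 1-based integer k + 1, reshaped to [n, 1, 1] so it can later broadcast against the per-dimension place-value coefficients.

```python
import numpy as np

def get_indices_sketch(num_results, dtype=np.float32):
    # Shift to 1-based starting integers, then add two trailing axes.
    indices = np.arange(num_results, dtype=dtype) + 1.
    return indices.reshape([-1, 1, 1])

print(get_indices_sketch(4).shape)    # (4, 1, 1)
print(get_indices_sketch(4).ravel())  # [1. 2. 3. 4.]
```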
Computes the number of terms in the place value expansion.
def _base_expansion_size(num, bases): """Computes the number of terms in the place value expansion. Let num = a0 + a1 b + a2 b^2 + ... ak b^k be the place value expansion of `num` in base b (ak != 0). This function computes and returns `k+1` for each base `b` specified in `bases`. This can be inferred from the base `b` logarithm of `num` as follows: $$k + 1 = Floor(log_b(num)) + 1 = Floor(log(num) / log(b)) + 1$$ Args: num: Scalar `Tensor` of dtype either `float32` or `float64`. The number to compute the base expansion size of. bases: `Tensor` of the same dtype as num. The bases to compute the size against. Returns: Tensor of same dtype and shape as `bases` containing the size of num when written in that base. """ return tf.floor(tf.math.log(num) / tf.math.log(bases)) + 1
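A quick NumPy check of the digit-count formula above, with illustrative values:

```python
import numpy as np

num = 100.
bases = np.array([2., 3., 10.])
# k + 1 = floor(log(num) / log(base)) + 1
print(np.floor(np.log(num) / np.log(bases)) + 1)
# [7. 5. 3.]: 100 has 7 binary digits, 5 ternary digits and 3 decimal digits.
```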
Returns sorted array of primes such that 2 <= prime < n.
def _primes_less_than(n): # Based on # https://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n-in-python/3035188#3035188 """Returns sorted array of primes such that `2 <= prime < n`.""" small_primes = np.array((2, 3, 5)) if n <= 6: return small_primes[small_primes < n] sieve = np.ones(n // 3 + (n % 6 == 2), dtype=bool) sieve[0] = False m = int(n ** 0.5) // 3 + 1 for i in range(m): if not sieve[i]: continue k = 3 * i + 1 | 1 sieve[k ** 2 // 3::2 * k] = False sieve[(k ** 2 + 4 * k - 2 * k * (i & 1)) // 3::2 * k] = False return np.r_[2, 3, 3 * np.nonzero(sieve)[0] + 1 | 1]
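Quick sanity check of the sieve, assuming the function above is in scope:

```python
print(_primes_less_than(20))  # [ 2  3  5  7 11 13 17 19]
print(_primes_less_than(2))   # [] -- no primes strictly below 2
```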
Returns the machine epsilon for the supplied dtype.
def _machine_eps(dtype): """Returns the machine epsilon for the supplied dtype.""" if isinstance(dtype, tf.DType): dtype = dtype.as_numpy_dtype() return np.finfo(dtype).eps
The Hager Zhang line search algorithm.
def hager_zhang(value_and_gradients_function, initial_step_size=None, value_at_initial_step=None, value_at_zero=None, converged=None, threshold_use_approximate_wolfe_condition=1e-6, shrinkage_param=0.66, expansion_param=5.0, sufficient_decrease_param=0.1, curvature_param=0.9, step_size_shrink_param=0.1, max_iterations=50, name=None): """The Hager Zhang line search algorithm. Performs an inexact line search based on the algorithm of [Hager and Zhang (2006)][2]. The univariate objective function `value_and_gradients_function` is typically generated by projecting a multivariate objective function along a search direction. Suppose the multivariate function to be minimized is `g(x1,x2, .. xn)`. Let (d1, d2, ..., dn) be the direction along which we wish to perform a line search. Then the projected univariate function to be used for line search is ```None f(a) = g(x1 + d1 * a, x2 + d2 * a, ..., xn + dn * a) ``` The directional derivative along (d1, d2, ..., dn) is needed for this procedure. This also corresponds to the derivative of the projected function `f(a)` with respect to `a`. Note that this derivative must be negative for `a = 0` if the direction is a descent direction. The usual stopping criteria for the line search is the satisfaction of the (weak) Wolfe conditions. For details of the Wolfe conditions, see ref. [3]. On a finite precision machine, the exact Wolfe conditions can be difficult to satisfy when one is very close to the minimum and as argued by [Hager and Zhang (2005)][1], one can only expect the minimum to be determined within square root of machine precision. To improve the situation, they propose to replace the Wolfe conditions with an approximate version depending on the derivative of the function which is applied only when one is very close to the minimum. The following algorithm implements this enhanced scheme. ### Usage: Primary use of line search methods is as an internal component of a class of optimization algorithms (called line search based methods as opposed to trust region methods). Hence, the end user will typically not want to access line search directly. In particular, inexact line search should not be confused with a univariate minimization method. The stopping criteria of line search is the satisfaction of Wolfe conditions and not the discovery of the minimum of the function. With this caveat in mind, the following example illustrates the standalone usage of the line search. ```python # Define value and gradient namedtuple ValueAndGradient = namedtuple('ValueAndGradient', ['x', 'f', 'df']) # Define a quadratic target with minimum at 1.3. def value_and_gradients_function(x): return ValueAndGradient(x=x, f=(x - 1.3) ** 2, df=2 * (x-1.3)) # Set initial step size. step_size = tf.constant(0.1) ls_result = tfp.optimizer.linesearch.hager_zhang( value_and_gradients_function, initial_step_size=step_size) # Evaluate the results. with tf.Session() as session: results = session.run(ls_result) # Ensure convergence. assert results.converged # If the line search converged, the left and the right ends of the # bracketing interval are identical. assert results.left.x == result.right.x # Print the number of evaluations and the final step size. print ("Final Step Size: %f, Evaluations: %d" % (results.left.x, results.func_evals)) ``` ### References: [1]: William Hager, Hongchao Zhang. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim., Vol 16. 1, pp. 170-172. 2005. 
https://www.math.lsu.edu/~hozhang/papers/cg_descent.pdf [2]: William Hager, Hongchao Zhang. Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Transactions on Mathematical Software, Vol 32., 1, pp. 113-137. 2006. http://users.clas.ufl.edu/hager/papers/CG/cg_compare.pdf [3]: Jorge Nocedal, Stephen Wright. Numerical Optimization. Springer Series in Operations Research. pp 33-36. 2006 Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that correspond to scalar tensors of real dtype containing the point at which the function was evaluated, the value of the function, and its derivative at that point. The other namedtuple fields, if present, should be tensors or sequences (possibly nested) of tensors. In usual optimization application, this function would be generated by projecting the multivariate objective function along some specific direction. The direction is determined by some other procedure but should be a descent direction (i.e. the derivative of the projected univariate function must be negative at 0.). Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned namedtuple should each be a tensor of shape [n], with the corresponding input points, function values, and derivatives at those input points. initial_step_size: (Optional) Scalar positive `Tensor` of real dtype, or a tensor of shape [n] in batching mode. The initial value (or values) to try to bracket the minimum. Default is `1.` as a float32. Note that this point need not necessarily bracket the minimum for the line search to work correctly but the supplied value must be greater than 0. A good initial value will make the search converge faster. value_at_initial_step: (Optional) The full return value of evaluating value_and_gradients_function at initial_step_size, i.e. a namedtuple with 'x', 'f', 'df', if already known by the caller. If supplied the value of `initial_step_size` will be ignored, otherwise the tuple will be computed by evaluating value_and_gradients_function. value_at_zero: (Optional) The full return value of value_and_gradients_function at `0.`, i.e. a namedtuple with 'x', 'f', 'df', if already known by the caller. If not supplied the tuple will be computed by evaluating value_and_gradients_function. converged: (Optional) In batching mode a tensor of shape [n], indicating batch members which have already converged and no further search should be performed. These batch members are also reported as converged in the output, and both their `left` and `right` are set to the `value_at_initial_step`. threshold_use_approximate_wolfe_condition: Scalar positive `Tensor` of real dtype. Corresponds to the parameter 'epsilon' in [Hager and Zhang (2006)][2]. Used to estimate the threshold at which the line search switches to approximate Wolfe conditions. shrinkage_param: Scalar positive Tensor of real dtype. Must be less than `1.`. Corresponds to the parameter `gamma` in [Hager and Zhang (2006)][2]. If the secant**2 step does not shrink the bracketing interval by this proportion, a bisection step is performed to reduce the interval width. expansion_param: Scalar positive `Tensor` of real dtype. Must be greater than `1.`. 
Used to expand the initial interval in case it does not bracket a minimum. Corresponds to `rho` in [Hager and Zhang (2006)][2]. sufficient_decrease_param: Positive scalar `Tensor` of real dtype. Bounded above by the curvature param. Corresponds to `delta` in the terminology of [Hager and Zhang (2006)][2]. curvature_param: Positive scalar `Tensor` of real dtype. Bounded above by `1.`. Corresponds to 'sigma' in the terminology of [Hager and Zhang (2006)][2]. step_size_shrink_param: Positive scalar `Tensor` of real dtype. Bounded above by `1`. If the supplied step size is too big (i.e. either the objective value or the gradient at that point is infinite), this factor is used to shrink the step size until it is finite. max_iterations: Positive scalar `Tensor` of integral dtype or None. The maximum number of iterations to perform in the line search. The number of iterations used to bracket the minimum are also counted against this parameter. name: (Optional) Python str. The name prefixed to the ops created by this function. If not supplied, the default name 'hager_zhang' is used. Returns: results: A namedtuple containing the following attributes. converged: Boolean `Tensor` of shape [n]. Whether a point satisfying Wolfe/Approx wolfe was found. failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g. if either the objective function or the gradient are not finite at an evaluation point. iterations: Scalar int32 `Tensor`. Number of line search iterations made. func_evals: Scalar int32 `Tensor`. Number of function evaluations made. left: A namedtuple, as returned by value_and_gradients_function, of the left end point of the final bracketing interval. Values are equal to those of `right` on batch members where converged is True. Otherwise, it corresponds to the last interval computed. right: A namedtuple, as returned by value_and_gradients_function, of the right end point of the final bracketing interval. Values are equal to those of `left` on batch members where converged is True. Otherwise, it corresponds to the last interval computed. """ with tf.compat.v1.name_scope(name, 'hager_zhang', [ initial_step_size, value_at_initial_step, value_at_zero, converged, threshold_use_approximate_wolfe_condition, shrinkage_param, expansion_param, sufficient_decrease_param, curvature_param]): val_0, val_initial, f_lim, prepare_evals = _prepare_args( value_and_gradients_function, initial_step_size, value_at_initial_step, value_at_zero, threshold_use_approximate_wolfe_condition) valid_inputs = (hzl.is_finite(val_0) & (val_0.df < 0) & tf.math.is_finite(val_initial.x) & (val_initial.x > 0)) if converged is None: init_converged = tf.zeros_like(valid_inputs) # i.e. all false. else: init_converged = tf.convert_to_tensor(value=converged) failed = ~init_converged & ~valid_inputs active = ~init_converged & valid_inputs # Note: _fix_step_size returns immediately if either all inputs are invalid # or none of the active ones need fixing. 
fix_step_evals, val_c, fix_failed = _fix_step_size( value_and_gradients_function, val_initial, active, step_size_shrink_param) init_interval = HagerZhangLineSearchResult( converged=init_converged, failed=failed | fix_failed, func_evals=prepare_evals + fix_step_evals, iterations=tf.convert_to_tensor(value=0), left=val_0, right=hzl.val_where(init_converged, val_0, val_c)) def _apply_bracket_and_search(): """Bracketing and searching to do for valid inputs.""" return _bracket_and_search( value_and_gradients_function, init_interval, f_lim, max_iterations, shrinkage_param, expansion_param, sufficient_decrease_param, curvature_param) init_active = ~init_interval.failed & ~init_interval.converged return prefer_static.cond( tf.reduce_any(input_tensor=init_active), _apply_bracket_and_search, lambda: init_interval)
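The (weak) Wolfe conditions that drive the stopping criterion can be spelled out in a few lines. The following stand-alone sketch uses hypothetical numbers and the default `sufficient_decrease_param` (delta) and `curvature_param` (sigma) from above; it illustrates the test and is not library code.

```python
def weak_wolfe(f0, df0, f_a, df_a, a, delta=0.1, sigma=0.9):
    # f0, df0: value and derivative at a = 0; f_a, df_a: at the candidate step a.
    sufficient_decrease = f_a <= f0 + delta * a * df0  # Armijo-style decrease
    weak_curvature = df_a >= sigma * df0               # curvature condition
    return sufficient_decrease and weak_curvature

# For f(a) = (a - 1.3)**2 (the docstring's example) at the candidate a = 1.0:
a = 1.0
print(weak_wolfe(f0=1.69, df0=-2.6, f_a=(a - 1.3) ** 2, df_a=2 * (a - 1.3), a=a))
# True: this step already satisfies the weak Wolfe conditions for that function.
```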
Shrinks the input step size until the value and grad become finite.
def _fix_step_size(value_and_gradients_function, val_c_input, active, step_size_shrink_param): """Shrinks the input step size until the value and grad become finite.""" # The maximum iterations permitted are determined as the number of halvings # it takes to reduce 1 to 0 in the given dtype. iter_max = np.ceil(-np.log2(_machine_eps(val_c_input.x.dtype))) def _cond(i, val_c, to_fix): del val_c # Unused. return (i < iter_max) & tf.reduce_any(input_tensor=to_fix) def _body(i, val_c, to_fix): next_c = tf.where(to_fix, val_c.x * step_size_shrink_param, val_c.x) next_val_c = value_and_gradients_function(next_c) still_to_fix = to_fix & ~hzl.is_finite(next_val_c) return (i + 1, next_val_c, still_to_fix) to_fix = active & ~hzl.is_finite(val_c_input) return tf.while_loop( cond=_cond, body=_body, loop_vars=(0, val_c_input, to_fix))
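A quick check of the iteration bound used above, i.e. the number of halvings needed to bring 1 below machine epsilon for each supported dtype:

```python
import numpy as np

for dt in (np.float32, np.float64):
    print(dt.__name__, int(np.ceil(-np.log2(np.finfo(dt).eps))))
# float32 23
# float64 52
```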
Brackets the minimum and performs a line search.
def _bracket_and_search( value_and_gradients_function, init_interval, f_lim, max_iterations, shrinkage_param, expansion_param, sufficient_decrease_param, curvature_param): """Brackets the minimum and performs a line search. Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that correspond to scalar tensors of real dtype containing the point at which the function was evaluated, the value of the function, and its derivative at that point. The other namedtuple fields, if present, should be tensors or sequences (possibly nested) of tensors. In usual optimization application, this function would be generated by projecting the multivariate objective function along some specific direction. The direction is determined by some other procedure but should be a descent direction (i.e. the derivative of the projected univariate function must be negative at 0.). Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned namedtuple should each be a tensor of shape [n], with the corresponding input points, function values, and derivatives at those input points. init_interval: Instance of `HagerZhangLineSearchResults` containing the initial line search interval. The gradient of init_interval.left must be negative (i.e. must be a descent direction), while init_interval.right must be positive and finite. f_lim: Scalar `Tensor` of float dtype. max_iterations: Positive scalar `Tensor` of integral dtype. The maximum number of iterations to perform in the line search. The number of iterations used to bracket the minimum are also counted against this parameter. shrinkage_param: Scalar positive Tensor of real dtype. Must be less than `1.`. Corresponds to the parameter `gamma` in [Hager and Zhang (2006)][2]. expansion_param: Scalar positive `Tensor` of real dtype. Must be greater than `1.`. Used to expand the initial interval in case it does not bracket a minimum. Corresponds to `rho` in [Hager and Zhang (2006)][2]. sufficient_decrease_param: Positive scalar `Tensor` of real dtype. Bounded above by the curvature param. Corresponds to `delta` in the terminology of [Hager and Zhang (2006)][2]. curvature_param: Positive scalar `Tensor` of real dtype. Bounded above by `1.`. Corresponds to 'sigma' in the terminology of [Hager and Zhang (2006)][2]. Returns: A namedtuple containing the following fields. converged: Boolean `Tensor` of shape [n]. Whether a point satisfying Wolfe/Approx wolfe was found. failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g. if either the objective function or the gradient are not finite at an evaluation point. iterations: Scalar int32 `Tensor`. Number of line search iterations made. func_evals: Scalar int32 `Tensor`. Number of function evaluations made. left: A namedtuple, as returned by value_and_gradients_function, of the left end point of the updated bracketing interval. right: A namedtuple, as returned by value_and_gradients_function, of the right end point of the updated bracketing interval. 
""" bracket_result = hzl.bracket(value_and_gradients_function, init_interval, f_lim, max_iterations, expansion_param) converged = init_interval.converged | _very_close( bracket_result.left.x, bracket_result.right.x) # We fail if we have not yet converged but already exhausted all iterations. exhausted_iterations = ~converged & tf.greater_equal( bracket_result.iteration, max_iterations) line_search_args = HagerZhangLineSearchResult( converged=converged, failed=bracket_result.failed | exhausted_iterations, iterations=bracket_result.iteration, func_evals=bracket_result.num_evals, left=bracket_result.left, right=bracket_result.right) return _line_search_after_bracketing( value_and_gradients_function, line_search_args, init_interval.left, f_lim, max_iterations, sufficient_decrease_param, curvature_param, shrinkage_param)
The main loop of line search after the minimum has been bracketed.
def _line_search_after_bracketing( value_and_gradients_function, search_interval, val_0, f_lim, max_iterations, sufficient_decrease_param, curvature_param, shrinkage_param): """The main loop of line search after the minimum has been bracketed. Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that correspond to scalar tensors of real dtype containing the point at which the function was evaluated, the value of the function, and its derivative at that point. The other namedtuple fields, if present, should be tensors or sequences (possibly nested) of tensors. In usual optimization application, this function would be generated by projecting the multivariate objective function along some specific direction. The direction is determined by some other procedure but should be a descent direction (i.e. the derivative of the projected univariate function must be negative at 0.). Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned namedtuple should each be a tensor of shape [n], with the corresponding input points, function values, and derivatives at those input points. search_interval: Instance of `HagerZhangLineSearchResults` containing the current line search interval. val_0: A namedtuple as returned by value_and_gradients_function evaluated at `0.`. The gradient must be negative (i.e. must be a descent direction). f_lim: Scalar `Tensor` of float dtype. max_iterations: Positive scalar `Tensor` of integral dtype. The maximum number of iterations to perform in the line search. The number of iterations used to bracket the minimum are also counted against this parameter. sufficient_decrease_param: Positive scalar `Tensor` of real dtype. Bounded above by the curvature param. Corresponds to `delta` in the terminology of [Hager and Zhang (2006)][2]. curvature_param: Positive scalar `Tensor` of real dtype. Bounded above by `1.`. Corresponds to 'sigma' in the terminology of [Hager and Zhang (2006)][2]. shrinkage_param: Scalar positive Tensor of real dtype. Must be less than `1.`. Corresponds to the parameter `gamma` in [Hager and Zhang (2006)][2]. Returns: A namedtuple containing the following fields. converged: Boolean `Tensor` of shape [n]. Whether a point satisfying Wolfe/Approx wolfe was found. failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g. if either the objective function or the gradient are not finite at an evaluation point. iterations: Scalar int32 `Tensor`. Number of line search iterations made. func_evals: Scalar int32 `Tensor`. Number of function evaluations made. left: A namedtuple, as returned by value_and_gradients_function, of the left end point of the updated bracketing interval. right: A namedtuple, as returned by value_and_gradients_function, of the right end point of the updated bracketing interval. 
""" def _loop_cond(curr_interval): """Loop condition.""" active = ~(curr_interval.converged | curr_interval.failed) return (curr_interval.iterations < max_iterations) & tf.reduce_any(input_tensor=active) def _loop_body(curr_interval): """The loop body.""" secant2_raw_result = hzl.secant2( value_and_gradients_function, val_0, curr_interval, f_lim, sufficient_decrease_param, curvature_param) secant2_result = HagerZhangLineSearchResult( converged=secant2_raw_result.converged, failed=secant2_raw_result.failed, iterations=curr_interval.iterations + 1, func_evals=secant2_raw_result.num_evals, left=secant2_raw_result.left, right=secant2_raw_result.right) should_check_shrinkage = ~(secant2_result.converged | secant2_result.failed) def _do_check_shrinkage(): """Check if interval has shrinked enough.""" old_width = curr_interval.right.x - curr_interval.left.x new_width = secant2_result.right.x - secant2_result.left.x sufficient_shrinkage = new_width < old_width * shrinkage_param func_is_flat = ( _very_close(curr_interval.left.f, curr_interval.right.f) & _very_close(secant2_result.left.f, secant2_result.right.f)) new_converged = ( should_check_shrinkage & sufficient_shrinkage & func_is_flat) needs_inner_bisect = should_check_shrinkage & ~sufficient_shrinkage inner_bisect_args = secant2_result._replace( converged=secant2_result.converged | new_converged) def _apply_inner_bisect(): return _line_search_inner_bisection( value_and_gradients_function, inner_bisect_args, needs_inner_bisect, f_lim) return prefer_static.cond( tf.reduce_any(input_tensor=needs_inner_bisect), _apply_inner_bisect, lambda: inner_bisect_args) next_args = prefer_static.cond( tf.reduce_any(input_tensor=should_check_shrinkage), _do_check_shrinkage, lambda: secant2_result) interval_shrunk = ( ~next_args.failed & _very_close(next_args.left.x, next_args.right.x)) return [next_args._replace(converged=next_args.converged | interval_shrunk)] return tf.while_loop( cond=_loop_cond, body=_loop_body, loop_vars=[search_interval], parallel_iterations=1)[0]
Performs bisection and updates the interval.
def _line_search_inner_bisection( value_and_gradients_function, search_interval, active, f_lim): """Performs bisection and updates the interval.""" midpoint = (search_interval.left.x + search_interval.right.x) / 2 val_mid = value_and_gradients_function(midpoint) is_valid_mid = hzl.is_finite(val_mid) still_active = active & is_valid_mid new_failed = active & ~is_valid_mid next_interval = search_interval._replace( failed=search_interval.failed | new_failed, func_evals=search_interval.func_evals + 1) def _apply_update(): update_result = hzl.update( value_and_gradients_function, next_interval.left, next_interval.right, val_mid, f_lim, active=still_active) return HagerZhangLineSearchResult( converged=next_interval.converged, failed=next_interval.failed | update_result.failed, iterations=next_interval.iterations + update_result.iteration, func_evals=next_interval.func_evals + update_result.num_evals, left=update_result.left, right=update_result.right) return prefer_static.cond( tf.reduce_any(input_tensor=still_active), _apply_update, lambda: next_interval)
Prepares the arguments for the line search initialization.
def _prepare_args(value_and_gradients_function, initial_step_size, val_initial, val_0, approximate_wolfe_threshold): """Prepares the arguments for the line search initialization. Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that correspond to scalar tensors of real dtype containing the point at which the function was evaluated, the value of the function, and its derivative at that point. The other namedtuple fields, if present, should be tensors or sequences (possibly nested) of tensors. In usual optimization application, this function would be generated by projecting the multivariate objective function along some specific direction. The direction is determined by some other procedure but should be a descent direction (i.e. the derivative of the projected univariate function must be negative at 0.). Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned namedtuple should each be a tensor of shape [n], with the corresponding input points, function values, and derivatives at those input points. initial_step_size: Scalar positive `Tensor` of real dtype, or a tensor of shape [n] in batching mode. The initial value (or values) to try to bracket the minimum. Default is `1.` as a float32. Note that this point need not necessarily bracket the minimum for the line search to work correctly but the supplied value must be greater than 0. A good initial value will make the search converge faster. val_initial: The full return value of evaluating value_and_gradients_function at initial_step_size, i.e. a namedtuple with 'x', 'f', 'df', if already known by the caller. If not None the value of `initial_step_size` will be ignored, otherwise the tuple will be computed by evaluating value_and_gradients_function. val_0: The full return value of value_and_gradients_function at `0.`, i.e. a namedtuple with 'x', 'f', 'df', if already known by the caller. If None the tuple will be computed by evaluating value_and_gradients_function. approximate_wolfe_threshold: Scalar positive `Tensor` of real dtype. Corresponds to the parameter 'epsilon' in [Hager and Zhang (2006)][2]. Used to estimate the threshold at which the line search switches to approximate Wolfe conditions. Returns: left: A namedtuple, as returned by value_and_gradients_function, containing the value and derivative of the function at `0.`. val_initial: A namedtuple, as returned by value_and_gradients_function, containing the value and derivative of the function at `initial_step_size`. f_lim: Real `Tensor` of shape [n]. The function value threshold for the approximate Wolfe conditions to be checked. eval_count: Scalar int32 `Tensor`. The number of target function evaluations made by this function. 
""" eval_count = 0 if val_initial is None: if initial_step_size is not None: initial_step_size = tf.convert_to_tensor(value=initial_step_size) else: initial_step_size = tf.convert_to_tensor(value=1.0, dtype=tf.float32) val_initial = value_and_gradients_function(initial_step_size) eval_count += 1 if val_0 is None: x_0 = tf.zeros_like(val_initial.x) val_0 = value_and_gradients_function(x_0) eval_count += 1 f_lim = val_0.f + (approximate_wolfe_threshold * tf.abs(val_0.f)) return val_0, val_initial, f_lim, tf.convert_to_tensor(value=eval_count)
Converts a bool tensor to a string with True/False values.
def _to_str(x): """Converts a bool tensor to a string with True/False values.""" x = tf.convert_to_tensor(value=x) if x.dtype == tf.bool: return tf.where(x, tf.fill(x.shape, 'True'), tf.fill(x.shape, 'False')) return x
Wrapper for tf.Print which supports lists and namedtuples for printing.
def _print(pass_through_tensor, values): """Wrapper for tf.Print which supports lists and namedtuples for printing.""" flat_values = [] for value in values: # Checks if it is a namedtuple. if hasattr(value, '_fields'): for field in value._fields: flat_values.extend([field, _to_str(getattr(value, field))]) continue if isinstance(value, (list, tuple)): for v in value: flat_values.append(_to_str(v)) continue flat_values.append(_to_str(value)) return tf.compat.v1.Print(pass_through_tensor, flat_values)
Batched KL divergence KL(a || b) for multivariate Normals.
def _kl_brute_force(a, b, name=None): """Batched KL divergence `KL(a || b)` for multivariate Normals. With `X`, `Y` both multivariate Normals in `R^k` with means `mu_a`, `mu_b` and covariance `C_a`, `C_b` respectively, ``` KL(a || b) = 0.5 * ( L - k + T + Q ), L := Log[Det(C_b)] - Log[Det(C_a)] T := trace(C_b^{-1} C_a), Q := (mu_b - mu_a)^T C_b^{-1} (mu_b - mu_a), ``` This `Op` computes the trace by solving `C_b^{-1} C_a`. Although efficient methods for solving systems with `C_b` may be available, a dense version of (the square root of) `C_a` is used, so performance is `O(B s k**2)` where `B` is the batch size, and `s` is the cost of solving `C_b x = y` for vectors `x` and `y`. Args: a: Instance of `MultivariateNormalLinearOperator`. b: Instance of `MultivariateNormalLinearOperator`. name: (optional) name to use for created ops. Default "kl_mvn". Returns: Batchwise `KL(a || b)`. """ def squared_frobenius_norm(x): """Helper to make KL calculation slightly more readable.""" # http://mathworld.wolfram.com/FrobeniusNorm.html # The gradient of KL[p,q] is not defined when p==q. The culprit is # tf.norm, i.e., we cannot use the commented out code. # return tf.square(tf.norm(x, ord="fro", axis=[-2, -1])) return tf.reduce_sum(input_tensor=tf.square(x), axis=[-2, -1]) # TODO(b/35041439): See also b/35040945. Remove this function once LinOp # supports something like: # A.inverse().solve(B).norm(order='fro', axis=[-1, -2]) def is_diagonal(x): """Helper to identify if `LinearOperator` has only a diagonal component.""" return (isinstance(x, tf.linalg.LinearOperatorIdentity) or isinstance(x, tf.linalg.LinearOperatorScaledIdentity) or isinstance(x, tf.linalg.LinearOperatorDiag)) with tf.name_scope(name or "kl_mvn"): # Calculation is based on: # http://stats.stackexchange.com/questions/60680/kl-divergence-between-two-multivariate-gaussians # and, # https://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm # i.e., # If Ca = AA', Cb = BB', then # tr[inv(Cb) Ca] = tr[inv(B)' inv(B) A A'] # = tr[inv(B) A A' inv(B)'] # = tr[(inv(B) A) (inv(B) A)'] # = sum_{ij} (inv(B) A)_{ij}**2 # = ||inv(B) A||_F**2 # where ||.||_F is the Frobenius norm and the second equality follows from # the cyclic permutation property. if is_diagonal(a.scale) and is_diagonal(b.scale): # Using `stddev` because it handles expansion of Identity cases. b_inv_a = (a.stddev() / b.stddev())[..., tf.newaxis] else: b_inv_a = b.scale.solve(a.scale.to_dense()) kl_div = ( b.scale.log_abs_determinant() - a.scale.log_abs_determinant() + 0.5 * (-tf.cast(a.scale.domain_dimension_tensor(), a.dtype) + squared_frobenius_norm(b_inv_a) + squared_frobenius_norm( b.scale.solve((b.mean() - a.mean())[..., tf.newaxis])))) tensorshape_util.set_shape( kl_div, tf.broadcast_static_shape(a.batch_shape, b.batch_shape)) return kl_div
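A plain NumPy sketch of the same closed-form KL, following the L/T/Q decomposition in the docstring. It uses dense inverses on arbitrary illustrative covariances, so it mirrors the formula rather than the LinearOperator-based code path above.

```python
import numpy as np

def kl_mvn(mu_a, cov_a, mu_b, cov_b):
    k = mu_a.shape[-1]
    cov_b_inv = np.linalg.inv(cov_b)
    log_det = np.log(np.linalg.det(cov_b)) - np.log(np.linalg.det(cov_a))  # L
    trace = np.trace(cov_b_inv @ cov_a)                                    # T
    diff = mu_b - mu_a
    quad = diff @ cov_b_inv @ diff                                         # Q
    return 0.5 * (log_det - k + trace + quad)

mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.ones(2), 2. * np.eye(2)
print(kl_mvn(mu_a, cov_a, mu_b, cov_b))  # ~0.693
```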
Use Gauss-Hermite quadrature to form quadrature on K - 1 simplex.
def quadrature_scheme_softmaxnormal_gauss_hermite( normal_loc, normal_scale, quadrature_size, validate_args=False, name=None): """Use Gauss-Hermite quadrature to form quadrature on `K - 1` simplex. A `SoftmaxNormal` random variable `Y` may be generated via ``` Y = SoftmaxCentered(X), X = Normal(normal_loc, normal_scale) ``` Note: for a given `quadrature_size`, this method is generally less accurate than `quadrature_scheme_softmaxnormal_quantiles`. Args: normal_loc: `float`-like `Tensor` with shape `[b1, ..., bB, K-1]`, B>=0. The location parameter of the Normal used to construct the SoftmaxNormal. normal_scale: `float`-like `Tensor`. Broadcastable with `normal_loc`. The scale parameter of the Normal used to construct the SoftmaxNormal. quadrature_size: Python `int` scalar representing the number of quadrature points. validate_args: Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. name: Python `str` name prefixed to Ops created by this class. Returns: grid: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the convex combination of affine parameters for `K` components. `grid[..., :, n]` is the `n`-th grid point, living in the `K - 1` simplex. probs: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the associated with each grid point. """ with tf.name_scope( name or "quadrature_scheme_softmaxnormal_gauss_hermite"): normal_loc = tf.convert_to_tensor(value=normal_loc, name="normal_loc") npdt = dtype_util.as_numpy_dtype(normal_loc.dtype) normal_scale = tf.convert_to_tensor( value=normal_scale, dtype=npdt, name="normal_scale") normal_scale = maybe_check_quadrature_param( normal_scale, "normal_scale", validate_args) grid, probs = np.polynomial.hermite.hermgauss(deg=quadrature_size) grid = grid.astype(npdt) probs = probs.astype(npdt) probs /= np.linalg.norm(probs, ord=1, keepdims=True) probs = tf.convert_to_tensor(value=probs, name="probs", dtype=npdt) grid = softmax( -distribution_util.pad( (normal_loc[..., tf.newaxis] + np.sqrt(2.) * normal_scale[..., tf.newaxis] * grid), axis=-2, front=True), axis=-2) # shape: [B, components, deg] return grid, probs
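A small NumPy sketch of the Gauss-Hermite building blocks used above, with scalar loc/scale and illustrative values: `hermgauss` targets the weight exp(-x**2), so samples of Normal(loc, scale) are loc + sqrt(2) * scale * node, and the weights are renormalized to sum to one.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(deg=5)
weights = weights / np.linalg.norm(weights, ord=1)   # normalize to sum to 1
loc, scale = 0.5, 2.0
samples = loc + np.sqrt(2.) * scale * nodes
# Quadrature estimate of E[X**2] for X ~ Normal(loc, scale): loc**2 + scale**2.
print(np.sum(weights * samples ** 2))  # 4.25
```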
Use SoftmaxNormal quantiles to form quadrature on K - 1 simplex.
def quadrature_scheme_softmaxnormal_quantiles( normal_loc, normal_scale, quadrature_size, validate_args=False, name=None): """Use SoftmaxNormal quantiles to form quadrature on `K - 1` simplex. A `SoftmaxNormal` random variable `Y` may be generated via ``` Y = SoftmaxCentered(X), X = Normal(normal_loc, normal_scale) ``` Args: normal_loc: `float`-like `Tensor` with shape `[b1, ..., bB, K-1]`, B>=0. The location parameter of the Normal used to construct the SoftmaxNormal. normal_scale: `float`-like `Tensor`. Broadcastable with `normal_loc`. The scale parameter of the Normal used to construct the SoftmaxNormal. quadrature_size: Python `int` scalar representing the number of quadrature points. validate_args: Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. name: Python `str` name prefixed to Ops created by this class. Returns: grid: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the convex combination of affine parameters for `K` components. `grid[..., :, n]` is the `n`-th grid point, living in the `K - 1` simplex. probs: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the associated with each grid point. """ with tf.name_scope(name or "softmax_normal_grid_and_probs"): normal_loc = tf.convert_to_tensor(value=normal_loc, name="normal_loc") dt = dtype_util.base_dtype(normal_loc.dtype) normal_scale = tf.convert_to_tensor( value=normal_scale, dtype=dt, name="normal_scale") normal_scale = maybe_check_quadrature_param( normal_scale, "normal_scale", validate_args) dist = normal.Normal(loc=normal_loc, scale=normal_scale) def _get_batch_ndims(): """Helper to get rank(dist.batch_shape), statically if possible.""" ndims = tensorshape_util.rank(dist.batch_shape) if ndims is None: ndims = tf.shape(input=dist.batch_shape_tensor())[0] return ndims batch_ndims = _get_batch_ndims() def _get_final_shape(qs): """Helper to build `TensorShape`.""" bs = tensorshape_util.with_rank_at_least(dist.batch_shape, 1) num_components = tf.compat.dimension_value(bs[-1]) if num_components is not None: num_components += 1 tail = tf.TensorShape([num_components, qs]) return bs[:-1].concatenate(tail) def _compute_quantiles(): """Helper to build quantiles.""" # Omit {0, 1} since they might lead to Inf/NaN. zero = tf.zeros([], dtype=dist.dtype) edges = tf.linspace(zero, 1., quadrature_size + 3)[1:-1] # Expand edges so its broadcast across batch dims. edges = tf.reshape( edges, shape=tf.concat( [[-1], tf.ones([batch_ndims], dtype=tf.int32)], axis=0)) quantiles = dist.quantile(edges) quantiles = softmax_centered_bijector.SoftmaxCentered().forward(quantiles) # Cyclically permute left by one. perm = tf.concat([tf.range(1, 1 + batch_ndims), [0]], axis=0) quantiles = tf.transpose(a=quantiles, perm=perm) tensorshape_util.set_shape( quantiles, _get_final_shape(quadrature_size + 1)) return quantiles quantiles = _compute_quantiles() # Compute grid as quantile midpoints. grid = (quantiles[..., :-1] + quantiles[..., 1:]) / 2. # Set shape hints. tensorshape_util.set_shape(grid, _get_final_shape(quadrature_size)) # By construction probs is constant, i.e., `1 / quadrature_size`. This is # important, because non-constant probs leads to non-reparameterizable # samples. probs = tf.fill( dims=[quadrature_size], value=1. / tf.cast(quadrature_size, dist.dtype)) return grid, probs
Helper which checks validity of loc and scale init args.
def maybe_check_quadrature_param(param, name, validate_args): """Helper which checks validity of `loc` and `scale` init args.""" with tf.name_scope("check_" + name): assertions = [] if tensorshape_util.rank(param.shape) is not None: if tensorshape_util.rank(param.shape) == 0: raise ValueError("Mixing params must be a (batch of) vector; " "{}.rank={} is not at least one.".format( name, tensorshape_util.rank(param.shape))) elif validate_args: assertions.append( assert_util.assert_rank_at_least( param, 1, message=("Mixing params must be a (batch of) vector; " "{}.rank is not at least one.".format(name)))) # TODO(jvdillon): Remove once we support k-mixtures. if tensorshape_util.with_rank_at_least(param.shape, 1)[-1] is not None: if tf.compat.dimension_value(param.shape[-1]) != 1: raise NotImplementedError("Currently only bimixtures are supported; " "{}.shape[-1]={} is not 1.".format( name, tf.compat.dimension_value( param.shape[-1]))) elif validate_args: assertions.append( assert_util.assert_equal( tf.shape(input=param)[-1], 1, message=("Currently only bimixtures are supported; " "{}.shape[-1] is not 1.".format(name)))) if assertions: return distribution_util.with_dependencies(assertions, param) return param
Helper to infer batch_shape and event_shape.
def determine_batch_event_shapes(grid, endpoint_affine): """Helper to infer batch_shape and event_shape.""" with tf.name_scope("determine_batch_event_shapes"): # grid # shape: [B, k, q] # endpoint_affine # len=k, shape: [B, d, d] batch_shape = grid.shape[:-2] batch_shape_tensor = tf.shape(input=grid)[:-2] event_shape = None event_shape_tensor = None def _set_event_shape(shape, shape_tensor): if event_shape is None: return shape, shape_tensor return (tf.broadcast_static_shape(event_shape, shape), tf.broadcast_dynamic_shape(event_shape_tensor, shape_tensor)) for aff in endpoint_affine: if aff.shift is not None: batch_shape = tf.broadcast_static_shape(batch_shape, aff.shift.shape[:-1]) batch_shape_tensor = tf.broadcast_dynamic_shape( batch_shape_tensor, tf.shape(input=aff.shift)[:-1]) event_shape, event_shape_tensor = _set_event_shape( aff.shift.shape[-1:], tf.shape(input=aff.shift)[-1:]) if aff.scale is not None: batch_shape = tf.broadcast_static_shape(batch_shape, aff.scale.batch_shape) batch_shape_tensor = tf.broadcast_dynamic_shape( batch_shape_tensor, aff.scale.batch_shape_tensor()) event_shape, event_shape_tensor = _set_event_shape( tf.TensorShape([aff.scale.range_dimension]), aff.scale.range_dimension_tensor()[tf.newaxis]) return batch_shape, batch_shape_tensor, event_shape, event_shape_tensor
Helper which interpolates between two locs.
def interpolate_loc(grid, loc): """Helper which interpolates between two locs.""" if len(loc) != 2: raise NotImplementedError("Currently only bimixtures are supported; " "len(scale)={} is not 2.".format(len(loc))) deg = tf.compat.dimension_value( tensorshape_util.with_rank_at_least(grid.shape, 1)[-1]) if deg is None: raise ValueError("Num quadrature grid points must be known prior " "to graph execution.") with tf.name_scope("interpolate_loc"): if loc is None or loc[0] is None and loc[1] is None: return [None]*deg # shape: [B, 1, k, deg] w = grid[..., tf.newaxis, :, :] loc = [ x[..., tf.newaxis] # shape: [B, e, 1] if x is not None else None for x in loc ] if loc[0] is None: x = w[..., 1, :] * loc[1] # shape: [B, e, deg] elif loc[1] is None: x = w[..., 0, :] * loc[0] # shape: [B, e, deg] else: delta = loc[0] - loc[1] x = w[..., 0, :] * delta + loc[1] # shape: [B, e, deg] return [x[..., k] for k in range(deg)]
Helper which interpolates between two scales.
def interpolate_scale(grid, scale): """Helper which interpolates between two scales.""" if len(scale) != 2: raise NotImplementedError("Currently only bimixtures are supported; " "len(scale)={} is not 2.".format(len(scale))) deg = tf.compat.dimension_value( tensorshape_util.with_rank_at_least(grid.shape, 1)[-1]) if deg is None: raise ValueError("Num quadrature grid points must be known prior " "to graph execution.") with tf.name_scope("interpolate_scale"): return [linop_add_lib.add_operators([ linop_scale(grid[..., k, q], s) for k, s in enumerate(scale) ])[0] for q in range(deg)]
Creates weighted LinOp from existing LinOp.
def linop_scale(w, op): """Creates weighted `LinOp` from existing `LinOp`.""" # We assume w > 0. (This assumption only relates to the is_* attributes.) with tf.name_scope("linop_scale"): # TODO(b/35301104): LinearOperatorComposition doesn't combine operators, so # special case combinations here. Once it does, this function can be # replaced by: # return linop_composition_lib.LinearOperatorComposition([ # scaled_identity(w), op]) def scaled_identity(w): return tf.linalg.LinearOperatorScaledIdentity( num_rows=op.range_dimension_tensor(), multiplier=w, is_non_singular=op.is_non_singular, is_self_adjoint=op.is_self_adjoint, is_positive_definite=op.is_positive_definite) if isinstance(op, tf.linalg.LinearOperatorIdentity): return scaled_identity(w) if isinstance(op, tf.linalg.LinearOperatorScaledIdentity): return scaled_identity(w * op.multiplier) if isinstance(op, tf.linalg.LinearOperatorDiag): return tf.linalg.LinearOperatorDiag( diag=w[..., tf.newaxis] * op.diag_part(), is_non_singular=op.is_non_singular, is_self_adjoint=op.is_self_adjoint, is_positive_definite=op.is_positive_definite) if isinstance(op, tf.linalg.LinearOperatorLowerTriangular): return tf.linalg.LinearOperatorLowerTriangular( tril=w[..., tf.newaxis, tf.newaxis] * op.to_dense(), is_non_singular=op.is_non_singular, is_self_adjoint=op.is_self_adjoint, is_positive_definite=op.is_positive_definite) raise NotImplementedError( "Unsupported Linop type ({})".format(type(op).__name__))
Concatenates input vectors statically if possible.
def concat_vectors(*args): """Concatenates input vectors, statically if possible.""" args_ = [tf.get_static_value(x) for x in args] if any(vec is None for vec in args_): return tf.concat(args, axis=0) return [val for vec in args_ for val in vec]
Equivalent to tf.nn.softmax but works around b/70297725.
def softmax(x, axis, name=None): """Equivalent to tf.nn.softmax but works around b/70297725.""" with tf.name_scope(name or "softmax"): x = tf.convert_to_tensor(value=x, name="x") ndims = ( tensorshape_util.rank(x.shape) if tensorshape_util.rank(x.shape) is not None else tf.rank( x, name="ndims")) axis = tf.convert_to_tensor(value=axis, dtype=tf.int32, name="axis") axis_ = tf.get_static_value(axis) if axis_ is not None: axis = np.int(ndims + axis_ if axis_ < 0 else axis_) else: axis = tf.where(axis < 0, ndims + axis, axis) return tf.nn.softmax(x, axis=axis)
Ensures self.distribution.mean() has [batch, event] shape.
def _expand_base_distribution_mean(self): """Ensures `self.distribution.mean()` has `[batch, event]` shape.""" single_draw_shape = concat_vectors(self.batch_shape_tensor(), self.event_shape_tensor()) m = tf.reshape( self.distribution.mean(), # A scalar. shape=tf.ones_like(single_draw_shape, dtype=tf.int32)) m = tf.tile(m, multiples=single_draw_shape) tensorshape_util.set_shape( m, tensorshape_util.concatenate(self.batch_shape, self.event_shape)) return m
Multiply tensor of vectors by matrices assuming values stored are logs.
def _log_vector_matrix(vs, ms): """Multiply tensor of vectors by matrices assuming values stored are logs.""" return tf.reduce_logsumexp(input_tensor=vs[..., tf.newaxis] + ms, axis=-2)
Multiply tensor of matrices by vectors assuming values stored are logs.
def _log_matrix_vector(ms, vs): """Multiply tensor of matrices by vectors assuming values stored are logs.""" return tf.reduce_logsumexp(input_tensor=ms + vs[..., tf.newaxis, :], axis=-1)
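A NumPy check, with illustrative values, that the log-space contractions above reduce to ordinary probability-space products after exponentiating:

```python
import numpy as np

def log_vector_matrix(vs, ms):
    # Same contraction as above; logaddexp.reduce is a stable log-sum-exp.
    return np.logaddexp.reduce(vs[..., np.newaxis] + ms, axis=-2)

v = np.log([0.3, 0.7])
m = np.log([[0.9, 0.1],
            [0.2, 0.8]])
print(np.exp(log_vector_matrix(v, m)))  # [0.41 0.59]
print(np.exp(v) @ np.exp(m))            # same product computed directly
```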
Multiply tensor of vectors by matrices.
def _vector_matrix(vs, ms): """Multiply tensor of vectors by matrices.""" return tf.reduce_sum(input_tensor=vs[..., tf.newaxis] * ms, axis=-2)
Tabulate log probabilities from a batch of distributions.
def _extract_log_probs(num_states, dist): """Tabulate log probabilities from a batch of distributions.""" states = tf.reshape(tf.range(num_states), tf.concat([[num_states], tf.ones_like(dist.batch_shape_tensor())], axis=0)) return distribution_util.move_dimension(dist.log_prob(states), 0, -1)
Compute marginal pdf for each individual observable.
def _marginal_hidden_probs(self): """Compute marginal pdf for each individual observable.""" initial_log_probs = tf.broadcast_to(self._log_init, tf.concat([self.batch_shape_tensor(), [self._num_states]], axis=0)) # initial_log_probs :: batch_shape num_states if self._num_steps > 1: transition_log_probs = self._log_trans def forward_step(log_probs, _): return _log_vector_matrix(log_probs, transition_log_probs) dummy_index = tf.zeros(self._num_steps - 1, dtype=tf.float32) forward_log_probs = tf.scan(forward_step, dummy_index, initializer=initial_log_probs, name="forward_log_probs") forward_log_probs = tf.concat([[initial_log_probs], forward_log_probs], axis=0) else: forward_log_probs = initial_log_probs[tf.newaxis, ...] # returns :: num_steps batch_shape num_states return tf.exp(forward_log_probs)
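A toy NumPy illustration of the forward recursion above: the marginal distribution over hidden states at each step is obtained by repeatedly multiplying the previous marginal by the transition matrix (the numbers are the weather example used later in this file).

```python
import numpy as np

init = np.array([0.8, 0.2])        # P(z[0])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])     # P(z[t+1] | z[t])
marginals = [init]
for _ in range(3):                 # num_steps = 4
    marginals.append(marginals[-1] @ trans)
print(np.stack(marginals))         # shape [num_steps, num_states]
```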
Compute marginal posterior distribution for each state.
def posterior_marginals(self, observations, name=None): """Compute marginal posterior distribution for each state. This function computes, for each time step, the marginal conditional probability that the hidden Markov model was in each possible state given the observations that were made at each time step. So if the hidden states are `z[0],...,z[num_steps - 1]` and the observations are `x[0], ..., x[num_steps - 1]`, then this function computes `P(z[i] | x[0], ..., x[num_steps - 1])` for all `i` from `0` to `num_steps - 1`. This operation is sometimes called smoothing. It uses a form of the forward-backward algorithm. Note: the behavior of this function is undefined if the `observations` argument represents impossible observations from the model. Args: observations: A tensor representing a batch of observations made on the hidden Markov model. The rightmost dimension of this tensor gives the steps in a sequence of observations from a single sample from the hidden Markov model. The size of this dimension should match the `num_steps` parameter of the hidden Markov model object. The other dimensions are the dimensions of the batch and these are broadcast with the hidden Markov model's parameters. name: Python `str` name prefixed to Ops created by this class. Default value: "HiddenMarkovModel". Returns: posterior_marginal: A `Categorical` distribution object representing the marginal probability of the hidden Markov model being in each state at each step. The rightmost dimension of the `Categorical` distributions batch will equal the `num_steps` parameter providing one marginal distribution for each step. The other dimensions are the dimensions corresponding to the batch of observations. Raises: ValueError: if rightmost dimension of `observations` does not have size `num_steps`. 
""" with tf.name_scope(name or "posterior_marginals"): with tf.control_dependencies(self._runtime_assertions): observation_tensor_shape = tf.shape(input=observations) with self._observation_shape_preconditions(observation_tensor_shape): observation_batch_shape = observation_tensor_shape[ :-1 - self._underlying_event_rank] observation_event_shape = observation_tensor_shape[ -1 - self._underlying_event_rank:] batch_shape = tf.broadcast_dynamic_shape(observation_batch_shape, self.batch_shape_tensor()) log_init = tf.broadcast_to(self._log_init, tf.concat([batch_shape, [self._num_states]], axis=0)) log_transition = self._log_trans observations = tf.broadcast_to(observations, tf.concat([batch_shape, observation_event_shape], axis=0)) observation_rank = tf.rank(observations) underlying_event_rank = self._underlying_event_rank observations = distribution_util.move_dimension( observations, observation_rank - underlying_event_rank - 1, 0) observations = tf.expand_dims( observations, observation_rank - underlying_event_rank) observation_log_probs = self._observation_distribution.log_prob( observations) log_adjoint_prob = tf.zeros_like(log_init) def forward_step(log_previous_step, log_prob_observation): return _log_vector_matrix(log_previous_step, log_transition) + log_prob_observation log_prob = log_init + observation_log_probs[0] forward_log_probs = tf.scan(forward_step, observation_log_probs[1:], initializer=log_prob, name="forward_log_probs") forward_log_probs = tf.concat([[log_prob], forward_log_probs], axis=0) def backward_step(log_previous_step, log_prob_observation): return _log_matrix_vector(log_transition, log_prob_observation + log_previous_step) backward_log_adjoint_probs = tf.scan( backward_step, observation_log_probs[1:], initializer=log_adjoint_prob, reverse=True, name="backward_log_adjoint_probs") total_log_prob = tf.reduce_logsumexp( input_tensor=forward_log_probs[-1], axis=-1) backward_log_adjoint_probs = tf.concat([backward_log_adjoint_probs, [log_adjoint_prob]], axis=0) log_likelihoods = forward_log_probs + backward_log_adjoint_probs marginal_log_probs = distribution_util.move_dimension( log_likelihoods - total_log_prob[..., tf.newaxis], 0, -2) return categorical.Categorical(logits=marginal_log_probs)
Compute maximum likelihood sequence of hidden states.
def posterior_mode(self, observations, name=None): """Compute maximum likelihood sequence of hidden states. When this function is provided with a sequence of observations `x[0], ..., x[num_steps - 1]`, it returns the sequence of hidden states `z[0], ..., z[num_steps - 1]`, drawn from the underlying Markov chain, that is most likely to yield those observations. It uses the [Viterbi algorithm]( https://en.wikipedia.org/wiki/Viterbi_algorithm). Note: the behavior of this function is undefined if the `observations` argument represents impossible observations from the model. Note: if there isn't a unique most likely sequence then one of the equally most likely sequences is chosen. Args: observations: A tensor representing a batch of observations made on the hidden Markov model. The rightmost dimensions of this tensor correspond to the dimensions of the observation distributions of the underlying Markov chain. The next dimension from the right indexes the steps in a sequence of observations from a single sample from the hidden Markov model. The size of this dimension should match the `num_steps` parameter of the hidden Markov model object. The other dimensions are the dimensions of the batch and these are broadcast with the hidden Markov model's parameters. name: Python `str` name prefixed to Ops created by this class. Default value: "HiddenMarkovModel". Returns: posterior_mode: A `Tensor` representing the most likely sequence of hidden states. The rightmost dimension of this tensor will equal the `num_steps` parameter providing one hidden state for each step. The other dimensions are those of the batch. Raises: ValueError: if the `observations` tensor does not consist of sequences of `num_steps` observations. #### Examples ```python tfd = tfp.distributions # A simple weather model. # Represent a cold day with 0 and a hot day with 1. # Suppose the first day of a sequence has a 0.8 chance of being cold. initial_distribution = tfd.Categorical(probs=[0.8, 0.2]) # Suppose a cold day has a 30% chance of being followed by a hot day # and a hot day has a 20% chance of being followed by a cold day. transition_distribution = tfd.Categorical(probs=[[0.7, 0.3], [0.2, 0.8]]) # Suppose additionally that on each day the temperature is # normally distributed with mean and standard deviation 0 and 5 on # a cold day and mean and standard deviation 15 and 10 on a hot day. observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.]) # This gives the hidden Markov model: model = tfd.HiddenMarkovModel( initial_distribution=initial_distribution, transition_distribution=transition_distribution, observation_distribution=observation_distribution, num_steps=7) # Suppose we observe gradually rising temperatures over a week: temps = [-2., 0., 2., 4., 6., 8., 10.] # We can now compute the most probable sequence of hidden states: model.posterior_mode(temps) # The result is [0 0 0 0 0 1 1] telling us that the transition # from "cold" to "hot" most likely happened between the # 5th and 6th days. 
``` """ with tf.name_scope(name or "posterior_mode"): with tf.control_dependencies(self._runtime_assertions): observation_tensor_shape = tf.shape(input=observations) with self._observation_shape_preconditions(observation_tensor_shape): observation_batch_shape = observation_tensor_shape[ :-1 - self._underlying_event_rank] observation_event_shape = observation_tensor_shape[ -1 - self._underlying_event_rank:] batch_shape = tf.broadcast_dynamic_shape(observation_batch_shape, self.batch_shape_tensor()) log_init = tf.broadcast_to(self._log_init, tf.concat([batch_shape, [self._num_states]], axis=0)) observations = tf.broadcast_to(observations, tf.concat([batch_shape, observation_event_shape], axis=0)) observation_rank = tf.rank(observations) underlying_event_rank = self._underlying_event_rank observations = distribution_util.move_dimension( observations, observation_rank - underlying_event_rank - 1, 0) # We need to compute the probability of each observation for # each possible state. # This requires inserting an extra index just before the # observation event indices that will be broadcast with the # last batch index in `observation_distribution`. observations = tf.expand_dims( observations, observation_rank - underlying_event_rank) observation_log_probs = self._observation_distribution.log_prob( observations) log_prob = log_init + observation_log_probs[0] if self._num_steps == 1: most_likely_end = tf.argmax(input=log_prob, axis=-1) return most_likely_end[..., tf.newaxis] def forward_step(previous_step_pair, log_prob_observation): log_prob_previous = previous_step_pair[0] log_prob = (log_prob_previous[..., tf.newaxis] + self._log_trans + log_prob_observation[..., tf.newaxis, :]) most_likely_given_successor = tf.argmax(input=log_prob, axis=-2) max_log_p_given_successor = tf.reduce_max(input_tensor=log_prob, axis=-2) return (max_log_p_given_successor, most_likely_given_successor) forward_log_probs, all_most_likely_given_successor = tf.scan( forward_step, observation_log_probs[1:], initializer=(log_prob, tf.zeros(tf.shape(input=log_init), dtype=tf.int64)), name="forward_log_probs") most_likely_end = tf.argmax(input=forward_log_probs[-1], axis=-1) # We require the operation that gives C from A and B where # C[i...j] = A[i...j, B[i...j]] # and A = most_likely_given_successor # B = most_likely_successor. # tf.gather requires indices of known shape so instead we use # reduction with tf.one_hot(B) to pick out elements from B def backward_step(most_likely_successor, most_likely_given_successor): return tf.reduce_sum( input_tensor=(most_likely_given_successor * tf.one_hot(most_likely_successor, self._num_states, dtype=tf.int64)), axis=-1) backward_scan = tf.scan( backward_step, all_most_likely_given_successor, most_likely_end, reverse=True) most_likely_sequences = tf.concat([backward_scan, [most_likely_end]], axis=0) return distribution_util.move_dimension(most_likely_sequences, 0, -1)
Chooses a random direction in the event space.
def _choose_random_direction(current_state_parts, batch_rank, seed=None): """Chooses a random direction in the event space.""" seed_gen = distributions.SeedStream(seed, salt='_choose_random_direction') # Chooses the random directions across each of the input components. rnd_direction_parts = [ tf.random.normal( tf.shape(input=current_state_part), dtype=tf.float32, seed=seed_gen()) for current_state_part in current_state_parts ] # Sum squares over all of the input components. Note this takes all # components into account. sum_squares = sum( tf.reduce_sum( input_tensor=rnd_direction**2., axis=tf.range(batch_rank, tf.rank(rnd_direction)), keepdims=True) for rnd_direction in rnd_direction_parts) # Normalizes the random direction fragments. rnd_direction_parts = [rnd_direction / tf.sqrt(sum_squares) for rnd_direction in rnd_direction_parts] return rnd_direction_parts
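A hypothetical eager-mode sketch of the same idea, for a single flat state with no batch dimensions: sample isotropic normals and normalize, which yields a direction distributed uniformly on the unit sphere.

```python
import numpy as np

def random_unit_direction(dim, seed=None):
  """Uniform random direction on the unit sphere in `dim` dimensions."""
  rng = np.random.default_rng(seed)
  v = rng.normal(size=dim)          # isotropic Gaussian draw
  return v / np.linalg.norm(v)      # normalize onto the unit sphere

d = random_unit_direction(3, seed=42)
print(np.linalg.norm(d))  # ~1.0
```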
Applies a single iteration of slice sampling update.
def _sample_next(target_log_prob_fn, current_state_parts, step_sizes, max_doublings, current_target_log_prob, batch_rank, seed=None, name=None): """Applies a single iteration of slice sampling update. Applies hit and run style slice sampling. Chooses a uniform random direction on the unit sphere in the event space. Applies the one dimensional slice sampling update along that direction. Args: target_log_prob_fn: Python callable which takes an argument like `*current_state_parts` and returns its (possibly unnormalized) log-density under the target distribution. current_state_parts: Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `independent_chain_ndims` of the `Tensor`(s) index different chains. step_sizes: Python `list` of `Tensor`s. Provides a measure of the width of the density. Used to find the slice bounds. Must broadcast with the shape of `current_state_parts`. max_doublings: Integer number of doublings to allow while locating the slice boundaries. current_target_log_prob: `Tensor` representing the value of `target_log_prob_fn(*current_state_parts)`. The only reason to specify this argument is to reduce TF graph size. batch_rank: Integer. The number of axes in the state that correspond to independent batches. seed: Python integer to seed random number generators. name: Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., 'find_slice_bounds'). Returns: proposed_state_parts: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) at each result step. Has same shape as input `current_state_parts`. proposed_target_log_prob: `Tensor` representing the value of `target_log_prob_fn` at `next_state`. bounds_satisfied: Boolean `Tensor` of the same shape as the log density. True indicates whether the an interval containing the slice for that batch was found successfully. direction: `Tensor` or Python list of `Tensors`s representing the direction along which the slice was sampled. Has the same shape and dtype(s) as `current_state_parts`. upper_bounds: `Tensor` of batch shape and the dtype of the input state. The upper bounds of the slices along the sampling direction. lower_bounds: `Tensor` of batch shape and the dtype of the input state. The lower bounds of the slices along the sampling direction. """ with tf.compat.v1.name_scope(name, 'sample_next', [ current_state_parts, step_sizes, max_doublings, current_target_log_prob, batch_rank ]): # First step: Choose a random direction. # Direction is a list of tensors. The i'th tensor should have the same shape # as the i'th state part. direction = _choose_random_direction(current_state_parts, batch_rank=batch_rank, seed=seed) # Interpolates the step sizes for the chosen direction. # Applies an ellipsoidal interpolation to compute the step direction for # the chosen direction. Suppose we are given step sizes for each direction. # Label these s_1, s_2, ... s_k. These are the step sizes to use if moving # in a direction parallel to one of the axes. Consider an ellipsoid which # intercepts the i'th axis at s_i. The step size for a direction specified # by the unit vector (n_1, n_2 ...n_k) is then defined as the intersection # of the line through this vector with this ellipsoid. # # One can show that the length of the vector from the origin to the # intersection point is given by: # 1 / sqrt(n_1^2 / s_1^2 + n_2^2 / s_2^2 + ...). # # Proof: # The equation of the ellipsoid is: # Sum_i [x_i^2 / s_i^2 ] = 1. Let n be a unit direction vector. 
Points # along the line given by n may be parameterized as alpha*n where alpha is # the distance along the vector. Plugging this into the equation for the # ellipsoid, we get: # alpha^2 ( n_1^2 / s_1^2 + n_2^2 / s_2^2 + ...) = 1 # so alpha = \sqrt { \frac{1} { ( n_1^2 / s_1^2 + n_2^2 / s_2^2 + ...) } } reduce_axes = [tf.range(batch_rank, tf.rank(dirn_part)) for dirn_part in direction] components = [ tf.reduce_sum( input_tensor=(dirn_part / step_size)**2, axis=reduce_axes[i]) for i, (step_size, dirn_part) in enumerate(zip(step_sizes, direction)) ] step_size = tf.math.rsqrt(tf.add_n(components)) # Computes the rank of a tensor. Uses the static rank if possible. def _get_rank(x): return (len(x.shape.as_list()) if x.shape.dims is not None else tf.rank(x)) state_part_ranks = [_get_rank(part) for part in current_state_parts] def _step_along_direction(alpha): """Converts the scalar alpha into an n-dim vector with full state info. Computes x_0 + alpha * direction where x_0 is the current state and direction is the direction chosen above. Args: alpha: A tensor of shape equal to the batch dimensions of `current_state_parts`. Returns: state_parts: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) for a given alpha and a given chosen direction. Has the same shape as `current_state_parts`. """ padded_alphas = [_right_pad(alpha, final_rank=part_rank) for part_rank in state_part_ranks] state_parts = [state_part + padded_alpha * direction_part for state_part, direction_part, padded_alpha in zip(current_state_parts, direction, padded_alphas)] return state_parts def projected_target_log_prob_fn(alpha): """The target log density projected along the chosen direction. Args: alpha: A tensor of shape equal to the batch dimensions of `current_state_parts`. Returns: Target log density evaluated at x_0 + alpha * direction where x_0 is the current state and direction is the direction chosen above. Has the same shape as `alpha`. """ return target_log_prob_fn(*_step_along_direction(alpha)) alpha_init = tf.zeros_like(current_target_log_prob, dtype=current_state_parts[0].dtype.base_dtype) [ next_alpha, next_target_log_prob, bounds_satisfied, upper_bounds, lower_bounds ] = ssu.slice_sampler_one_dim(projected_target_log_prob_fn, x_initial=alpha_init, max_doublings=max_doublings, step_size=step_size, seed=seed) return [ _step_along_direction(next_alpha), next_target_log_prob, bounds_satisfied, direction, upper_bounds, lower_bounds ]
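The ellipsoidal interpolation derived in the comment above reduces to step_size = 1 / sqrt(sum_i n_i^2 / s_i^2) for a unit direction n. A small NumPy sketch under the assumption of a single flat state part (names are illustrative):

```python
import numpy as np

def ellipsoidal_step_size(direction, step_sizes):
  """Step size along `direction` for an ellipsoid with axis intercepts `step_sizes`."""
  direction = np.asarray(direction, dtype=float)
  step_sizes = np.asarray(step_sizes, dtype=float)
  direction = direction / np.linalg.norm(direction)   # unit vector n
  return 1.0 / np.sqrt(np.sum((direction / step_sizes) ** 2))

# Along a coordinate axis the step size reduces to that axis' step size.
assert np.isclose(ellipsoidal_step_size([1.0, 0.0], [0.5, 2.0]), 0.5)
```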
Helper which computes fn_result if needed.
def _maybe_call_fn(fn, fn_arg_list, fn_result=None, description='target_log_prob'): """Helper which computes `fn_result` if needed.""" fn_arg_list = (list(fn_arg_list) if mcmc_util.is_list_like(fn_arg_list) else [fn_arg_list]) if fn_result is None: fn_result = fn(*fn_arg_list) if not fn_result.dtype.is_floating: raise TypeError('`{}` must be a `Tensor` with `float` `dtype`.'.format( description)) return fn_result
Pads the shape of x to the right to be of rank final_rank.
def _right_pad(x, final_rank): """Pads the shape of x to the right to be of rank final_rank. Expands the dims of `x` to the right such that its rank is equal to final_rank. For example, if `x` is of shape [1, 5, 7, 2] and `final_rank` is 7, we return padded_x, which is of shape [1, 5, 7, 2, 1, 1, 1]. Args: x: The tensor whose shape is to be padded. final_rank: Scalar int32 `Tensor` or Python `int`. The desired rank of x. Returns: padded_x: A tensor of rank final_rank. """ padded_shape = tf.concat( [tf.shape(input=x), tf.ones(final_rank - tf.rank(x), dtype=tf.int32)], axis=0) static_padded_shape = None if x.shape.is_fully_defined() and isinstance(final_rank, int): static_padded_shape = x.shape.as_list() extra_dims = final_rank - len(static_padded_shape) static_padded_shape.extend([1] * extra_dims) padded_x = tf.reshape(x, static_padded_shape or padded_shape) return padded_x
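For intuition, the same right-padding can be done eagerly with NumPy; this is a sketch with illustrative names, not the graph-mode helper above:

```python
import numpy as np

def right_pad(x, final_rank):
  """Appends size-1 dimensions until `x` has rank `final_rank`."""
  extra_dims = final_rank - x.ndim
  return x.reshape(x.shape + (1,) * extra_dims)

x = np.zeros([1, 5, 7, 2])
assert right_pad(x, 7).shape == (1, 5, 7, 2, 1, 1, 1)
```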
Processes input args to meet list-like assumptions.
def _prepare_args(target_log_prob_fn, state, step_size, target_log_prob=None, maybe_expand=False, description='target_log_prob'): """Processes input args to meet list-like assumptions.""" state_parts = list(state) if mcmc_util.is_list_like(state) else [state] state_parts = [ tf.convert_to_tensor(value=s, name='current_state') for s in state_parts ] target_log_prob = _maybe_call_fn( target_log_prob_fn, state_parts, target_log_prob, description) step_sizes = (list(step_size) if mcmc_util.is_list_like(step_size) else [step_size]) step_sizes = [ tf.convert_to_tensor( value=s, name='step_size', dtype=target_log_prob.dtype) for s in step_sizes ] if len(step_sizes) == 1: step_sizes *= len(state_parts) if len(state_parts) != len(step_sizes): raise ValueError('There should be exactly one `step_size` or it should ' 'have same length as `current_state`.') def maybe_flatten(x): return x if maybe_expand or mcmc_util.is_list_like(state) else x[0] return [ maybe_flatten(state_parts), maybe_flatten(step_sizes), target_log_prob ]
Runs one iteration of Slice Sampler.
def one_step(self, current_state, previous_kernel_results): """Runs one iteration of Slice Sampler. Args: current_state: `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(target_log_prob_fn(*current_state))`. previous_kernel_results: `collections.namedtuple` containing `Tensor`s representing values from previous calls to this function (or from the `bootstrap_results` function.) Returns: next_state: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as `current_state`. kernel_results: `collections.namedtuple` of internal calculations used to advance the chain. Raises: ValueError: if there isn't one `step_size` or a list with same length as `current_state`. TypeError: if `not target_log_prob.dtype.is_floating`. """ with tf.compat.v1.name_scope( name=mcmc_util.make_name(self.name, 'slice', 'one_step'), values=[ self.step_size, self.max_doublings, self._seed_stream, current_state, previous_kernel_results.target_log_prob ]): with tf.compat.v1.name_scope('initialize'): [ current_state_parts, step_sizes, current_target_log_prob ] = _prepare_args( self.target_log_prob_fn, current_state, self.step_size, previous_kernel_results.target_log_prob, maybe_expand=True) max_doublings = tf.convert_to_tensor( value=self.max_doublings, dtype=tf.int32, name='max_doublings') independent_chain_ndims = distribution_util.prefer_static_rank( current_target_log_prob) [ next_state_parts, next_target_log_prob, bounds_satisfied, direction, upper_bounds, lower_bounds ] = _sample_next( self.target_log_prob_fn, current_state_parts, step_sizes, max_doublings, current_target_log_prob, independent_chain_ndims, seed=self._seed_stream() ) def maybe_flatten(x): return x if mcmc_util.is_list_like(current_state) else x[0] return [ maybe_flatten(next_state_parts), SliceSamplerKernelResults( target_log_prob=next_target_log_prob, bounds_satisfied=bounds_satisfied, direction=direction, upper_bounds=upper_bounds, lower_bounds=lower_bounds ), ]
Initialize from a uniform [-2, 2] distribution in unconstrained space.
def sample_uniform_initial_state(parameter, return_constrained=True, init_sample_shape=(), seed=None): """Initialize from a uniform [-2, 2] distribution in unconstrained space. Args: parameter: `sts.Parameter` named tuple instance. return_constrained: if `True`, re-applies the constraining bijector to return initializations in the original domain. Otherwise, returns initializations in the unconstrained space. Default value: `True`. init_sample_shape: `sample_shape` of the sampled initializations. Default value: `[]`. seed: Python integer to seed the random number generator. Returns: uniform_initializer: `Tensor` of shape `concat([init_sample_shape, parameter.prior.batch_shape, transformed_event_shape])`, where `transformed_event_shape` is `parameter.prior.event_shape`, if `return_constrained=True`, and otherwise it is `parameter.bijector.inverse_event_shape(parameteter.prior.event_shape)`. """ unconstrained_prior_sample = parameter.bijector.inverse( parameter.prior.sample(init_sample_shape, seed=seed)) uniform_initializer = 4 * tf.random.uniform( tf.shape(input=unconstrained_prior_sample), dtype=unconstrained_prior_sample.dtype, seed=seed) - 2 if return_constrained: return parameter.bijector.forward(uniform_initializer) else: return uniform_initializer
Build a transformed-normal variational distribution over a parameter's support.
def _build_trainable_posterior(param, initial_loc_fn): """Built a transformed-normal variational dist over a parameter's support.""" loc = tf.compat.v1.get_variable( param.name + '_loc', initializer=lambda: initial_loc_fn(param), dtype=param.prior.dtype, use_resource=True) scale = tf.nn.softplus( tf.compat.v1.get_variable( param.name + '_scale', initializer=lambda: -4 * tf.ones_like(initial_loc_fn(param)), dtype=param.prior.dtype, use_resource=True)) q = tfd.Normal(loc=loc, scale=scale) # Ensure the `event_shape` of the variational distribution matches the # parameter. if (param.prior.event_shape.ndims is None or param.prior.event_shape.ndims > 0): q = tfd.Independent( q, reinterpreted_batch_ndims=param.prior.event_shape.ndims) # Transform to constrained parameter space. return tfd.TransformedDistribution(q, param.bijector)
Build a loss function for variational inference in STS models.
def build_factored_variational_loss(model, observed_time_series, init_batch_shape=(), seed=None, name=None): """Build a loss function for variational inference in STS models. Variational inference searches for the distribution within some family of approximate posteriors that minimizes a divergence between the approximate posterior `q(z)` and true posterior `p(z|observed_time_series)`. By converting inference to optimization, it's generally much faster than sampling-based inference algorithms such as HMC. The tradeoff is that the approximating family rarely contains the true posterior, so it may miss important aspects of posterior structure (in particular, dependence between variables) and should not be blindly trusted. Results may vary; it's generally wise to compare to HMC to evaluate whether inference quality is sufficient for your task at hand. This method constructs a loss function for variational inference using the Kullback-Liebler divergence `KL[q(z) || p(z|observed_time_series)]`, with an approximating family given by independent Normal distributions transformed to the appropriate parameter space for each parameter. Minimizing this loss (the negative ELBO) maximizes a lower bound on the log model evidence `-log p(observed_time_series)`. This is equivalent to the 'mean-field' method implemented in [1]. and is a standard approach. The resulting posterior approximations are unimodal; they will tend to underestimate posterior uncertainty when the true posterior contains multiple modes (the `KL[q||p]` divergence encourages choosing a single mode) or dependence between variables. Args: model: An instance of `StructuralTimeSeries` representing a time-series model. This represents a joint distribution over time-series and their parameters with batch shape `[b1, ..., bN]`. observed_time_series: `float` `Tensor` of shape `concat([sample_shape, model.batch_shape, [num_timesteps, 1]]) where `sample_shape` corresponds to i.i.d. observations, and the trailing `[1]` dimension may (optionally) be omitted if `num_timesteps > 1`. May optionally be an instance of `tfp.sts.MaskedTimeSeries`, which includes a mask `Tensor` to specify timesteps with missing observations. init_batch_shape: Batch shape (Python `tuple`, `list`, or `int`) of initial states to optimize in parallel. Default value: `()`. (i.e., just run a single optimization). seed: Python integer to seed the random number generator. name: Python `str` name prefixed to ops created by this function. Default value: `None` (i.e., 'build_factored_variational_loss'). Returns: variational_loss: `float` `Tensor` of shape `concat([init_batch_shape, model.batch_shape])`, encoding a stochastic estimate of an upper bound on the negative model evidence `-log p(y)`. Minimizing this loss performs variational inference; the gap between the variational bound and the true (generally unknown) model evidence corresponds to the divergence `KL[q||p]` between the approximate and true posterior. variational_distributions: `collections.OrderedDict` giving the approximate posterior for each model parameter. The keys are Python `str` parameter names in order, corresponding to `[param.name for param in model.parameters]`. The values are `tfd.Distribution` instances with batch shape `concat([init_batch_shape, model.batch_shape])`; these will typically be of the form `tfd.TransformedDistribution(tfd.Normal(...), bijector=param.bijector)`. 
#### Examples Assume we've built a structural time-series model: ```python day_of_week = tfp.sts.Seasonal( num_seasons=7, observed_time_series=observed_time_series, name='day_of_week') local_linear_trend = tfp.sts.LocalLinearTrend( observed_time_series=observed_time_series, name='local_linear_trend') model = tfp.sts.Sum(components=[day_of_week, local_linear_trend], observed_time_series=observed_time_series) ``` To run variational inference, we simply construct the loss and optimize it: ```python (variational_loss, variational_distributions) = tfp.sts.build_factored_variational_loss( model=model, observed_time_series=observed_time_series) train_op = tf.train.AdamOptimizer(0.1).minimize(variational_loss) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for step in range(200): _, loss_ = sess.run((train_op, variational_loss)) if step % 20 == 0: print("step {} loss {}".format(step, loss_)) posterior_samples_ = sess.run({ param_name: q.sample(50) for param_name, q in variational_distributions.items()}) ``` As a more complex example, we might try to avoid local optima by optimizing from multiple initializations in parallel, and selecting the result with the lowest loss: ```python (variational_loss, variational_distributions) = tfp.sts.build_factored_variational_loss( model=model, observed_time_series=observed_time_series, init_batch_shape=[10]) train_op = tf.train.AdamOptimizer(0.1).minimize(variational_loss) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for step in range(200): _, loss_ = sess.run((train_op, variational_loss)) if step % 20 == 0: print("step {} losses {}".format(step, loss_)) # Draw multiple samples to reduce Monte Carlo error in the optimized # variational bounds. avg_loss = np.mean( [sess.run(variational_loss) for _ in range(25)], axis=0) best_posterior_idx = np.argmin(avg_loss, axis=0).astype(np.int32) ``` #### References [1]: Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M. Blei. Automatic Differentiation Variational Inference. In _Journal of Machine Learning Research_, 2017. https://arxiv.org/abs/1603.00788 """ with tf.compat.v1.name_scope( name, 'build_factored_variational_loss', values=[observed_time_series]) as name: seed = tfd.SeedStream( seed, salt='StructuralTimeSeries_build_factored_variational_loss') variational_distributions = collections.OrderedDict() variational_samples = [] for param in model.parameters: def initial_loc_fn(param): return sample_uniform_initial_state( param, return_constrained=True, init_sample_shape=init_batch_shape, seed=seed()) q = _build_trainable_posterior(param, initial_loc_fn=initial_loc_fn) variational_distributions[param.name] = q variational_samples.append(q.sample(seed=seed())) # Multiple initializations (similar to HMC chains) manifest as an extra # param batch dimension, so we need to add corresponding batch dimension(s) # to `observed_time_series`. observed_time_series = sts_util.pad_batch_dimension_for_multiple_chains( observed_time_series, model, chain_batch_shape=init_batch_shape) # Construct the variational bound. log_prob_fn = model.joint_log_prob(observed_time_series) expected_log_joint = log_prob_fn(*variational_samples) entropy = tf.reduce_sum( input_tensor=[ -q.log_prob(sample) for (q, sample) in zip( variational_distributions.values(), variational_samples) ], axis=0) variational_loss = -(expected_log_joint + entropy) # -ELBO return variational_loss, variational_distributions
Run an optimizer within the graph to minimize a loss function.
def _minimize_in_graph(build_loss_fn, num_steps=200, optimizer=None): """Run an optimizer within the graph to minimize a loss function.""" optimizer = tf.compat.v1.train.AdamOptimizer( 0.1) if optimizer is None else optimizer def train_loop_body(step): train_op = optimizer.minimize( build_loss_fn if tf.executing_eagerly() else build_loss_fn()) return tf.tuple(tensors=[tf.add(step, 1)], control_inputs=[train_op]) minimize_op = tf.compat.v1.while_loop( cond=lambda step: step < num_steps, body=train_loop_body, loop_vars=[tf.constant(0)], return_same_structure=True)[0] # Always return a single op. return minimize_op
Draw posterior samples using Hamiltonian Monte Carlo (HMC).
def fit_with_hmc(model, observed_time_series, num_results=100, num_warmup_steps=50, num_leapfrog_steps=15, initial_state=None, initial_step_size=None, chain_batch_shape=(), num_variational_steps=150, variational_optimizer=None, seed=None, name=None): """Draw posterior samples using Hamiltonian Monte Carlo (HMC). Markov chain Monte Carlo (MCMC) methods are considered the gold standard of Bayesian inference; under suitable conditions and in the limit of infinitely many draws they generate samples from the true posterior distribution. HMC [1] uses gradients of the model's log-density function to propose samples, allowing it to exploit posterior geometry. However, it is computationally more expensive than variational inference and relatively sensitive to tuning. This method attempts to provide a sensible default approach for fitting StructuralTimeSeries models using HMC. It first runs variational inference as a fast posterior approximation, and initializes the HMC sampler from the variational posterior, using the posterior standard deviations to set per-variable step sizes (equivalently, a diagonal mass matrix). During the warmup phase, it adapts the step size to target an acceptance rate of 0.75, which is thought to be in the desirable range for optimal mixing [2]. Args: model: An instance of `StructuralTimeSeries` representing a time-series model. This represents a joint distribution over time-series and their parameters with batch shape `[b1, ..., bN]`. observed_time_series: `float` `Tensor` of shape `concat([sample_shape, model.batch_shape, [num_timesteps, 1]]) where `sample_shape` corresponds to i.i.d. observations, and the trailing `[1]` dimension may (optionally) be omitted if `num_timesteps > 1`. May optionally be an instance of `tfp.sts.MaskedTimeSeries`, which includes a mask `Tensor` to specify timesteps with missing observations. num_results: Integer number of Markov chain draws. Default value: `100`. num_warmup_steps: Integer number of steps to take before starting to collect results. The warmup steps are also used to adapt the step size towards a target acceptance rate of 0.75. Default value: `50`. num_leapfrog_steps: Integer number of steps to run the leapfrog integrator for. Total progress per HMC step is roughly proportional to `step_size * num_leapfrog_steps`. Default value: `15`. initial_state: Optional Python `list` of `Tensor`s, one for each model parameter, representing the initial state(s) of the Markov chain(s). These should have shape `concat([chain_batch_shape, param.prior.batch_shape, param.prior.event_shape])`. If `None`, the initial state is set automatically using a sample from a variational posterior. Default value: `None`. initial_step_size: Python `list` of `Tensor`s, one for each model parameter, representing the step size for the leapfrog integrator. Must broadcast with the shape of `initial_state`. Larger step sizes lead to faster progress, but too-large step sizes make rejection exponentially more likely. If `None`, the step size is set automatically using the standard deviation of a variational posterior. Default value: `None`. chain_batch_shape: Batch shape (Python `tuple`, `list`, or `int`) of chains to run in parallel. Default value: `[]` (i.e., a single chain). num_variational_steps: Python `int` number of steps to run the variational optimization to determine the initial state and step sizes. Default value: `150`. variational_optimizer: Optional `tf.train.Optimizer` instance to use in the variational optimization. 
If `None`, defaults to `tf.train.AdamOptimizer(0.1)`. Default value: `None`. seed: Python integer to seed the random number generator. name: Python `str` name prefixed to ops created by this function. Default value: `None` (i.e., 'fit_with_hmc'). Returns: samples: Python `list` of `Tensors` representing posterior samples of model parameters, with shapes `[concat([[num_results], chain_batch_shape, param.prior.batch_shape, param.prior.event_shape]) for param in model.parameters]`. kernel_results: A (possibly nested) `tuple`, `namedtuple` or `list` of `Tensor`s representing internal calculations made within the HMC sampler. #### Examples Assume we've built a structural time-series model: ```python day_of_week = tfp.sts.Seasonal( num_seasons=7, observed_time_series=observed_time_series, name='day_of_week') local_linear_trend = tfp.sts.LocalLinearTrend( observed_time_series=observed_time_series, name='local_linear_trend') model = tfp.sts.Sum(components=[day_of_week, local_linear_trend], observed_time_series=observed_time_series) ``` To draw posterior samples using HMC under default settings: ```python samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) samples_, kernel_results_ = sess.run((samples, kernel_results)) print("acceptance rate: {}".format( np.mean(kernel_results_.inner_results.is_accepted, axis=0))) print("posterior means: {}".format( {param.name: np.mean(param_draws, axis=0) for (param, param_draws) in zip(model.parameters, samples_)})) ``` We can also run multiple chains. This may help diagnose convergence issues and allows us to exploit vectorization to draw samples more quickly, although warmup still requires the same number of sequential steps. ```python from matplotlib import pylab as plt samples, kernel_results = tfp.sts.fit_with_hmc( model, observed_time_series, chain_batch_shape=[10]) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) samples_, kernel_results_ = sess.run((samples, kernel_results)) print("acceptance rate: {}".format( np.mean(kernel_results_.inner_results.inner_results.is_accepted, axis=0))) # Plot the sampled traces for each parameter. If the chains have mixed, their # traces should all cover the same region of state space, frequently crossing # over each other. for (param, param_draws) in zip(model.parameters, samples_): if param.prior.event_shape.ndims > 0: print("Only plotting traces for scalar parameters, skipping {}".format( param.name)) continue plt.figure(figsize=[10, 4]) plt.title(param.name) plt.plot(param_draws) plt.ylabel(param.name) plt.xlabel("HMC step") # Combining the samples from multiple chains into a single dimension allows # us to easily pass sampled parameters to downstream forecasting methods. combined_samples_ = [np.reshape(param_draws, [-1] + list(param_draws.shape[2:])) for param_draws in samples_] ``` For greater flexibility, you may prefer to implement your own sampler using the TensorFlow Probability primitives in `tfp.mcmc`. The following recipe constructs a basic HMC sampler, using a `TransformedTransitionKernel` to incorporate constraints on the parameter space. 
```python transformed_hmc_kernel = mcmc.TransformedTransitionKernel( inner_kernel=mcmc.SimpleStepSizeAdaptation( inner_kernel=mcmc.HamiltonianMonteCarlo( target_log_prob_fn=model.joint_log_prob(observed_time_series), step_size=step_size, num_leapfrog_steps=num_leapfrog_steps, state_gradients_are_stopped=True, seed=seed), num_adaptation_steps = int(0.8 * num_warmup_steps)), bijector=[param.bijector for param in model.parameters]) # Initialize from a Uniform[-2, 2] distribution in unconstrained space. initial_state = [tfp.sts.sample_uniform_initial_state( param, return_constrained=True) for param in model.parameters] samples, kernel_results = tfp.mcmc.sample_chain( kernel=transformed_hmc_kernel, num_results=num_results, current_state=initial_state, num_burnin_steps=num_warmup_steps) ``` #### References [1]: Radford Neal. MCMC Using Hamiltonian Dynamics. _Handbook of Markov Chain Monte Carlo_, 2011. https://arxiv.org/abs/1206.1901 [2] M.J. Betancourt, Simon Byrne, and Mark Girolami. Optimizing The Integrator Step Size for Hamiltonian Monte Carlo. https://arxiv.org/abs/1411.6669 """ with tf.compat.v1.name_scope( name, 'fit_with_hmc', values=[observed_time_series]) as name: seed = tfd.SeedStream(seed, salt='StructuralTimeSeries_fit_with_hmc') # Initialize state and step sizes from a variational posterior if not # specified. if initial_step_size is None or initial_state is None: # To avoid threading variational distributions through the training # while loop, we build our own copy here. `make_template` ensures # that our variational distributions share the optimized parameters. def make_variational(): return build_factored_variational_loss( model, observed_time_series, init_batch_shape=chain_batch_shape, seed=seed()) make_variational = tf.compat.v1.make_template('make_variational', make_variational) _, variational_distributions = make_variational() minimize_op = _minimize_in_graph( build_loss_fn=lambda: make_variational()[0], # return just the loss. num_steps=num_variational_steps, optimizer=variational_optimizer) with tf.control_dependencies([minimize_op]): if initial_state is None: initial_state = [tf.stop_gradient(d.sample()) for d in variational_distributions.values()] # Set step sizes using the unconstrained variational distribution. if initial_step_size is None: initial_step_size = [ transformed_q.distribution.stddev() for transformed_q in variational_distributions.values()] # Multiple chains manifest as an extra param batch dimension, so we need to # add a corresponding batch dimension to `observed_time_series`. observed_time_series = sts_util.pad_batch_dimension_for_multiple_chains( observed_time_series, model, chain_batch_shape=chain_batch_shape) # Run HMC to sample from the posterior on parameters. samples, kernel_results = mcmc.sample_chain( num_results=num_results, current_state=initial_state, num_burnin_steps=num_warmup_steps, kernel=mcmc.SimpleStepSizeAdaptation( inner_kernel=mcmc.TransformedTransitionKernel( inner_kernel=mcmc.HamiltonianMonteCarlo( target_log_prob_fn=model.joint_log_prob( observed_time_series), step_size=initial_step_size, num_leapfrog_steps=num_leapfrog_steps, state_gradients_are_stopped=True, seed=seed()), bijector=[param.bijector for param in model.parameters]), num_adaptation_steps=int(num_warmup_steps * 0.8), adaptation_rate=tf.convert_to_tensor( value=0.1, dtype=initial_state[0].dtype)), parallel_iterations=1 if seed is not None else 10) return samples, kernel_results
Compute mean and variance accounting for a mask.
def moments_of_masked_time_series(time_series_tensor, broadcast_mask): """Compute mean and variance, accounting for a mask. Args: time_series_tensor: float `Tensor` time series of shape `concat([batch_shape, [num_timesteps]])`. broadcast_mask: bool `Tensor` of the same shape as `time_series`. Returns: mean: float `Tensor` of shape `batch_shape`. variance: float `Tensor` of shape `batch_shape`. """ num_unmasked_entries = tf.cast( tf.reduce_sum(input_tensor=tf.cast(~broadcast_mask, tf.int32), axis=-1), time_series_tensor.dtype) # Manually compute mean and variance, excluding masked entries. mean = (tf.reduce_sum(input_tensor=tf.where( broadcast_mask, tf.zeros_like(time_series_tensor), time_series_tensor), axis=-1) / num_unmasked_entries) variance = (tf.reduce_sum(input_tensor=tf.where( broadcast_mask, tf.zeros_like(time_series_tensor), (time_series_tensor - mean[..., tf.newaxis]) ** 2), axis=-1) / num_unmasked_entries) return mean, variance
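The masked mean and variance above follow the usual recipe of zeroing out masked entries and dividing by the unmasked count. A NumPy sanity-check sketch for a single series (illustrative only):

```python
import numpy as np

def masked_moments(series, mask):
  """Mean and variance over unmasked (mask == False) entries."""
  n = np.sum(~mask)
  mean = np.sum(np.where(mask, 0.0, series)) / n
  variance = np.sum(np.where(mask, 0.0, (series - mean) ** 2)) / n
  return mean, variance

series = np.array([1.0, 2.0, np.nan, 4.0])
mask = np.isnan(series)
print(masked_moments(series, mask))  # mean ~2.33, variance ~1.56
```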
Get the first unmasked entry of each time series in the batch.
def initial_value_of_masked_time_series(time_series_tensor, broadcast_mask): """Get the first unmasked entry of each time series in the batch. Args: time_series_tensor: float `Tensor` of shape [..., num_timesteps]. broadcast_mask: bool `Tensor` of same shape as `time_series`. """ num_timesteps = tf.shape(input=time_series_tensor)[-1] # Compute the index of the first unmasked entry for each series in the batch. unmasked_negindices = ( tf.cast(~broadcast_mask, tf.int32) * tf.range(num_timesteps, 0, -1)) first_unmasked_indices = num_timesteps - tf.reduce_max( input_tensor=unmasked_negindices, axis=-1) if first_unmasked_indices.shape.ndims is None: raise NotImplementedError( 'Cannot compute initial values of a masked time series with' 'dynamic rank.') # `batch_gather` requires static rank # Extract the initial value for each series in the batch. return tf.squeeze(tf.compat.v1.batch_gather( params=time_series_tensor, indices=first_unmasked_indices[..., tf.newaxis]), axis=-1)
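The trick used above, weighting unmasked positions by a descending ramp and taking the max to locate the earliest unmasked index, can be replayed in NumPy. A sketch with illustrative names:

```python
import numpy as np

def first_unmasked_value(series, mask):
  """Value at the first position where mask is False."""
  num_timesteps = series.shape[-1]
  # Unmasked entries get a weight counting down from num_timesteps; the
  # largest weight marks the earliest unmasked index.
  ramp = np.arange(num_timesteps, 0, -1)
  first_idx = num_timesteps - np.max(
      np.where(mask, 0, ramp), axis=-1, keepdims=True)
  return np.take_along_axis(series, first_idx, axis=-1)[..., 0]

series = np.array([0.0, 3.0, 5.0])
mask = np.array([True, False, False])
assert first_unmasked_value(series, mask) == 3.0
```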
Get broadcast batch shape from distributions statically if possible.
def broadcast_batch_shape(distributions): """Get broadcast batch shape from distributions, statically if possible.""" # Static case batch_shape = distributions[0].batch_shape for distribution in distributions: batch_shape = tf.broadcast_static_shape(batch_shape, distribution.batch_shape) if batch_shape.is_fully_defined(): return batch_shape.as_list() # Fallback on dynamic. batch_shape = distributions[0].batch_shape_tensor() for distribution in distributions: batch_shape = tf.broadcast_dynamic_shape(batch_shape, distribution.batch_shape_tensor()) return tf.convert_to_tensor(value=batch_shape)
Expand the observed time series with extra batch dimension(s).
def pad_batch_dimension_for_multiple_chains( observed_time_series, model, chain_batch_shape): """"Expand the observed time series with extra batch dimension(s).""" # Running with multiple chains introduces an extra batch dimension. In # general we also need to pad the observed time series with a matching batch # dimension. # # For example, suppose our model has batch shape [3, 4] and # the observed time series has shape `concat([[5], [3, 4], [100])`, # corresponding to `sample_shape`, `batch_shape`, and `num_timesteps` # respectively. The model will produce distributions with batch shape # `concat([chain_batch_shape, [3, 4]])`, so we pad `observed_time_series` to # have matching shape `[5, 1, 3, 4, 100]`, where the added `1` dimension # between the sample and batch shapes will broadcast to `chain_batch_shape`. [ # Extract mask and guarantee `event_ndims=2`. observed_time_series, is_missing ] = canonicalize_observed_time_series_with_mask(observed_time_series) event_ndims = 2 # event_shape = [num_timesteps, observation_size=1] model_batch_ndims = ( model.batch_shape.ndims if model.batch_shape.ndims is not None else tf.shape(input=model.batch_shape_tensor())[0]) # Compute ndims from chain_batch_shape. chain_batch_shape = tf.convert_to_tensor( value=chain_batch_shape, name='chain_batch_shape', dtype=tf.int32) if not chain_batch_shape.shape.is_fully_defined(): raise ValueError('Batch shape must have static rank. (given: {})'.format( chain_batch_shape)) if chain_batch_shape.shape.ndims == 0: # expand int `k` to `[k]`. chain_batch_shape = chain_batch_shape[tf.newaxis] chain_batch_ndims = tf.compat.dimension_value(chain_batch_shape.shape[0]) def do_padding(observed_time_series_tensor): current_sample_shape = tf.shape( input=observed_time_series_tensor)[:-(model_batch_ndims + event_ndims)] current_batch_and_event_shape = tf.shape( input=observed_time_series_tensor)[-(model_batch_ndims + event_ndims):] return tf.reshape( tensor=observed_time_series_tensor, shape=tf.concat([ current_sample_shape, tf.ones([chain_batch_ndims], dtype=tf.int32), current_batch_and_event_shape], axis=0)) # Padding is only needed if the observed time series has sample shape. observed_time_series = prefer_static.cond( (dist_util.prefer_static_rank(observed_time_series) > model_batch_ndims + event_ndims), lambda: do_padding(observed_time_series), lambda: observed_time_series) if is_missing is not None: is_missing = prefer_static.cond( (dist_util.prefer_static_rank(is_missing) > model_batch_ndims + event_ndims), lambda: do_padding(is_missing), lambda: is_missing) return missing_values_util.MaskedTimeSeries(observed_time_series, is_missing=is_missing) return observed_time_series
Combine MultivariateNormals into a factored joint distribution.
def factored_joint_mvn(distributions): """Combine MultivariateNormals into a factored joint distribution. Given a list of multivariate normal distributions `dist[i] = Normal(loc[i], scale[i])`, construct the joint distribution given by concatenating independent samples from these distributions. This is multivariate normal with mean vector given by the concatenation of the component mean vectors, and block-diagonal covariance matrix in which the blocks are the component covariances. Note that for computational efficiency, multivariate normals are represented by a 'scale' (factored covariance) linear operator rather than the full covariance matrix. Args: distributions: Python `iterable` of MultivariateNormal distribution instances (e.g., `tfd.MultivariateNormalDiag`, `tfd.MultivariateNormalTriL`, etc.). These must be broadcastable to a consistent batch shape, but may have different event shapes (i.e., defined over spaces of different dimension). Returns: joint_distribution: An instance of `tfd.MultivariateNormalLinearOperator` representing the joint distribution constructed by concatenating an independent sample from each input distributions. """ graph_parents = [tensor for distribution in distributions for tensor in distribution._graph_parents] # pylint: disable=protected-access with tf.compat.v1.name_scope('factored_joint_mvn', values=graph_parents): # We explicitly broadcast the `locs` so that we can concatenate them. # We don't have direct numerical access to the `scales`, which are arbitrary # linear operators, but `LinearOperatorBlockDiag` appears to do the right # thing without further intervention. dtype = tf.debugging.assert_same_float_dtype(distributions) broadcast_ones = tf.ones(broadcast_batch_shape(distributions), dtype=dtype)[..., tf.newaxis] return MultivariateNormalLinearOperator( loc=tf.concat([mvn.mean() * broadcast_ones for mvn in distributions], axis=-1), scale=tfl.LinearOperatorBlockDiag([mvn.scale for mvn in distributions], is_square=True))
Attempt to sum MultivariateNormal distributions.
def sum_mvns(distributions): """Attempt to sum MultivariateNormal distributions. The sum of (multivariate) normal random variables is itself (multivariate) normal, with mean given by the sum of means and (co)variance given by the sum of (co)variances. This method exploits this fact to compute the sum of a list of `tfd.MultivariateNormalDiag` objects. It may in the future be extended to support summation of other forms of (Multivariate)Normal distributions. Args: distributions: Python `iterable` of `tfd.MultivariateNormalDiag` distribution instances. These must all have the same event shape, and broadcast to a consistent batch shape. Returns: sum_distribution: A `tfd.MultivariateNormalDiag` instance with mean equal to the sum of input means and covariance equal to the sum of input covariances. """ graph_parents = [tensor for distribution in distributions for tensor in distribution._graph_parents] # pylint: disable=protected-access with tf.compat.v1.name_scope('sum_mvns', values=graph_parents): if all([isinstance(mvn, tfd.MultivariateNormalDiag) for mvn in distributions]): return tfd.MultivariateNormalDiag( loc=sum([mvn.mean() for mvn in distributions]), scale_diag=tf.sqrt(sum([ mvn.scale.diag**2 for mvn in distributions]))) else: raise NotImplementedError( 'Sums of distributions other than MultivariateNormalDiag are not ' 'currently implemented. (given: {})'.format(distributions))
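The closed form exploited here is elementwise: means add, and the summed scale diagonal is the square root of the sum of squared scale diagonals. A NumPy sketch of the diagonal case (illustrative names):

```python
import numpy as np

def sum_diag_mvns(locs, scale_diags):
  """Parameters of the sum of independent diagonal-covariance normals."""
  locs = np.asarray(locs)
  scale_diags = np.asarray(scale_diags)
  sum_loc = np.sum(locs, axis=0)                         # means add
  sum_scale_diag = np.sqrt(np.sum(scale_diags ** 2, axis=0))  # variances add
  return sum_loc, sum_scale_diag

loc, scale = sum_diag_mvns([[0.0, 1.0], [2.0, 3.0]],
                           [[1.0, 1.0], [1.0, 2.0]])
# loc == [2., 4.], scale == [sqrt(2), sqrt(5)]
```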
Compute statistics of a provided time series as heuristic initialization.
def empirical_statistics(observed_time_series): """Compute statistics of a provided time series, as heuristic initialization. Args: observed_time_series: `Tensor` representing a time series, or batch of time series, of shape either `batch_shape + [num_timesteps, 1]` or `batch_shape + [num_timesteps]` (allowed if `num_timesteps > 1`). Returns: observed_mean: `Tensor` of shape `batch_shape`, giving the empirical mean of each time series in the batch. observed_stddev: `Tensor` of shape `batch_shape`, giving the empirical standard deviation of each time series in the batch. observed_initial_centered: `Tensor of shape `batch_shape`, giving the initial value of each time series in the batch after centering (subtracting the mean). """ with tf.compat.v1.name_scope( 'empirical_statistics', values=[observed_time_series]): [ observed_time_series, mask ] = canonicalize_observed_time_series_with_mask(observed_time_series) squeezed_series = observed_time_series[..., 0] if mask is None: observed_mean, observed_variance = tf.nn.moments( x=squeezed_series, axes=-1) observed_initial = squeezed_series[..., 0] else: broadcast_mask = tf.broadcast_to(tf.cast(mask, tf.bool), tf.shape(input=squeezed_series)) observed_mean, observed_variance = ( missing_values_util.moments_of_masked_time_series( squeezed_series, broadcast_mask=broadcast_mask)) try: observed_initial = ( missing_values_util.initial_value_of_masked_time_series( squeezed_series, broadcast_mask=broadcast_mask)) except NotImplementedError: tf.compat.v1.logging.warn( 'Cannot compute initial values for a masked time series' 'with dynamic shape; using the mean instead. This will' 'affect heuristic priors and may change the results of' 'inference.') observed_initial = observed_mean observed_stddev = tf.sqrt(observed_variance) observed_initial_centered = observed_initial - observed_mean return observed_mean, observed_stddev, observed_initial_centered
Ensures observed_time_series_tensor has a trailing dimension of size 1.
def _maybe_expand_trailing_dim(observed_time_series_tensor): """Ensures `observed_time_series_tensor` has a trailing dimension of size 1. The `tfd.LinearGaussianStateSpaceModel` Distribution has event shape of `[num_timesteps, observation_size]`, but canonical BSTS models are univariate, so their observation_size is always `1`. The extra trailing dimension gets annoying, so this method allows arguments with or without the extra dimension. There is no ambiguity except in the trivial special case where `num_timesteps = 1`; this can be avoided by specifying any unit-length series in the explicit `[num_timesteps, 1]` style. Most users should not call this method directly, and instead call `canonicalize_observed_time_series_with_mask`, which handles converting to `Tensor` and specifying an optional missingness mask. Args: observed_time_series_tensor: `Tensor` of shape `batch_shape + [num_timesteps, 1]` or `batch_shape + [num_timesteps]`, where `num_timesteps > 1`. Returns: expanded_time_series: `Tensor` of shape `batch_shape + [num_timesteps, 1]`. """ with tf.compat.v1.name_scope( 'maybe_expand_trailing_dim', values=[observed_time_series_tensor]): if (observed_time_series_tensor.shape.ndims is not None and tf.compat.dimension_value( observed_time_series_tensor.shape[-1]) is not None): expanded_time_series = ( observed_time_series_tensor if observed_time_series_tensor.shape[-1] == 1 else observed_time_series_tensor[..., tf.newaxis]) else: expanded_time_series = tf.cond( pred=tf.equal(tf.shape(input=observed_time_series_tensor)[-1], 1), true_fn=lambda: observed_time_series_tensor, false_fn=lambda: observed_time_series_tensor[..., tf.newaxis]) return expanded_time_series
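With static shapes the logic reduces to a one-line check; a NumPy sketch (not the graph-mode helper above):

```python
import numpy as np

def maybe_expand_trailing_dim(series):
  """Appends a trailing size-1 dimension unless one is already present."""
  return series if series.shape[-1] == 1 else series[..., np.newaxis]

assert maybe_expand_trailing_dim(np.zeros([4, 100])).shape == (4, 100, 1)
assert maybe_expand_trailing_dim(np.zeros([4, 100, 1])).shape == (4, 100, 1)
```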
Extract a Tensor with canonical shape and optional mask.
def canonicalize_observed_time_series_with_mask( maybe_masked_observed_time_series): """Extract a Tensor with canonical shape and optional mask. Args: maybe_masked_observed_time_series: a `Tensor`-like object with shape `[..., num_timesteps]` or `[..., num_timesteps, 1]`, or a `tfp.sts.MaskedTimeSeries` containing such an object. Returns: masked_time_series: a `tfp.sts.MaskedTimeSeries` namedtuple, in which the `observed_time_series` is converted to `Tensor` with canonical shape `[..., num_timesteps, 1]`, and `is_missing` is either `None` or a boolean `Tensor`. """ with tf.compat.v1.name_scope('canonicalize_observed_time_series_with_mask'): if hasattr(maybe_masked_observed_time_series, 'is_missing'): observed_time_series = ( maybe_masked_observed_time_series.time_series) is_missing = maybe_masked_observed_time_series.is_missing else: observed_time_series = maybe_masked_observed_time_series is_missing = None observed_time_series = tf.convert_to_tensor(value=observed_time_series, name='observed_time_series') observed_time_series = _maybe_expand_trailing_dim(observed_time_series) if is_missing is not None: is_missing = tf.convert_to_tensor( value=is_missing, name='is_missing', dtype_hint=tf.bool) return missing_values_util.MaskedTimeSeries(observed_time_series, is_missing=is_missing)
Construct a predictive normal distribution that mixes over posterior draws.
def mix_over_posterior_draws(means, variances): """Construct a predictive normal distribution that mixes over posterior draws. Args: means: float `Tensor` of shape `[num_posterior_draws, ..., num_timesteps]`. variances: float `Tensor` of shape `[num_posterior_draws, ..., num_timesteps]`. Returns: mixture_dist: `tfd.MixtureSameFamily(tfd.Independent(tfd.Normal))` instance representing a uniform mixture over the posterior samples, with `batch_shape = ...` and `event_shape = [num_timesteps]`. """ # The inputs `means`, `variances` have shape # `concat([ # [num_posterior_draws], # sample_shape, # batch_shape, # [num_timesteps]])` # Because MixtureSameFamily mixes over the rightmost batch dimension, # we need to move the `num_posterior_draws` dimension to be rightmost # in the batch shape. This requires use of `Independent` (to preserve # `num_timesteps` as part of the event shape) and `move_dimension`. # TODO(b/120245392): enhance `MixtureSameFamily` to reduce along an # arbitrary axis, and eliminate `move_dimension` calls here. with tf.compat.v1.name_scope( 'mix_over_posterior_draws', values=[means, variances]): num_posterior_draws = dist_util.prefer_static_value( tf.shape(input=means))[0] component_observations = tfd.Independent( distribution=tfd.Normal( loc=dist_util.move_dimension(means, 0, -2), scale=tf.sqrt(dist_util.move_dimension(variances, 0, -2))), reinterpreted_batch_ndims=1) return tfd.MixtureSameFamily( mixture_distribution=tfd.Categorical( logits=tf.zeros([num_posterior_draws], dtype=component_observations.dtype)), components_distribution=component_observations)
Calculate the batched KL divergence KL(a || b) with a and b Uniform.
def _kl_uniform_uniform(a, b, name=None): """Calculate the batched KL divergence KL(a || b) with a and b Uniform. Note that the KL divergence is infinite if the support of `a` is not a subset of the support of `b`. Args: a: instance of a Uniform distribution object. b: instance of a Uniform distribution object. name: (optional) Name to use for created operations. default is "kl_uniform_uniform". Returns: Batchwise KL(a || b) """ with tf.name_scope(name or "kl_uniform_uniform"): # Consistent with # http://www.mast.queensu.ca/~communications/Papers/gil-msc11.pdf, page 60 # Watch out for the change in conventions--they use 'a' and 'b' to refer to # lower and upper bounds respectively there. final_batch_shape = distribution_util.get_broadcast_shape( a.low, b.low, a.high, b.high) dtype = dtype_util.common_dtype( [a.low, a.high, b.low, b.high], tf.float32) return tf.where((b.low <= a.low) & (a.high <= b.high), tf.math.log(b.high - b.low) - tf.math.log(a.high - a.low), tf.broadcast_to( dtype_util.as_numpy_dtype(dtype)(np.inf), final_batch_shape))
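The formula applied above has a short closed form: when the support of `a` is contained in that of `b`, KL(a || b) = log(b.high - b.low) - log(a.high - a.low), and it is infinite otherwise. A small scalar sketch (illustrative values):

```python
import numpy as np

def kl_uniform_uniform(a_low, a_high, b_low, b_high):
  """KL divergence between two scalar uniform distributions."""
  if b_low <= a_low and a_high <= b_high:
    return np.log(b_high - b_low) - np.log(a_high - a_low)
  return np.inf  # support of `a` is not contained in support of `b`

print(kl_uniform_uniform(0., 1., 0., 2.))  # log(2) ~= 0.693
print(kl_uniform_uniform(0., 3., 0., 2.))  # inf
```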
high - low.
def range(self, name="range"): """`high - low`.""" with self._name_scope(name): return self.high - self.low
Factory for making summary statistics, e.g., mean, mode, stddev.
def _make_summary_statistic(attr): """Factory for making summary statistics, eg, mean, mode, stddev.""" def _fn(self): if any(self._dist_fn_args): # pylint: disable=protected-access raise ValueError( 'Can only compute ' + attr + ' when all distributions are ' 'independent; {}'.format(self.model)) return self._unflatten(getattr(d(), attr)() for d in self._dist_fn_wrapped) # pylint: disable=protected-access return _fn
Creates dist_fn_wrapped which calls dist_fn with all prev nodes.
def _unify_call_signature(i, dist_fn): """Creates `dist_fn_wrapped` which calls `dist_fn` with all prev nodes. Args: i: Python `int` corresponding to position in topologically sorted DAG. dist_fn: Python `callable` which takes a subset of previously constructed distributions (in reverse order) and produces a new distribution instance. Returns: dist_fn_wrapped: Python `callable` which takes all previous distributions (in non reverse order) and produces a new distribution instance. args: `tuple` of `str` representing the arg names of `dist_fn` (and in non wrapped, "natural" order). `None` is returned only if the input is not a `callable`. """ if distribution_util.is_distribution_instance(dist_fn): return (lambda *_: dist_fn), None if not callable(dist_fn): raise TypeError('{} must be either `tfd.Distribution`-like or ' '`callable`.'.format(dist_fn)) args = _get_required_args(dist_fn) if not args: return (lambda *_: dist_fn()), () @functools.wraps(dist_fn) def dist_fn_wrapped(*xs): """Calls `dist_fn` with reversed and truncated args.""" if i != len(xs): raise ValueError( 'Internal Error: Unexpected number of inputs provided to {}-th ' 'distribution maker (dist_fn: {}, expected: {}, saw: {}).'.format( i, dist_fn, i, len(xs))) if len(xs) < len(args): raise ValueError( 'Internal Error: Too few inputs provided to {}-th distribution maker ' '(dist_fn: {}, expected: {}, saw: {}).'.format( i, dist_fn, len(args), len(xs))) return dist_fn(*reversed(xs[-len(args):])) return dist_fn_wrapped, args
Uses arg names to resolve distribution names.
def _resolve_distribution_names(dist_fn_args, dist_names, leaf_name): """Uses arg names to resolve distribution names.""" if dist_names is None: dist_names = [] else: dist_names = dist_names.copy() n = len(dist_fn_args) dist_names.extend([None]*(n - len(dist_names))) for i_, args in enumerate(reversed(dist_fn_args)): if not args: continue # There's no args to analyze. i = n - i_ - 1 for j, arg_name in enumerate(args): dist_names[i - j - 1] = arg_name j = 0 for i_ in range(len(dist_names)): i = n - i_ - 1 if dist_names[i] is None: dist_names[i] = leaf_name if j == 0 else leaf_name + str(j) j += 1 return tuple(dist_names)
Returns the distribution's required args.
def _get_required_args(fn): """Returns the distribution's required args.""" argspec = tf_inspect.getfullargspec(fn) args = argspec.args if tf_inspect.isclass(fn): args = args[1:] # Remove the `self` arg. if argspec.defaults: # Remove the args which have defaults. By convention we only feed # *required args*. This means some distributions must always be wrapped # with a `lambda`, e.g., `lambda logits: tfd.Bernoulli(logits=logits)` # or `lambda probs: tfd.Bernoulli(probs=probs)`. args = args[:-len(argspec.defaults)] return tuple(args)
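Outside TensorFlow, the same "required args" extraction can be written directly against the standard `inspect` module; the following sketch mirrors the logic above with illustrative names:

```python
import inspect

def get_required_args(fn):
  """Positional args of `fn` that have no default value."""
  spec = inspect.getfullargspec(fn)
  args = spec.args
  if inspect.isclass(fn):
    args = args[1:]                    # drop `self` for class constructors
  if spec.defaults:
    args = args[:-len(spec.defaults)]  # drop args that have defaults
  return tuple(args)

def make_normal(loc, scale, name='Normal'):
  pass

assert get_required_args(make_normal) == ('loc', 'scale')
```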
Calculate the KL divergence between two JointDistributionSequentials.
def _kl_joint_joint(d0, d1, name=None): """Calculate the KL divergence between two `JointDistributionSequential`s. Args: d0: instance of a `JointDistributionSequential` object. d1: instance of a `JointDistributionSequential` object. name: (optional) Name to use for created operations. Default value: `"kl_joint_joint"`. Returns: kl_joint_joint: `Tensor` The sum of KL divergences between elemental distributions of two joint distributions. Raises: ValueError: when joint distributions have a different number of elemental distributions. ValueError: when either joint distribution has a distribution with dynamic dependency, i.e., when either joint distribution is not a collection of independent distributions. """ if len(d0._dist_fn_wrapped) != len(d1._dist_fn_wrapped): # pylint: disable=protected-access raise ValueError( 'Can only compute KL divergence between when each has the' 'same number of component distributions.') if (not all(a is None for a in d0._dist_fn_args) or # pylint: disable=protected-access not all(a is None for a in d1._dist_fn_args)): # pylint: disable=protected-access raise ValueError( 'Can only compute KL divergence when all distributions are ' 'independent.') with tf.name_scope(name or 'kl_jointseq_jointseq'): return sum(kullback_leibler.kl_divergence(d0_(), d1_()) for d0_, d1_ in zip(d0._dist_fn_wrapped, d1._dist_fn_wrapped))
Creates dist_fn, dist_fn_wrapped, dist_fn_args.
def _build(self, model): """Creates `dist_fn`, `dist_fn_wrapped`, `dist_fn_args`.""" if not isinstance(model, collections.Sequence): raise TypeError('`model` must be `list`-like (saw: {}).'.format( type(model).__name__)) self._dist_fn = model self._dist_fn_wrapped, self._dist_fn_args = zip(*[ _unify_call_signature(i, dist_fn) for i, dist_fn in enumerate(model)])
Creates a tuple of tuples of dependencies.
def _resolve_graph(self, distribution_names=None, leaf_name='x'): """Creates a `tuple` of `tuple`s of dependencies. This function is **experimental**. That said, we encourage its use and ask that you report problems to `tfprobability@tensorflow.org`. Args: distribution_names: `list` of `str` or `None` names corresponding to each of `model` elements. (`None`s are expanding into the appropriate `str`.) leaf_name: `str` used when no maker depends on a particular `model` element. Returns: graph: `tuple` of `(str tuple)` pairs representing the name of each distribution (maker) and the names of its dependencies. #### Example ```python d = tfd.JointDistributionSequential([ tfd.Independent(tfd.Exponential(rate=[100, 120]), 1), lambda e: tfd.Gamma(concentration=e[..., 0], rate=e[..., 1]), tfd.Normal(loc=0, scale=2.), lambda n, g: tfd.Normal(loc=n, scale=g), ]) d._resolve_graph() # ==> ( # ('e', ()), # ('g', ('e',)), # ('n', ()), # ('x', ('n', 'g')), # ) ``` """ # This function additionally depends on: # self._dist_fn_args # self._dist_fn_wrapped # TODO(b/129008220): Robustify this procedure. Eg, handle collisions better, # ignore args prefixed with `_`. if distribution_names is None or any(self._dist_fn_args): distribution_names = _resolve_distribution_names( self._dist_fn_args, distribution_names, leaf_name) if len(set(distribution_names)) != len(distribution_names): raise ValueError('Distribution names must be unique: {}'.format( distribution_names)) if len(distribution_names) != len(self._dist_fn_wrapped): raise ValueError('Distribution names must be 1:1 with `rvs`.') return tuple(zip(distribution_names, tuple(() if a is None else a for a in self._dist_fn_args)))
Shannon entropy in nats.
def _entropy(self): """Shannon entropy in nats.""" if any(self._dist_fn_args): raise ValueError( 'Can only compute entropy when all distributions are independent.') return sum(joint_distribution_lib.maybe_check_wont_broadcast( (d().entropy() for d in self._dist_fn_wrapped), self.validate_args))
Decorator function for argument bounds checking.
def check_arg_in_support(f): """Decorator function for argument bounds checking. This decorator is meant to be used with methods that require the first argument to be in the support of the distribution. If `validate_args` is `True`, the method is wrapped with an assertion that the first argument is greater than or equal to `loc`, since the support of the half-Cauchy distribution is given by `[loc, infinity)`. Args: f: method to be decorated. Returns: Returns a decorated method that, when `validate_args` attribute of the class is `True`, will assert that all elements in the first argument are within the support of the distribution before executing the original method. """ @functools.wraps(f) def _check_arg_and_apply_f(*args, **kwargs): dist = args[0] x = args[1] with tf.control_dependencies([ assert_util.assert_greater_equal( x, dist.loc, message="x is not in the support of the distribution") ] if dist.validate_args else []): return f(*args, **kwargs) return _check_arg_and_apply_f
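A stripped-down, framework-free sketch of the same decorator pattern; the toy class and its placeholder density are illustrative, not the TFP half-Cauchy implementation:

```python
import functools

def check_arg_in_support(f):
  """Asserts the first argument is >= dist.loc before calling `f`."""
  @functools.wraps(f)
  def _check_arg_and_apply_f(dist, x, *args, **kwargs):
    if dist.validate_args and not x >= dist.loc:
      raise ValueError('x is not in the support of the distribution')
    return f(dist, x, *args, **kwargs)
  return _check_arg_and_apply_f

class ToyHalfDistribution:
  def __init__(self, loc, validate_args=True):
    self.loc = loc
    self.validate_args = validate_args

  @check_arg_in_support
  def log_prob(self, x):
    return -(x - self.loc)  # placeholder density for illustration

ToyHalfDistribution(loc=1.0).log_prob(2.0)    # fine
# ToyHalfDistribution(loc=1.0).log_prob(0.5)  # raises ValueError
```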
Returns `f(x)` if `x` is in the support, and `default_value` otherwise.
def _extend_support_with_default_value(self, x, f, default_value): """Returns `f(x)` if x is in the support, and `default_value` otherwise. Given `f` which is defined on the support of this distribution (`x >= loc`), extend the function definition to the real line by defining `f(x) = default_value` for `x < loc`. Args: x: Floating-point `Tensor` to evaluate `f` at. f: Callable that takes in a `Tensor` and returns a `Tensor`. This represents the function whose domain of definition we want to extend. default_value: Python or numpy literal representing the value to use for extending the domain. Returns: `Tensor` representing an extension of `f(x)`. """ with tf.name_scope("extend_support_with_default_value"): x = tf.convert_to_tensor(value=x, dtype=self.dtype, name="x") loc = self.loc + tf.zeros_like(self.scale) + tf.zeros_like(x) x = x + tf.zeros_like(loc) # Substitute out-of-support values in x with values that are in the # support of the distribution before applying f. y = f(tf.where(x < loc, self._inv_z(0.5) + tf.zeros_like(x), x)) if default_value == 0.: default_value = tf.zeros_like(y) elif default_value == 1.: default_value = tf.ones_like(y) else: default_value = tf.fill( dims=tf.shape(input=y), value=dtype_util.as_numpy_dtype(self.dtype)(default_value)) return tf.where(x < loc, default_value, y)
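A minimal NumPy sketch of the extend-with-default idea, assuming `f` is only well defined for `x >= loc`; the key point mirrors the code above: substitute a safe in-support value before calling `f`, so the masked branch never produces NaNs.

```python
import numpy as np


def extend_support_with_default_value(x, loc, f, default_value):
  x = np.asarray(x, dtype=float)
  in_support = x >= loc
  safe_x = np.where(in_support, x, loc)  # any in-support value works here
  y = f(safe_x)
  return np.where(in_support, y, default_value)


# Example: sqrt(x) extended by 0 below loc = 0.
print(extend_support_with_default_value([-1., 0., 4.], 0., np.sqrt, 0.))
# -> [0. 0. 2.]
```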
Processes input args to meet list-like assumptions.
def _prepare_args(log_likelihood_fn, state, log_likelihood=None, description='log_likelihood'): """Processes input args to meet list-like assumptions.""" state_parts = list(state) if mcmc_util.is_list_like(state) else [state] state_parts = [tf.convert_to_tensor(s, name='current_state') for s in state_parts] log_likelihood = _maybe_call_fn( log_likelihood_fn, state_parts, log_likelihood, description) return [state_parts, log_likelihood]
Runs one iteration of the Elliptical Slice Sampler.
def one_step(self, current_state, previous_kernel_results): """Runs one iteration of the Elliptical Slice Sampler. Args: current_state: `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(log_likelihood_fn(*normal_sampler_fn()))`. previous_kernel_results: `collections.namedtuple` containing `Tensor`s representing values from previous calls to this function (or from the `bootstrap_results` function.) Returns: next_state: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as `current_state`. kernel_results: `collections.namedtuple` of internal calculations used to advance the chain. Raises: TypeError: if `not log_likelihood.dtype.is_floating`. """ with tf.compat.v1.name_scope( name=mcmc_util.make_name(self.name, 'elliptical_slice', 'one_step'), values=[self._seed_stream, current_state, previous_kernel_results.log_likelihood]): with tf.compat.v1.name_scope('initialize'): [ init_state_parts, init_log_likelihood ] = _prepare_args( self.log_likelihood_fn, current_state, previous_kernel_results.log_likelihood) normal_samples = self.normal_sampler_fn(self._seed_stream()) # pylint: disable=not-callable normal_samples = list(normal_samples) if mcmc_util.is_list_like( normal_samples) else [normal_samples] u = tf.random.uniform( shape=tf.shape(init_log_likelihood), seed=self._seed_stream(), dtype=init_log_likelihood.dtype.base_dtype, ) threshold = init_log_likelihood + tf.math.log(u) starting_angle = tf.random.uniform( shape=tf.shape(init_log_likelihood), minval=0., maxval=2 * np.pi, name='angle', seed=self._seed_stream(), dtype=init_log_likelihood.dtype.base_dtype, ) starting_angle_min = starting_angle - 2 * np.pi starting_angle_max = starting_angle starting_state_parts = _rotate_on_ellipse( init_state_parts, normal_samples, starting_angle) starting_log_likelihood = self.log_likelihood_fn(*starting_state_parts) # pylint: disable=not-callable def chain_not_done( angle, angle_min, angle_max, current_state_parts, current_log_likelihood): del angle, angle_min, angle_max, current_state_parts return tf.reduce_any(current_log_likelihood < threshold) def sample_next_angle( angle, angle_min, angle_max, current_state_parts, current_log_likelihood): """Slice sample a new angle, and rotate init_state by that amount.""" chain_not_done = current_log_likelihood < threshold # Box in on angle. Only update angles for which we haven't generated a # point that beats the threshold. 
angle_min = tf.where( tf.math.logical_and(angle < 0, chain_not_done), angle, angle_min) angle_max = tf.where( tf.math.logical_and(angle >= 0, chain_not_done), angle, angle_max) new_angle = tf.random.uniform( shape=tf.shape(current_log_likelihood), minval=angle_min, maxval=angle_max, seed=self._seed_stream(), dtype=angle.dtype.base_dtype ) angle = tf.where(chain_not_done, new_angle, angle) next_state_parts = _rotate_on_ellipse( init_state_parts, normal_samples, angle) new_state_parts = [] broadcasted_chain_not_done = _right_pad_with_ones( chain_not_done, tf.rank(next_state_parts[0])) for n_state, c_state in zip(next_state_parts, current_state_parts): new_state_part = tf.where( tf.broadcast_to( broadcasted_chain_not_done, tf.shape(n_state)), n_state, c_state) new_state_parts.append(new_state_part) return ( angle, angle_min, angle_max, new_state_parts, self.log_likelihood_fn(*new_state_parts) # pylint: disable=not-callable ) [ next_angle, _, _, next_state_parts, next_log_likelihood, ] = tf.while_loop( cond=chain_not_done, body=sample_next_angle, loop_vars=[ starting_angle, starting_angle_min, starting_angle_max, starting_state_parts, starting_log_likelihood ]) return [ next_state_parts if mcmc_util.is_list_like( current_state) else next_state_parts[0], EllipticalSliceSamplerKernelResults( log_likelihood=next_log_likelihood, angle=next_angle, normal_samples=normal_samples, ), ]
Visualizes sequences as TensorBoard summaries.
def image_summary(seqs, name, num=None): """Visualizes sequences as TensorBoard summaries. Args: seqs: A tensor of shape [n, t, h, w, c]. name: String name of this summary. num: Integer for the number of examples to visualize. Defaults to all examples. """ seqs = tf.clip_by_value(seqs, 0., 1.) seqs = tf.unstack(seqs[:num]) joined_seqs = [tf.concat(tf.unstack(seq), 1) for seq in seqs] joined_seqs = tf.expand_dims(tf.concat(joined_seqs, 0), 0) tf.compat.v2.summary.image( name, joined_seqs, max_outputs=1, step=tf.compat.v1.train.get_or_create_global_step())
Visualizes the reconstruction of inputs in TensorBoard.
def visualize_reconstruction(inputs, reconstruct, num=3, name="reconstruction"): """Visualizes the reconstruction of inputs in TensorBoard. Args: inputs: A tensor of the original inputs, of shape [batch, timesteps, h, w, c]. reconstruct: A tensor of a reconstruction of inputs, of shape [batch, timesteps, h, w, c]. num: Integer for the number of examples to visualize. name: String name of this summary. """ reconstruct = tf.clip_by_value(reconstruct, 0., 1.) inputs_and_reconstruct = tf.concat((inputs[:num], reconstruct[:num]), axis=0) image_summary(inputs_and_reconstruct, name)
Visualizes a qualitative analysis of a given model.
def visualize_qualitative_analysis(inputs, model, samples=1, batch_size=3, length=8): """Visualizes a qualitative analysis of a given model. Args: inputs: A tensor of the original inputs, of shape [batch, timesteps, h, w, c]. model: A DisentangledSequentialVAE model. samples: Number of samples to draw from the latent distributions. batch_size: Number of sequences to generate. length: Number of timesteps to generate for each sequence. """ average = lambda dist: tf.reduce_mean( input_tensor=dist.mean(), axis=0) # avg over samples with tf.compat.v1.name_scope("val_reconstruction"): reconstruct = functools.partial(model.reconstruct, inputs=inputs, samples=samples) visualize_reconstruction(inputs, average(reconstruct())) visualize_reconstruction(inputs, average(reconstruct(sample_static=True)), name="static_prior") visualize_reconstruction(inputs, average(reconstruct(sample_dynamic=True)), name="dynamic_prior") visualize_reconstruction(inputs, average(reconstruct(swap_static=True)), name="swap_static") visualize_reconstruction(inputs, average(reconstruct(swap_dynamic=True)), name="swap_dynamic") with tf.compat.v1.name_scope("generation"): generate = functools.partial(model.generate, batch_size=batch_size, length=length, samples=samples) image_summary(average(generate(fix_static=True)), "fix_static") image_summary(average(generate(fix_dynamic=True)), "fix_dynamic")
Summarize the parameters of a distribution.
def summarize_dist_params(dist, name, name_scope="dist_params"): """Summarize the parameters of a distribution. Args: dist: A Distribution object with mean and standard deviation parameters. name: The name of the distribution. name_scope: The name scope of this summary. """ with tf.compat.v1.name_scope(name_scope): tf.compat.v2.summary.histogram( name="{}/{}".format(name, "mean"), data=dist.mean(), step=tf.compat.v1.train.get_or_create_global_step()) tf.compat.v2.summary.histogram( name="{}/{}".format(name, "stddev"), data=dist.stddev(), step=tf.compat.v1.train.get_or_create_global_step())
Summarize the mean of a tensor in nats and bits per unit.
def summarize_mean_in_nats_and_bits(inputs, units, name, nats_name_scope="nats", bits_name_scope="bits_per_dim"): """Summarize the mean of a tensor in nats and bits per unit. Args: inputs: A tensor of values measured in nats. units: The units of the tensor with which to compute the mean bits per unit. name: The name of the tensor. nats_name_scope: The name scope of the nats summary. bits_name_scope: The name scope of the bits summary. """ mean = tf.reduce_mean(input_tensor=inputs) with tf.compat.v1.name_scope(nats_name_scope): tf.compat.v2.summary.scalar( name, mean, step=tf.compat.v1.train.get_or_create_global_step()) with tf.compat.v1.name_scope(bits_name_scope): tf.compat.v2.summary.scalar( name, mean / units / tf.math.log(2.), step=tf.compat.v1.train.get_or_create_global_step())
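A quick sketch of the nats-to-bits-per-dimension conversion applied above, with illustrative numbers: a mean negative log-likelihood in nats is divided by the number of event dimensions and by log 2.

```python
import numpy as np

nll_nats = 3500.0        # illustrative mean negative log-likelihood, in nats
units = 64 * 64 * 3      # e.g. pixels per frame
bits_per_dim = nll_nats / units / np.log(2.)
print(bits_per_dim)      # the same mean, rescaled to bits per dimension
```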
Runs the model to generate a multivariate normal distribution.
def call(self, inputs): """Runs the model to generate multivariate normal distribution. Args: inputs: Unused. Returns: A MultivariateNormalDiag distribution with event shape [dimensions], batch shape [], and sample shape [sample_shape, dimensions]. """ del inputs # unused with tf.compat.v1.name_scope(self._name): return tfd.MultivariateNormalDiag(self.loc, self.scale_diag)
Returns an initial state for the LSTM cell.
def zero_state(self, sample_batch_shape=()): """Returns an initial state for the LSTM cell. Args: sample_batch_shape: A 0D or 1D tensor of the combined sample and batch shape. Returns: A tuple of the initial previous output at timestep 0 of shape [sample_batch_shape, dimensions], and the cell state. """ h0 = tf.zeros([1, self.hidden_size]) c0 = tf.zeros([1, self.hidden_size]) combined_shape = tf.concat((tf.convert_to_tensor( value=sample_batch_shape, dtype=tf.int32), [self.dimensions]), axis=-1) previous_output = tf.zeros(combined_shape) return previous_output, (h0, c0)
Runs the model to generate a distribution for a single timestep.
def call(self, inputs, state): """Runs the model to generate a distribution for a single timestep. This generates a batched MultivariateNormalDiag distribution using the output of the recurrent model at the current timestep to parameterize the distribution. Args: inputs: The sampled value of `z` at the previous timestep, i.e., `z_{t-1}`, of shape [..., dimensions]. `z_0` should be set to the empty matrix. state: A tuple containing the (hidden, cell) state. Returns: A tuple of a MultivariateNormalDiag distribution, and the state of the recurrent function at the end of the current timestep. The distribution will have event shape [dimensions], batch shape [...], and sample shape [sample_shape, ..., dimensions]. """ # In order to allow the user to pass in a single example without a batch # dimension, we always expand the input to at least two dimensions, then # fix the output shape to remove the batch dimension if necessary. original_shape = inputs.shape if len(original_shape) < 2: inputs = tf.reshape(inputs, [1, -1]) out, state = self.lstm_cell(inputs, state) out = self.output_layer(out) correct_shape = tf.concat((original_shape[:-1], tf.shape(input=out)[-1:]), 0) out = tf.reshape(out, correct_shape) loc = out[..., :self.dimensions] scale_diag = tf.nn.softplus(out[..., self.dimensions:]) + 1e-5 # keep > 0 return tfd.MultivariateNormalDiag(loc=loc, scale_diag=scale_diag), state
Runs the model to generate a distribution p(x_t | z_t, f).
def call(self, inputs): """Runs the model to generate a distribution p(x_t | z_t, f). Args: inputs: A tuple of (z_{1:T}, f), where `z_{1:T}` is a tensor of shape [..., batch_size, timesteps, latent_size_dynamic], and `f` is of shape [..., batch_size, latent_size_static]. Returns: A batched Independent distribution wrapping a set of Normal distributions over the pixels of x_t, where the Independent distribution has event shape [height, width, channels], batch shape [batch_size, timesteps], and sample shape [sample_shape, batch_size, timesteps, height, width, channels]. """ # We explicitly broadcast f to the same shape as z other than the final # dimension, because `tf.concat` can't automatically do this. dynamic, static = inputs timesteps = tf.shape(input=dynamic)[-2] static = static[..., tf.newaxis, :] + tf.zeros([timesteps, 1]) latents = tf.concat([dynamic, static], axis=-1) # (sample, N, T, latents) out = self.dense(latents) out = tf.reshape(out, (-1, 1, 1, self.hidden_size)) out = self.conv_transpose1(out) out = self.conv_transpose2(out) out = self.conv_transpose3(out) out = self.conv_transpose4(out) # (sample*N*T, h, w, c) expanded_shape = tf.concat( (tf.shape(input=latents)[:-1], tf.shape(input=out)[1:]), axis=0) out = tf.reshape(out, expanded_shape) # (sample, N, T, h, w, c) return tfd.Independent( distribution=tfd.Normal(loc=out, scale=1.), reinterpreted_batch_ndims=3, # wrap (h, w, c) name="decoded_image")
Runs the model to generate an intermediate representation of x_t.
def call(self, inputs): """Runs the model to generate an intermediate representation of x_t. Args: inputs: A batch of image sequences `x_{1:T}` of shape `[sample_shape, batch_size, timesteps, height, width, channels]`. Returns: A batch of intermediate representations of shape [sample_shape, batch_size, timesteps, hidden_size]. """ image_shape = tf.shape(input=inputs)[-3:] collapsed_shape = tf.concat(([-1], image_shape), axis=0) out = tf.reshape(inputs, collapsed_shape) # (sample*batch*T, h, w, c) out = self.conv1(out) out = self.conv2(out) out = self.conv3(out) out = self.conv4(out) expanded_shape = tf.concat((tf.shape(input=inputs)[:-3], [-1]), axis=0) return tf.reshape(out, expanded_shape)
Runs the model to generate a distribution `q(f | x_{1:T})`.
def call(self, inputs): """Runs the model to generate a distribution `q(f | x_{1:T})`. This generates a list of batched MultivariateNormalDiag distributions using the output of the recurrent model at each timestep to parameterize each distribution. Args: inputs: A batch of intermediate representations of image frames across all timesteps, of shape [..., batch_size, timesteps, hidden_size]. Returns: A batched MultivariateNormalDiag distribution with event shape [latent_size], batch shape [..., batch_size], and sample shape [sample_shape, ..., batch_size, latent_size]. """ # TODO(dusenberrymw): Remove these reshaping commands after b/113126249 is # fixed. collapsed_shape = tf.concat(([-1], tf.shape(input=inputs)[-2:]), axis=0) out = tf.reshape(inputs, collapsed_shape) # (sample*batch_size, T, hidden) out = self.bilstm(out) # (sample*batch_size, hidden) expanded_shape = tf.concat((tf.shape(input=inputs)[:-2], [-1]), axis=0) out = tf.reshape(out, expanded_shape) # (sample, batch_size, hidden) out = self.output_layer(out) # (sample, batch_size, 2*latent_size) loc = out[..., :self.latent_size] scale_diag = tf.nn.softplus(out[..., self.latent_size:]) + 1e-5 # keep > 0 return tfd.MultivariateNormalDiag(loc=loc, scale_diag=scale_diag)
Runs the model to generate a distribution `q(z_{1:T} | x_{1:T})`.
def call(self, inputs): """Runs the model to generate a distribution `q(z_{1:T} | x_{1:T})`. Args: inputs: A batch of intermediate representations of image frames across all timesteps, of shape [..., batch_size, timesteps, hidden_size]. Returns: A batch of MultivariateNormalDiag distributions with event shape [latent_size], batch shape [..., batch_size, timesteps], and sample shape [sample_shape, ..., batch_size, timesteps, latent_size]. """ out = self.dense(inputs) # (..., batch, time, hidden) out = self.output_layer(out) # (..., batch, time, 2*latent) loc = out[..., :self.latent_size] scale_diag = tf.nn.softplus(out[..., self.latent_size:]) + 1e-5 # keep > 0 return tfd.MultivariateNormalDiag(loc=loc, scale_diag=scale_diag)
Runs the model to generate a distribution `q(z_{1:T} | x_{1:T}, f)`.
def call(self, inputs): """Runs the model to generate a distribution `q(z_{1:T} | x_{1:T}, f)`. This generates a list of batched MultivariateNormalDiag distributions using the output of the recurrent model at each timestep to parameterize each distribution. Args: inputs: A tuple of a batch of intermediate representations of image frames across all timesteps of shape [..., batch_size, timesteps, dimensions], and a sample of the static latent variable `f` of shape [..., batch_size, latent_size]. Returns: A batch of MultivariateNormalDiag distributions with event shape [latent_size], batch shape [broadcasted_shape, batch_size, timesteps], and sample shape [sample_shape, broadcasted_shape, batch_size, timesteps, latent_size], where `broadcasted_shape` is the broadcasted sampled shape between the inputs and static sample. """ # We explicitly broadcast `x` and `f` to the same shape other than the final # dimension, because `tf.concat` can't automatically do this. This will # entail adding a `timesteps` dimension to `f` to give the shape `(..., # batch, timesteps, latent)`, and then broadcasting the sample shapes of # both tensors to the same shape. features, static_sample = inputs length = tf.shape(input=features)[-2] static_sample = static_sample[..., tf.newaxis, :] + tf.zeros([length, 1]) sample_shape_static = tf.shape(input=static_sample)[:-3] sample_shape_inputs = tf.shape(input=features)[:-3] broadcast_shape_inputs = tf.concat((sample_shape_static, [1, 1, 1]), 0) broadcast_shape_static = tf.concat((sample_shape_inputs, [1, 1, 1]), 0) features = features + tf.zeros(broadcast_shape_inputs) static_sample = static_sample + tf.zeros(broadcast_shape_static) # `combined` will have shape (..., batch, T, hidden+latent). combined = tf.concat((features, static_sample), axis=-1) # TODO(dusenberrymw): Remove these reshaping commands after b/113126249 is # fixed. collapsed_shape = tf.concat(([-1], tf.shape(input=combined)[-2:]), axis=0) out = tf.reshape(combined, collapsed_shape) out = self.bilstm(out) # (sample*batch, T, hidden_size) out = self.rnn(out) # (sample*batch, T, hidden_size) expanded_shape = tf.concat( (tf.shape(input=combined)[:-2], tf.shape(input=out)[1:]), axis=0) out = tf.reshape(out, expanded_shape) # (sample, batch, T, hidden_size) out = self.output_layer(out) # (sample, batch, T, 2*latent_size) loc = out[..., :self.latent_size] scale_diag = tf.nn.softplus(out[..., self.latent_size:]) + 1e-5 # keep > 0 return tfd.MultivariateNormalDiag(loc=loc, scale_diag=scale_diag)
Generate new sequences.
def generate(self, batch_size, length, samples=1, fix_static=False, fix_dynamic=False): """Generate new sequences. Args: batch_size: Number of sequences to generate. length: Number of timesteps to generate for each sequence. samples: Number of samples to draw from the latent distributions. fix_static: Boolean for whether or not to share the same random sample of the static latent variable `f` from its prior across all examples. fix_dynamic: Boolean for whether or not to share the same random sample of the dynamic latent variable `z_{1:T}` from its prior across all examples. Returns: A batched Independent distribution wrapping a set of Normal distributions over the pixels of the generated sequences, where the Independent distribution has event shape [height, width, channels], batch shape [samples, batch_size, timesteps], and sample shape [sample_shape, samples, batch_size, timesteps, height, width, channels]. """ static_sample, _ = self.sample_static_prior(samples, batch_size, fix_static) dynamic_sample, _ = self.sample_dynamic_prior(samples, batch_size, length, fix_dynamic) likelihood = self.decoder((dynamic_sample, static_sample)) return likelihood
Reconstruct the given input sequences.
def reconstruct(self, inputs, samples=1, sample_static=False, sample_dynamic=False, swap_static=False, swap_dynamic=False, fix_static=False, fix_dynamic=False): """Reconstruct the given input sequences. Args: inputs: A batch of image sequences `x_{1:T}` of shape `[batch_size, timesteps, height, width, channels]`. samples: Number of samples to draw from the latent distributions. sample_static: Boolean for whether or not to randomly sample the static latent variable `f` from its prior distribution. sample_dynamic: Boolean for whether or not to randomly sample the dynamic latent variable `z_{1:T}` from its prior distribution. swap_static: Boolean for whether or not to swap the encodings for the static latent variable `f` between the examples. swap_dynamic: Boolean for whether or not to swap the encodings for the dynamic latent variable `z_{1:T}` between the examples. fix_static: Boolean for whether or not to share the same random sample of the static latent variable `f` from its prior across all examples. fix_dynamic: Boolean for whether or not to share the same random sample of the dynamic latent variable `z_{1:T}` from its prior across all examples. Returns: A batched Independent distribution wrapping a set of Normal distributions over the pixels of the reconstruction of the input, where the Independent distribution has event shape [height, width, channels], batch shape [samples, batch_size, timesteps], and sample shape [sample_shape, samples, batch_size, timesteps, height, width, channels]. """ batch_size = tf.shape(input=inputs)[-5] length = len(tf.unstack(inputs, axis=-4)) # hack for graph mode features = self.compressor(inputs) # (..., batch, timesteps, hidden) if sample_static: static_sample, _ = self.sample_static_prior( samples, batch_size, fix_static) else: static_sample, _ = self.sample_static_posterior(features, samples) if swap_static: static_sample = tf.reverse(static_sample, axis=[1]) if sample_dynamic: dynamic_sample, _ = self.sample_dynamic_prior( samples, batch_size, length, fix_dynamic) else: dynamic_sample, _ = self.sample_dynamic_posterior( features, samples, static_sample) if swap_dynamic: dynamic_sample = tf.reverse(dynamic_sample, axis=[1]) likelihood = self.decoder((dynamic_sample, static_sample)) return likelihood
Sample the static latent prior.
def sample_static_prior(self, samples, batch_size, fixed=False): """Sample the static latent prior. Args: samples: Number of samples to draw from the latent distribution. batch_size: Number of sequences to sample. fixed: Boolean for whether or not to share the same random sample across all sequences. Returns: A tuple of a sample tensor of shape [samples, batch_size, latent_size], and a MultivariateNormalDiag distribution from which the tensor was sampled, with event shape [latent_size], and batch shape []. """ dist = self.static_prior() if fixed: # in either case, shape is (samples, batch, latent) sample = dist.sample((samples, 1)) + tf.zeros([batch_size, 1]) else: sample = dist.sample((samples, batch_size)) return sample, dist
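A NumPy sketch of the broadcast trick in the `fixed` branch above: draw once with a singleton batch axis, then add zeros of the batch shape so the identical sample is tiled across the batch; the shapes are illustrative.

```python
import numpy as np

samples, batch_size, latent_size = 2, 4, 3
rng = np.random.default_rng(0)

one_draw = rng.normal(size=(samples, 1, latent_size))  # shape (S, 1, L)
shared = one_draw + np.zeros((batch_size, 1))          # broadcasts to (S, B, L)

assert shared.shape == (samples, batch_size, latent_size)
# Every batch entry carries the identical sample.
assert np.allclose(shared[:, 0], shared[:, 1])
```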
Sample the static latent posterior.
def sample_static_posterior(self, inputs, samples): """Sample the static latent posterior. Args: inputs: A batch of intermediate representations of image frames across all timesteps, of shape [..., batch_size, timesteps, hidden_size]. samples: Number of samples to draw from the latent distribution. Returns: A tuple of a sample tensor of shape [samples, batch_size, latent_size], and a MultivariateNormalDiag distribution from which the tensor was sampled, with event shape [latent_size], and batch shape [..., batch_size]. """ dist = self.static_encoder(inputs) sample = dist.sample(samples) return sample, dist
Sample the dynamic latent prior.
def sample_dynamic_prior(self, samples, batch_size, length, fixed=False): """Sample the dynamic latent prior. Args: samples: Number of samples to draw from the latent distribution. batch_size: Number of sequences to sample. length: Number of timesteps to sample for each sequence. fixed: Boolean for whether or not to share the same random sample across all sequences. Returns: A tuple of a sample tensor of shape [samples, batch_size, length latent_size], and a MultivariateNormalDiag distribution from which the tensor was sampled, with event shape [latent_size], and batch shape [samples, 1, length] if fixed or [samples, batch_size, length] otherwise. """ if fixed: sample_batch_size = 1 else: sample_batch_size = batch_size sample, state = self.dynamic_prior.zero_state([samples, sample_batch_size]) locs = [] scale_diags = [] sample_list = [] for _ in range(length): dist, state = self.dynamic_prior(sample, state) sample = dist.sample() locs.append(dist.parameters["loc"]) scale_diags.append(dist.parameters["scale_diag"]) sample_list.append(sample) sample = tf.stack(sample_list, axis=2) loc = tf.stack(locs, axis=2) scale_diag = tf.stack(scale_diags, axis=2) if fixed: # tile along the batch axis sample = sample + tf.zeros([batch_size, 1, 1]) return sample, tfd.MultivariateNormalDiag(loc=loc, scale_diag=scale_diag)
Sample the dynamic latent posterior.
def sample_dynamic_posterior(self, inputs, samples, static_sample=None): """Sample the static latent posterior. Args: inputs: A batch of intermediate representations of image frames across all timesteps, of shape [..., batch_size, timesteps, hidden_size]. samples: Number of samples to draw from the latent distribution. static_sample: A tensor sample of the static latent variable `f` of shape [..., batch_size, latent_size]. Only used for the full dynamic posterior formulation. Returns: A tuple of a sample tensor of shape [samples, batch_size, length latent_size], and a MultivariateNormalDiag distribution from which the tensor was sampled, with event shape [latent_size], and batch shape [broadcasted_shape, batch_size, length], where `broadcasted_shape` is the broadcasted sampled shape between the inputs and static sample. Raises: ValueError: If the "full" latent posterior formulation is being used, yet a static latent sample was not provided. """ if self.latent_posterior == "factorized": dist = self.dynamic_encoder(inputs) samples = dist.sample(samples) # (s, N, T, lat) else: # full if static_sample is None: raise ValueError( "The full dynamic posterior requires a static latent sample") dist = self.dynamic_encoder((inputs, static_sample)) samples = dist.sample() # (samples, N, latent) return samples, dist
Static batch shape of models represented by this component.
def batch_shape(self): """Static batch shape of models represented by this component. Returns: batch_shape: A `tf.TensorShape` giving the broadcast batch shape of all model parameters. This should match the batch shape of derived state space models, i.e., `self.make_state_space_model(...).batch_shape`. It may be partially defined or unknown. """ batch_shape = tf.TensorShape([]) for param in self.parameters: batch_shape = tf.broadcast_static_shape( batch_shape, param.prior.batch_shape) return batch_shape
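A small sketch, assuming TensorFlow is installed, of how `tf.broadcast_static_shape` folds a collection of parameter batch shapes into a single broadcast batch shape, as the loop above does over parameter priors; the example shapes are arbitrary.

```python
import tensorflow as tf

param_batch_shapes = [tf.TensorShape([3, 1]), tf.TensorShape([1, 4])]

batch_shape = tf.TensorShape([])
for s in param_batch_shapes:
  batch_shape = tf.broadcast_static_shape(batch_shape, s)

assert batch_shape.as_list() == [3, 4]
```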
Runtime batch shape of models represented by this component.
def batch_shape_tensor(self): """Runtime batch shape of models represented by this component. Returns: batch_shape: `int` `Tensor` giving the broadcast batch shape of all model parameters. This should match the batch shape of derived state space models, i.e., `self.make_state_space_model(...).batch_shape_tensor()`. """ batch_shape = tf.constant([], dtype=tf.int32) for param in self.parameters: batch_shape = tf.broadcast_dynamic_shape( batch_shape, param.prior.batch_shape_tensor()) return batch_shape
If given an ordered list of parameter values, build a name:value map.
def _canonicalize_param_vals_as_map(self, param_vals): """If given an ordered list of parameter values, build a name:value map. This is a utility method that allows parameter values to be specified as either lists or dicts, by transforming lists to a canonical dict representation. Args: param_vals: Python list (or other `iterable`) of `Tensor` values corresponding to the parameters listed in `self.parameters`, OR a map (Python `dict`) of parameter names to values. Returns: param_map: Python `dict` mapping from the names given in `self.parameters` to the specified parameter values. """ if hasattr(param_vals, 'keys'): param_map = param_vals else: param_map = {p.name: v for (p, v) in zip(self.parameters, param_vals)} return param_map
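A standalone sketch of the list-or-dict canonicalization above, with a hypothetical `Param` namedtuple standing in for entries of `self.parameters`.

```python
import collections

Param = collections.namedtuple('Param', ['name', 'prior'])
parameters = [Param('level_scale', None), Param('slope_scale', None)]


def canonicalize_param_vals_as_map(param_vals):
  if hasattr(param_vals, 'keys'):   # already a name -> value map
    return dict(param_vals)
  return {p.name: v for p, v in zip(parameters, param_vals)}


print(canonicalize_param_vals_as_map([0.1, 0.2]))
print(canonicalize_param_vals_as_map({'level_scale': 0.1, 'slope_scale': 0.2}))
```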
Instantiate this model as a Distribution over specified num_timesteps.
def make_state_space_model(self, num_timesteps, param_vals=None, initial_state_prior=None, initial_step=0): """Instantiate this model as a Distribution over specified `num_timesteps`. Args: num_timesteps: Python `int` number of timesteps to model. param_vals: a list of `Tensor` parameter values in order corresponding to `self.parameters`, or a dict mapping from parameter names to values. initial_state_prior: an optional `Distribution` instance overriding the default prior on the model's initial state. This is used in forecasting ("today's prior is yesterday's posterior"). initial_step: optional `int` specifying the initial timestep to model. This is relevant when the model contains time-varying components, e.g., holidays or seasonality. Returns: dist: a `LinearGaussianStateSpaceModel` Distribution object. """ return self._make_state_space_model( num_timesteps=num_timesteps, param_map=self._canonicalize_param_vals_as_map(param_vals), initial_state_prior=initial_state_prior, initial_step=initial_step)
Sample from the joint prior over model parameters and trajectories.
def prior_sample(self, num_timesteps, initial_step=0, params_sample_shape=(), trajectories_sample_shape=(), seed=None): """Sample from the joint prior over model parameters and trajectories. Args: num_timesteps: Scalar `int` `Tensor` number of timesteps to model. initial_step: Optional scalar `int` `Tensor` specifying the starting timestep. Default value: 0. params_sample_shape: Number of possible worlds to sample iid from the parameter prior, or more generally, `Tensor` `int` shape to fill with iid samples. Default value: [] (i.e., draw a single sample and don't expand the shape). trajectories_sample_shape: For each sampled set of parameters, number of trajectories to sample, or more generally, `Tensor` `int` shape to fill with iid samples. Default value: [] (i.e., draw a single sample and don't expand the shape). seed: Python `int` random seed. Returns: trajectories: `float` `Tensor` of shape `trajectories_sample_shape + params_sample_shape + [num_timesteps, 1]` containing all sampled trajectories. param_samples: list of sampled parameter value `Tensor`s, in order corresponding to `self.parameters`, each of shape `params_sample_shape + prior.batch_shape + prior.event_shape`. """ seed = distributions.SeedStream( seed, salt='StructuralTimeSeries_prior_sample') with tf.compat.v1.name_scope( 'prior_sample', values=[num_timesteps, params_sample_shape, trajectories_sample_shape]): param_samples = [ p.prior.sample(params_sample_shape, seed=seed(), name=p.name) for p in self.parameters ] model = self.make_state_space_model( num_timesteps=num_timesteps, initial_step=initial_step, param_vals=param_samples) return model.sample(trajectories_sample_shape, seed=seed()), param_samples
Build the joint density `log p(params) + log p(y | params)` as a callable.
def joint_log_prob(self, observed_time_series): """Build the joint density `log p(params) + log p(y|params)` as a callable. Args: observed_time_series: Observed `Tensor` trajectories of shape `sample_shape + batch_shape + [num_timesteps, 1]` (the trailing `1` dimension is optional if `num_timesteps > 1`), where `batch_shape` should match `self.batch_shape` (the broadcast batch shape of all priors on parameters for this structural time series model). May optionally be an instance of `tfp.sts.MaskedTimeSeries`, which includes a mask `Tensor` to specify timesteps with missing observations. Returns: log_joint_fn: A function taking a `Tensor` argument for each model parameter, in canonical order, and returning a `Tensor` log probability of shape `batch_shape`. Note that, *unlike* `tfp.Distributions` `log_prob` methods, the `log_joint` sums over the `sample_shape` from y, so that `sample_shape` does not appear in the output log_prob. This corresponds to viewing multiple samples in `y` as iid observations from a single model, which is typically the desired behavior for parameter inference. """ with tf.compat.v1.name_scope( 'joint_log_prob', values=[observed_time_series]): [ observed_time_series, mask ] = sts_util.canonicalize_observed_time_series_with_mask( observed_time_series) num_timesteps = distribution_util.prefer_static_value( tf.shape(input=observed_time_series))[-2] def log_joint_fn(*param_vals): """Generated log-density function.""" # Sum the log_prob values from parameter priors. param_lp = sum([ param.prior.log_prob(param_val) for (param, param_val) in zip(self.parameters, param_vals) ]) # Build a linear Gaussian state space model and evaluate the marginal # log_prob on observations. lgssm = self.make_state_space_model( param_vals=param_vals, num_timesteps=num_timesteps) observation_lp = lgssm.log_prob(observed_time_series, mask=mask) # Sum over likelihoods from iid observations. Without this sum, # adding `param_lp + observation_lp` would broadcast the param priors # over the sample shape, which incorrectly multi-counts the param # priors. sample_ndims = tf.maximum(0, tf.rank(observation_lp) - tf.rank(param_lp)) observation_lp = tf.reduce_sum( input_tensor=observation_lp, axis=tf.range(sample_ndims)) return param_lp + observation_lp return log_joint_fn
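A hedged usage sketch, assuming TensorFlow Probability's `sts` module with a simple `LocalLinearTrend` model and eager execution; the parameter draws and their ordering follow `model.parameters` as described above, and the series values are random placeholders.

```python
import numpy as np
import tensorflow_probability as tfp

observed = np.random.randn(100).astype(np.float32)
model = tfp.sts.LocalLinearTrend(observed_time_series=observed)

log_joint_fn = model.joint_log_prob(observed)

# One value per parameter, in canonical order; here simply drawn from priors.
param_draws = [p.prior.sample() for p in model.parameters]
lp = log_joint_fn(*param_draws)  # log density with the sample shape summed out
```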
Computes the min_event_ndims associated with the given list of bijectors.
def _compute_min_event_ndims(bijector_list, compute_forward=True): """Computes the min_event_ndims associated with the give list of bijectors. Given a list `bijector_list` of bijectors, compute the min_event_ndims that is associated with the composition of bijectors in that list. min_event_ndims is the # of right most dimensions for which the bijector has done necessary computation on (i.e. the non-broadcastable part of the computation). We can derive the min_event_ndims for a chain of bijectors as follows: In the case where there are no rank changing bijectors, this will simply be `max(b.forward_min_event_ndims for b in bijector_list)`. This is because the bijector with the most forward_min_event_ndims requires the most dimensions, and hence the chain also requires operating on those dimensions. However in the case of rank changing, more care is needed in determining the exact amount of dimensions. Padding dimensions causes subsequent bijectors to operate on the padded dimensions, and Removing dimensions causes bijectors to operate more left. Args: bijector_list: List of bijectors to be composed by chain. compute_forward: Boolean. If True, computes the min_event_ndims associated with a forward call to Chain, and otherwise computes the min_event_ndims associated with an inverse call to Chain. The latter is the same as the min_event_ndims associated with a forward call to Invert(Chain(....)). Returns: min_event_ndims """ min_event_ndims = 0 # This is a mouthful, but what this encapsulates is that if not for rank # changing bijectors, we'd only need to compute the largest of the min # required ndims. Hence "max_min". Due to rank changing bijectors, we need to # account for synthetic rank growth / synthetic rank decrease from a rank # changing bijector. rank_changed_adjusted_max_min_event_ndims = 0 if compute_forward: bijector_list = reversed(bijector_list) for b in bijector_list: if compute_forward: current_min_event_ndims = b.forward_min_event_ndims current_inverse_min_event_ndims = b.inverse_min_event_ndims else: current_min_event_ndims = b.inverse_min_event_ndims current_inverse_min_event_ndims = b.forward_min_event_ndims # New dimensions were touched. if rank_changed_adjusted_max_min_event_ndims < current_min_event_ndims: min_event_ndims += ( current_min_event_ndims - rank_changed_adjusted_max_min_event_ndims) rank_changed_adjusted_max_min_event_ndims = max( current_min_event_ndims, rank_changed_adjusted_max_min_event_ndims) # If the number of dimensions has increased via forward, then # inverse_min_event_ndims > forward_min_event_ndims, and hence the # dimensions we computed on, have moved left (so we have operated # on additional dimensions). # Conversely, if the number of dimensions has decreased via forward, # then we have inverse_min_event_ndims < forward_min_event_ndims, # and so we will have operated on fewer right most dimensions. number_of_changed_dimensions = ( current_min_event_ndims - current_inverse_min_event_ndims) rank_changed_adjusted_max_min_event_ndims -= number_of_changed_dimensions return min_event_ndims
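A hedged sanity check, assuming TensorFlow Probability bijectors: for a chain whose members do not change rank, the result reduces to the maximum of the component `forward_min_event_ndims`, as the docstring above states. `Exp` acts elementwise (0) while `SoftmaxCentered` needs vectors (1), so the chain needs vectors too.

```python
import tensorflow_probability as tfp

tfb = tfp.bijectors

# Neither bijector changes rank, so the chain's requirement is the max.
chain = tfb.Chain([tfb.Exp(), tfb.SoftmaxCentered()])
assert chain.forward_min_event_ndims == 1
```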
Convert a vector size to a matrix size.
def vector_size_to_square_matrix_size(d, validate_args, name=None): """Convert a vector size to a matrix size.""" if isinstance(d, (float, int, np.generic, np.ndarray)): n = (-1 + np.sqrt(1 + 8 * d)) / 2. if float(int(n)) != n: raise ValueError("Vector length is not a triangular number.") return int(n) else: with tf.name_scope(name or "vector_size_to_square_matrix_size") as name: n = (-1. + tf.sqrt(1 + 8. * tf.cast(d, dtype=tf.float32))) / 2. if validate_args: with tf.control_dependencies([ assert_util.assert_equal( tf.cast(tf.cast(n, dtype=tf.int32), dtype=tf.float32), n, message="Vector length is not a triangular number") ]): n = tf.identity(n) return tf.cast(n, d.dtype)
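A NumPy sketch of the triangular-number inversion used above: a length-`d` vector fills one triangle of an `n x n` matrix exactly when `d = n(n+1)/2`, so `n = (-1 + sqrt(1 + 8d)) / 2`.

```python
import numpy as np

for n in range(1, 8):
  d = n * (n + 1) // 2
  recovered = (-1 + np.sqrt(1 + 8 * d)) / 2.
  # Round-trips exactly for genuine triangular numbers.
  assert int(recovered) == n and float(int(recovered)) == recovered
```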
Numpy implementation of tf. argsort.
def _argsort(values, axis=-1, direction='ASCENDING', stable=False, name=None): # pylint: disable=unused-argument """Numpy implementation of `tf.argsort`.""" if direction == 'ASCENDING': pass elif direction == 'DESCENDING': values = np.negative(values) else: raise ValueError('Unrecognized direction: {}.'.format(direction)) return np.argsort(values, axis, kind='stable' if stable else 'quicksort')
Numpy implementation of tf. sort.
def _sort(values, axis=-1, direction='ASCENDING', stable=False, name=None): # pylint: disable=unused-argument """Numpy implementation of `tf.sort`.""" if direction == 'ASCENDING': pass elif direction == 'DESCENDING': values = np.negative(values) else: raise ValueError('Unrecognized direction: {}.'.format(direction)) result = np.sort(values, axis, kind='stable' if stable else 'quicksort') if direction == 'DESCENDING': return np.negative(result) return result
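A quick check of the negate-sort-negate trick used above for descending order, since `np.sort` itself only sorts ascending.

```python
import numpy as np

values = np.array([3., 1., 2.])
descending = np.negative(np.sort(np.negative(values), kind='stable'))
assert np.array_equal(descending, np.array([3., 2., 1.]))
```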
Calculate the batched KL divergence KL(a || b) with a and b Gumbel.
def _kl_gumbel_gumbel(a, b, name=None): """Calculate the batched KL divergence KL(a || b) with a and b Gumbel. Args: a: instance of a Gumbel distribution object. b: instance of a Gumbel distribution object. name: (optional) Name to use for created operations. default is "kl_gumbel_gumbel". Returns: Batchwise KL(a || b) """ with tf.name_scope(name or "kl_gumbel_gumbel"): # Consistent with # http://www.mast.queensu.ca/~communications/Papers/gil-msc11.pdf, page 64 # The paper uses beta to refer to scale and mu to refer to loc. # There is actually an error in the solution as printed; this is based on # the second-to-last step of the derivation. The value as printed would be # off by (a.loc - b.loc) / b.scale. return (tf.math.log(b.scale) - tf.math.log(a.scale) + np.euler_gamma * (a.scale / b.scale - 1.) + tf.math.expm1((b.loc - a.loc) / b.scale + tf.math.lgamma(a.scale / b.scale + 1.)) + (a.loc - b.loc) / b.scale)
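A hedged Monte Carlo sanity check of the closed form above, using SciPy's `gumbel_r` (location-scale Gumbel); the estimate is stochastic, so only rough agreement is expected, and the chosen parameters are arbitrary.

```python
import numpy as np
from scipy import special, stats

loc_a, scale_a = 0.5, 1.0
loc_b, scale_b = 1.5, 2.0

closed_form = (np.log(scale_b) - np.log(scale_a)
               + np.euler_gamma * (scale_a / scale_b - 1.)
               + np.expm1((loc_b - loc_a) / scale_b
                          + special.gammaln(scale_a / scale_b + 1.))
               + (loc_a - loc_b) / scale_b)

# Monte Carlo estimate: E_a[log p_a(X) - log p_b(X)] with X ~ Gumbel(a).
x = stats.gumbel_r(loc=loc_a, scale=scale_a).rvs(size=200_000, random_state=0)
mc_estimate = np.mean(stats.gumbel_r(loc=loc_a, scale=scale_a).logpdf(x)
                      - stats.gumbel_r(loc=loc_b, scale=scale_b).logpdf(x))
print(closed_form, mc_estimate)  # the two should agree to a couple of decimals
```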
Normal distribution function.
def ndtr(x, name="ndtr"): """Normal distribution function. Returns the area under the Gaussian probability density function, integrated from minus infinity to x: ``` 1 / x ndtr(x) = ---------- | exp(-0.5 t**2) dt sqrt(2 pi) /-inf = 0.5 (1 + erf(x / sqrt(2))) = 0.5 erfc(x / sqrt(2)) ``` Args: x: `Tensor` of type `float32`, `float64`. name: Python string. A name for the operation (default="ndtr"). Returns: ndtr: `Tensor` with `dtype=x.dtype`. Raises: TypeError: if `x` is not floating-type. """ with tf.name_scope(name): x = tf.convert_to_tensor(value=x, name="x") if dtype_util.as_numpy_dtype(x.dtype) not in [np.float32, np.float64]: raise TypeError( "x.dtype=%s is not handled, see docstring for supported types." % x.dtype) return _ndtr(x)
Implements ndtr core logic.
def _ndtr(x): """Implements ndtr core logic.""" half_sqrt_2 = tf.constant( 0.5 * np.sqrt(2.), dtype=x.dtype, name="half_sqrt_2") w = x * half_sqrt_2 z = tf.abs(w) y = tf.where( tf.less(z, half_sqrt_2), 1. + tf.math.erf(w), tf.where(tf.greater(w, 0.), 2. - tf.math.erfc(z), tf.math.erfc(z))) return 0.5 * y
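A NumPy/SciPy cross-check of the piecewise formulation above against `scipy.special.ndtr`, which computes the same standard-normal CDF.

```python
import numpy as np
from scipy import special

x = np.linspace(-8., 8., 101)
w = x / np.sqrt(2.)
z = np.abs(w)
# Same three branches as the implementation above.
y = np.where(z < np.sqrt(2.) / 2.,
             1. + special.erf(w),
             np.where(w > 0., 2. - special.erfc(z), special.erfc(z)))
assert np.allclose(0.5 * y, special.ndtr(x))
```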
The inverse of the CDF of the Normal distribution function.
def ndtri(p, name="ndtri"): """The inverse of the CDF of the Normal distribution function. Returns x such that the area under the pdf from minus infinity to x is equal to p. A piece-wise rational approximation is done for the function. This is a port of the implementation in netlib. Args: p: `Tensor` of type `float32`, `float64`. name: Python string. A name for the operation (default="ndtri"). Returns: x: `Tensor` with `dtype=p.dtype`. Raises: TypeError: if `p` is not floating-type. """ with tf.name_scope(name): p = tf.convert_to_tensor(value=p, name="p") if dtype_util.as_numpy_dtype(p.dtype) not in [np.float32, np.float64]: raise TypeError( "p.dtype=%s is not handled, see docstring for supported types." % p.dtype) return _ndtri(p)
Implements ndtri core logic.
def _ndtri(p): """Implements ndtri core logic.""" # Constants used in piece-wise rational approximations. Taken from the cephes # library: # https://root.cern.ch/doc/v608/SpecFuncCephesInv_8cxx_source.html p0 = list(reversed([-5.99633501014107895267E1, 9.80010754185999661536E1, -5.66762857469070293439E1, 1.39312609387279679503E1, -1.23916583867381258016E0])) q0 = list(reversed([1.0, 1.95448858338141759834E0, 4.67627912898881538453E0, 8.63602421390890590575E1, -2.25462687854119370527E2, 2.00260212380060660359E2, -8.20372256168333339912E1, 1.59056225126211695515E1, -1.18331621121330003142E0])) p1 = list(reversed([4.05544892305962419923E0, 3.15251094599893866154E1, 5.71628192246421288162E1, 4.40805073893200834700E1, 1.46849561928858024014E1, 2.18663306850790267539E0, -1.40256079171354495875E-1, -3.50424626827848203418E-2, -8.57456785154685413611E-4])) q1 = list(reversed([1.0, 1.57799883256466749731E1, 4.53907635128879210584E1, 4.13172038254672030440E1, 1.50425385692907503408E1, 2.50464946208309415979E0, -1.42182922854787788574E-1, -3.80806407691578277194E-2, -9.33259480895457427372E-4])) p2 = list(reversed([3.23774891776946035970E0, 6.91522889068984211695E0, 3.93881025292474443415E0, 1.33303460815807542389E0, 2.01485389549179081538E-1, 1.23716634817820021358E-2, 3.01581553508235416007E-4, 2.65806974686737550832E-6, 6.23974539184983293730E-9])) q2 = list(reversed([1.0, 6.02427039364742014255E0, 3.67983563856160859403E0, 1.37702099489081330271E0, 2.16236993594496635890E-1, 1.34204006088543189037E-2, 3.28014464682127739104E-4, 2.89247864745380683936E-6, 6.79019408009981274425E-9])) def _create_polynomial(var, coeffs): """Compute n_th order polynomial via Horner's method.""" coeffs = np.array(coeffs, dtype_util.as_numpy_dtype(var.dtype)) if not coeffs.size: return tf.zeros_like(var) return coeffs[0] + _create_polynomial(var, coeffs[1:]) * var maybe_complement_p = tf.where(p > -np.expm1(-2.), 1. - p, p) # Write in an arbitrary value in place of 0 for p since 0 will cause NaNs # later on. The result from the computation when p == 0 is not used so any # number that doesn't result in NaNs is fine. sanitized_mcp = tf.where( maybe_complement_p <= 0., tf.fill(tf.shape(input=p), dtype_util.as_numpy_dtype(p.dtype)(0.5)), maybe_complement_p) # Compute x for p > exp(-2): x/sqrt(2pi) = w + w**3 P0(w**2)/Q0(w**2). w = sanitized_mcp - 0.5 ww = w ** 2 x_for_big_p = w + w * ww * (_create_polynomial(ww, p0) / _create_polynomial(ww, q0)) x_for_big_p *= -np.sqrt(2. * np.pi) # Compute x for p <= exp(-2): x = z - log(z)/z - (1/z) P(1/z) / Q(1/z), # where z = sqrt(-2. * log(p)), and P/Q are chosen between two different # arrays based on whether p < exp(-32). z = tf.sqrt(-2. * tf.math.log(sanitized_mcp)) first_term = z - tf.math.log(z) / z second_term_small_p = ( _create_polynomial(1. / z, p2) / _create_polynomial(1. / z, q2) / z) second_term_otherwise = ( _create_polynomial(1. / z, p1) / _create_polynomial(1. / z, q1) / z) x_for_small_p = first_term - second_term_small_p x_otherwise = first_term - second_term_otherwise x = tf.where(sanitized_mcp > np.exp(-2.), x_for_big_p, tf.where(z >= 8.0, x_for_small_p, x_otherwise)) x = tf.where(p > 1. - np.exp(-2.), x, -x) infinity_scalar = tf.constant(np.inf, dtype=p.dtype) infinity = tf.fill(tf.shape(input=p), infinity_scalar) x_nan_replaced = tf.where( p <= 0.0, -infinity, tf.where(p >= 1.0, infinity, x)) return x_nan_replaced
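A small sketch showing that the recursive `_create_polynomial` above is Horner's rule evaluated from the lowest-order coefficient; the same result is available from `np.polyval` on reversed coefficients.

```python
import numpy as np


def create_polynomial(x, coeffs):
  """Horner's rule with coefficients given lowest order first."""
  if not coeffs:
    return np.zeros_like(x)
  return coeffs[0] + create_polynomial(x, coeffs[1:]) * x


x = np.linspace(-2., 2., 5)
coeffs = [1., -3., 0.5]  # 1 - 3x + 0.5x^2
assert np.allclose(create_polynomial(x, coeffs), np.polyval(coeffs[::-1], x))
```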
Log Normal distribution function.
def log_ndtr(x, series_order=3, name="log_ndtr"): """Log Normal distribution function. For details of the Normal distribution function see `ndtr`. This function calculates `(log o ndtr)(x)` by either calling `log(ndtr(x))` or using an asymptotic series. Specifically: - For `x > upper_segment`, use the approximation `-ndtr(-x)` based on `log(1-x) ~= -x, x << 1`. - For `lower_segment < x <= upper_segment`, use the existing `ndtr` technique and take a log. - For `x <= lower_segment`, we use the series approximation of erf to compute the log CDF directly. The `lower_segment` is set based on the precision of the input: ``` lower_segment = { -20, x.dtype=float64 { -10, x.dtype=float32 upper_segment = { 8, x.dtype=float64 { 5, x.dtype=float32 ``` When `x < lower_segment`, the `ndtr` asymptotic series approximation is: ``` ndtr(x) = scale * (1 + sum) + R_N scale = exp(-0.5 x**2) / (-x sqrt(2 pi)) sum = Sum{(-1)^n (2n-1)!! / (x**2)^n, n=1:N} R_N = O(exp(-0.5 x**2) (2N+1)!! / |x|^{2N+3}) ``` where `(2n-1)!! = (2n-1) (2n-3) (2n-5) ... (3) (1)` is a [double-factorial](https://en.wikipedia.org/wiki/Double_factorial). Args: x: `Tensor` of type `float32`, `float64`. series_order: Positive Python `integer`. Maximum depth to evaluate the asymptotic expansion. This is the `N` above. name: Python string. A name for the operation (default="log_ndtr"). Returns: log_ndtr: `Tensor` with `dtype=x.dtype`. Raises: TypeError: if `x.dtype` is not handled. TypeError: if `series_order` is a not Python `integer.` ValueError: if `series_order` is not in `[0, 30]`. """ if not isinstance(series_order, int): raise TypeError("series_order must be a Python integer.") if series_order < 0: raise ValueError("series_order must be non-negative.") if series_order > 30: raise ValueError("series_order must be <= 30.") with tf.name_scope(name): x = tf.convert_to_tensor(value=x, name="x") if dtype_util.base_equal(x.dtype, tf.float64): lower_segment = LOGNDTR_FLOAT64_LOWER upper_segment = LOGNDTR_FLOAT64_UPPER elif dtype_util.base_equal(x.dtype, tf.float32): lower_segment = LOGNDTR_FLOAT32_LOWER upper_segment = LOGNDTR_FLOAT32_UPPER else: raise TypeError("x.dtype=%s is not supported." % x.dtype) # The basic idea here was ported from: # https://root.cern.ch/doc/v608/SpecFuncCephesInv_8cxx_source.html # We copy the main idea, with a few changes # * For x >> 1, and X ~ Normal(0, 1), # Log[P[X < x]] = Log[1 - P[X < -x]] approx -P[X < -x], # which extends the range of validity of this function. # * We use one fixed series_order for all of 'x', rather than adaptive. # * Our docstring properly reflects that this is an asymptotic series, not a # Taylor series. We also provided a correct bound on the remainder. # * We need to use the max/min in the _log_ndtr_lower arg to avoid nan when # x=0. This happens even though the branch is unchosen because when x=0 # the gradient of a select involves the calculation 1*dy+0*(-inf)=nan # regardless of whether dy is finite. Note that the minimum is a NOP if # the branch is chosen. return tf.where( tf.greater(x, upper_segment), -_ndtr(-x), # log(1-x) ~= -x, x << 1 tf.where( tf.greater(x, lower_segment), tf.math.log(_ndtr(tf.maximum(x, lower_segment))), _log_ndtr_lower(tf.minimum(x, lower_segment), series_order)))
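A numerical check of the lower-tail asymptotic series described above against `scipy.special.log_ndtr`, keeping the first three terms of the double-factorial sum (series_order = 3).

```python
import numpy as np
from scipy import special

x = np.linspace(-30., -10., 21)
# 1 + sum_{n=1..3} (-1)^n (2n-1)!! / x^(2n), with (2n-1)!! = 1, 3, 15.
series = 1. - 1. / x**2 + 3. / x**4 - 15. / x**6
approx = -0.5 * x**2 - np.log(-x) - 0.5 * np.log(2. * np.pi) + np.log(series)
assert np.allclose(approx, special.log_ndtr(x))
```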