Probabilistic reasoning and statistical analysis in TensorFlow

Overview

TensorFlow Probability

TensorFlow Probability is a library for probabilistic reasoning and statistical analysis in TensorFlow. As part of the TensorFlow ecosystem, TensorFlow Probability provides integration of probabilistic methods with deep networks, gradient-based inference via automatic differentiation, and scalability to large datasets and models via hardware acceleration (e.g., GPUs) and distributed computation.

Our probabilistic machine learning tools are structured as follows.

Layer 0: TensorFlow. Numerical operations. In particular, the LinearOperator class enables matrix-free implementations that can exploit special structure (diagonal, low-rank, etc.) for efficient computation. It is built and maintained by the TensorFlow Probability team and is now part of tf.linalg in core TF.
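
For a flavor of what LinearOperator enables, here is a minimal sketch (illustrative only, not taken from the TFP docs) using the diagonal operator from tf.linalg:

import tensorflow as tf

# A diagonal operator stores only the diagonal but acts like a full [3, 3] matrix.
op = tf.linalg.LinearOperatorDiag([1., 2., 3.])
x = tf.ones([3, 1])
y = op.matmul(x)                    # matrix-free matvec: O(n) work instead of O(n^2)
log_det = op.log_abs_determinant()  # O(n) instead of O(n^3)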

Layer 1: Statistical Building Blocks

  • Distributions (tfp.distributions): A large collection of probability distributions and related statistics with batch and broadcasting semantics.
  • Bijectors (tfp.bijectors): Reversible and composable transformations of random variables.

Layer 2: Model Building

  • Joint Distributions (e.g., tfp.distributions.JointDistributionSequential): Joint distributions over one or more possibly-interdependent distributions. For an introduction to modeling with TFP's JointDistributions, check out this colab; a minimal sketch also follows this list.
  • Probabilistic Layers (tfp.layers): Neural network layers with uncertainty over the functions they represent, extending TensorFlow Layers.
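
As a minimal sketch of model building with a joint distribution (illustrative only; see the colab referenced above for a proper introduction):

import tensorflow_probability as tfp
tfd = tfp.distributions

# A two-variable model: scale ~ Exponential(1); x ~ Normal(0, scale).
model = tfd.JointDistributionSequential([
    tfd.Exponential(rate=1.),
    lambda scale: tfd.Normal(loc=0., scale=scale),
])

scale, x = model.sample()              # ancestral sample from the joint
joint_lp = model.log_prob([scale, x])  # joint log-density of the sample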

Layer 3: Probabilistic Inference

  • Markov chain Monte Carlo (tfp.mcmc): Algorithms for approximating integrals via sampling. Includes Hamiltonian Monte Carlo, random-walk Metropolis-Hastings, and the ability to build custom transition kernels; a short sampling sketch follows this list.
  • Variational Inference (tfp.vi): Algorithms for approximating integrals via optimization.
  • Optimizers (tfp.optimizer): Stochastic optimization methods, extending TensorFlow Optimizers. Includes Stochastic Gradient Langevin Dynamics.
  • Monte Carlo (tfp.monte_carlo): Tools for computing Monte Carlo expectations.
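
As a minimal sampling sketch (illustrative only), drawing from a standard normal with Hamiltonian Monte Carlo:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

target = tfd.Normal(loc=0., scale=1.)

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target.log_prob,
    step_size=0.1,
    num_leapfrog_steps=3)

# Returns 500 draws after discarding 100 burn-in steps.
samples = tfp.mcmc.sample_chain(
    num_results=500,
    num_burnin_steps=100,
    current_state=tf.constant(0.),
    kernel=kernel,
    trace_fn=None)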

TensorFlow Probability is under active development. Interfaces may change at any time.

Examples

See tensorflow_probability/examples/ for end-to-end examples, including tutorial notebooks and example scripts.

Installation

For additional details on installing TensorFlow, guidance on installing prerequisites, and (optionally) setting up virtual environments, see the TensorFlow installation guide.

Stable Builds

To install the latest stable version, run the following:

# Notes:
# - The `--upgrade` flag ensures you'll get the latest version.
# - The `--user` flag installs the packages to your user directory rather
#   than the system directory.
# - TensorFlow 2 packages require pip >= 19.0.
python -m pip install --upgrade --user pip
python -m pip install --upgrade --user tensorflow tensorflow_probability

For CPU-only usage (and a smaller install), install with tensorflow-cpu.
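
For example:

python -m pip install --upgrade --user tensorflow-cpu tensorflow_probability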

To use a pre-2.0 version of TensorFlow, run:

python -m pip install --upgrade --user "tensorflow<2" "tensorflow_probability<0.9"

Note: Since TensorFlow is not included as a dependency of the TensorFlow Probability package (in setup.py), you must explicitly install the TensorFlow package (tensorflow or tensorflow-cpu). This allows us to maintain one package instead of separate packages for CPU and GPU-enabled TensorFlow. See the TFP release notes for more details about dependencies between TensorFlow and TensorFlow Probability.

Nightly Builds

There are also nightly builds of TensorFlow Probability under the pip package tfp-nightly, which depends on one of tf-nightly or tf-nightly-cpu. Nightly builds include newer features, but may be less stable than the versioned releases. Both stable and nightly docs are available here.

python -m pip install --upgrade --user tf-nightly tfp-nightly

Installing from Source

You can also install from source. This requires the Bazel build system. It is highly recommended that you install the nightly build of TensorFlow (tf-nightly) before trying to build TensorFlow Probability from source.

# sudo apt-get install bazel git python-pip  # Ubuntu; others, see above links.
python -m pip install --upgrade --user tf-nightly
git clone https://github.com/tensorflow/probability.git
cd probability
bazel build --copt=-O3 --copt=-march=native :pip_pkg
PKGDIR=$(mktemp -d)
./bazel-bin/pip_pkg $PKGDIR   # build the wheel into a temporary directory
python -m pip install --upgrade --user $PKGDIR/*.whl

Community

As part of TensorFlow, we're committed to fostering an open and welcoming environment.

See the TensorFlow Community page for more details.

Contributing

We're eager to collaborate with you! See CONTRIBUTING.md for a guide on how to contribute. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

References

If you use TensorFlow Probability in a paper, please cite:

  • TensorFlow Distributions. Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, Rif A. Saurous. arXiv preprint arXiv:1711.10604, 2017.

(We're aware there's a lot more to TensorFlow Probability than Distributions, but the Distributions paper lays out our vision and is a fine thing to cite for now.)

Comments
  • Discrepancy between TFP and Stan's CholeskyLKJ log prob

    I'm trying to understand a discrepancy in the log_prob computation between Stan and TFP. I don't know if this is a bug, and if it is, where; I'm reporting it here in case anyone thinks it may be of interest.

    I generated Cholesky factors of LKJ matrices using the following Stan code:

    code <- "
    data {
      int K;
    }
    parameters {
      cholesky_factor_corr[K] L;
    }
    model {
      target += lkj_corr_cholesky_lpdf(L | 2.0);
    }
    "
    

    I extracted samples and log_prob:

    library(rstan)
    data = list(K=5)
    fit <- stan(model_code = code, data=data)
    e <- as.data.frame(extract(fit))
    

    Finally, I computed the log prob of these matrices using TFP:

    import tensorflow_probability as tfp
    tfd = tfp.distributions
    tfb = tfp.bijectors

    m = tfd.CholeskyLKJ(dimension=5, concentration=2)
    b = tfb.CorrelationCholesky()

    tfp_lps = []
    for datum in data[:k, :-1]:  # `data` and `k` come from the Stan extraction above
        L = datum.reshape(5, 5).T
        tfp_lps.append(m.log_prob(L).numpy())
    

    When I compare tfp_lps to Stan's lp__, I get a high correlation, but they're not the same. For comparison, running this check with something like a normal distribution passes np.allclose. I also suspected this might be related to different handling of the log_det_jacobian, but subtracting tfb.CorrelationCholesky().inverse_log_det_jacobian(L, 2) didn't help.
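
    Concretely, the correction I tried looked like this (reusing m, b, and L from the snippet above):

    correction = b.inverse_log_det_jacobian(L, event_ndims=2)
    corrected_lp = m.log_prob(L) - correction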

    Is there something wrong in the way I'm running the comparison, or in the expectation that they would be the same in the first place?

    Thanks in advance!

    opened by adamhaber 53
  • Correlation matrix [cholesky] bijector

    As discussed on https://groups.google.com/a/tensorflow.org/d/msg/tfprobability/JYNa3_g33qo/asqjrRs0BAAJ, it would be nice to have a bijector that goes from unconstrained vectors to LKJ-distributed correlation matrices. There is some guessing that our existing LKJ implementation might already have the right forward transformation, but we would need to add the inverse and the log-determinant of the Jacobian.
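
    For reference, a bijector along these lines now exists as tfb.CorrelationCholesky (it is used in the comment above); a minimal usage sketch, assuming a 3x3 target:

    import tensorflow_probability as tfp
    tfb = tfp.bijectors

    b = tfb.CorrelationCholesky()
    x = [1., -1., 0.5]   # unconstrained vector of length n(n-1)/2, here n = 3
    chol = b.forward(x)  # 3x3 Cholesky factor of a correlation matrix
    ildj = b.inverse_log_det_jacobian(chol, event_ndims=2)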

    good first issue 
    opened by brianwa84 46
  • feature request: KL divergence for Gaussian Mixture Model

    Hi all,

    Would it be possible to add a KL divergence between two mixture-of-Gaussians distributions, or even between one multivariate Gaussian and a mixture of Gaussians? Example:

    import tensorflow as tf
    import tensorflow_probability as tfp
    tfd = tfp.distributions

    A = tfd.Normal(loc=[1., -1], scale=[1, 2.])
    B = tfd.Normal(loc=[1., -1], scale=[1, 2.])
    C = tfd.MixtureSameFamily(
        mixture_distribution=tfd.Categorical(probs=[0.3, 0.7]),
        components_distribution=tfd.Normal(
            loc=[-1., 1],       # One for each component.
            scale=[0.1, 0.5]))  # And same here.
    D = tfd.MixtureSameFamily(
        mixture_distribution=tfd.Categorical(probs=[0.1, 0.9]),
        components_distribution=tfd.Normal(
            loc=[-1., 1],       # One for each component.
            scale=[0.1, 0.5]))  # And same here.
    

    Then

    tf.distributions.kl_divergence(A, B)
    

    works fine and returns <tf.Tensor 'KullbackLeibler/kl_normal_normal/add:0' shape=(2,) dtype=float32>. Yet tf.distributions.kl_divergence(A, D) and tf.distributions.kl_divergence(C, D) return

    No KL(distribution_a || distribution_b) registered for distribution_a type Normal and distribution_b type MixtureSameFamily
    

    This would be of great value, since more and more KL divergences of this type are being calculated (or at least approximated) while training neural nets.
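
    For what it's worth, a common workaround is a simple Monte Carlo estimate (a sketch reusing C and D from the snippet above; not a registered analytic KL):

    # KL(C || D) ~= E_{x ~ C}[log c(x) - log d(x)], estimated from samples.
    x = C.sample(10000)
    kl_estimate = tf.reduce_mean(C.log_prob(x) - D.log_prob(x))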

    Thanks Belhal

    opened by BelhalK 37
  • Implicit reparameterization gradients

    The idea is simple to implement and well-scoped as part of TF Distributions. I personally like the idea of even having it be the default for Gamma, Beta, and others. The gradient implementations may be tricky. Maybe @mfigurnov has an implementation we can build on? (if open source or google-internal)

    • https://arxiv.org/abs/1805.08498
    • https://twitter.com/mfigurnov/status/999198094905499648
    • https://twitter.com/ericjang11/status/999183266237198336
    opened by dustinvtran 33
  • Tensor is unhashable if Tensor equality is enabled. Instead, use tensor.experimental_ref() as the key

    https://github.com/tensorflow/tensorflow/issues/32139

    Error occurs: tf-gpu 2.0.0-rc0 with tfp 0.7

    Code to reproduce:

    import tensorflow_probability as tfp
    tfp.distributions.MultivariateNormalDiag([0.], [1.]).sample()
    

    Error returned:

    Traceback (most recent call last):
      File "/home/pycharm_project/VAE/save_issue_reproduction.py", line 3, in <module>
        tfp.distributions.MultivariateNormalDiag([0.], [1.]).sample()
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_probability/python/distributions/distribution.py", line 840, in sample
        return self._call_sample_n(sample_shape, seed, name, **kwargs)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_probability/python/distributions/transformed_distribution.py", line 391, in _call_sample_n
        y = self.bijector.forward(x, **bijector_kwargs)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_probability/python/bijectors/bijector.py", line 933, in forward
        return self._call_forward(x, name, **kwargs)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_probability/python/bijectors/bijector.py", line 904, in _call_forward
        mapping = self._lookup(x=x, kwargs=kwargs)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_probability/python/bijectors/bijector.py", line 1343, in _lookup
        mapping = self._from_x[x].get(subkey, mapping).merge(x=x)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_probability/python/bijectors/bijector.py", line 151, in __getitem__
        return super(WeakKeyDefaultDict, self).__getitem__(weak_key)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_probability/python/bijectors/bijector.py", line 181, in __hash__
        return hash(x)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py", line 713, in __hash__
        raise TypeError("Tensor is unhashable if Tensor equality is enabled. "
    TypeError: Tensor is unhashable if Tensor equality is enabled. Instead, use tensor.experimental_ref() as the key.

    opened by kristofgiber 27
  • TFP and TF2.0 incompatibility

    After using TFP with TF2.0, I've realized that I'm spending a lot of time on understanding why it doesn't work rather than making any progress. TFP relies too much on TF1.0, perhaps without even realizing it. Some concepts of TF2.0 and TFP are just incompatible, e.g. tf.Module.

    import tensorflow as tf
    import tensorflow_probability as tfp
    mcmc = tfp.mcmc

    class Model(tf.Module):
        def __init__(self):
            pass  # Create variables, set the prior, etc.

    data = ...  # set up data
    model = Model()

    def run_chain():
        def posterior_log_prob_fn(*parameters):
            model.assign_parameters(*parameters)
            log_prob = -model.neg_log_marginal_likelihood(data)
            return log_prob

        # state_gradients_are_stopped=True
        variables = model.trainable_variables
        hmc = mcmc.HamiltonianMonteCarlo(target_log_prob_fn=posterior_log_prob_fn,
                                         num_leapfrog_steps=2,
                                         step_size=1.0,
                                         state_gradients_are_stopped=True)
        adaptive_hmc = mcmc.SimpleStepSizeAdaptation(hmc, num_adaptation_steps=2)
        return mcmc.sample_chain(num_results=2,
                                 num_burnin_steps=0,
                                 current_state=variables,
                                 kernel=adaptive_hmc)
    

    The HMC will not be able to compute the gradients, as there are no links in the graph between the states and the model (with or without tf.function).
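
    For contrast, a minimal sketch of the pattern that does work (a toy model, not my actual code), where the target log-prob is a pure function of the chain state so gradients can flow back to it:

    import tensorflow as tf
    import tensorflow_probability as tfp
    tfd = tfp.distributions

    def target_log_prob_fn(state):
        prior = tfd.Normal(0., 1.).log_prob(state)
        likelihood = tfd.Normal(state, 1.).log_prob(2.5)
        return prior + likelihood

    hmc = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn=target_log_prob_fn,
                                         num_leapfrog_steps=2,
                                         step_size=1.0)
    states = tfp.mcmc.sample_chain(num_results=2,
                                   num_burnin_steps=0,
                                   current_state=tf.constant(0.),
                                   kernel=hmc,
                                   trace_fn=None)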

    Other examples you can find here: https://github.com/tensorflow/probability/issues/333, https://github.com/tensorflow/probability/issues/348, https://github.com/tensorflow/probability/issues/47

    There is no easy fix for that (and I hope I'm wrong), as the roots of these problems are in TF2.0 itself. I've created another GitHub issue at TF2.0: https://github.com/tensorflow/tensorflow/issues/29367, describing how and what makes the TFP framework less effective.

    opened by awav 27
  • Implemented BNN for MNIST with LeNet5

    Hi Dustin and Bayesflow team,

    I've added my implementation for MNIST with Bayesian neural networks for the LeNet5 structure. The performance reaches about 97% after 6000 steps. Let me know if there are other changes needed.

    Best, Jiaming

    cla: yes 
    opened by jmzeng 26
  • `ImportError: cannot import name 'abs'` when importing TFP in Python 3 (and in Python 2)

    So I used the following code to import my dependencies:

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.patches import Ellipse
    import seaborn as sns
    import tensorflow as tf                            # importing Tensorflow
    import tensorflow_probability as tfp               # and Tensorflow probability
    from tensorflow_probability import edward2 as ed   # Edwardlib extension
    
    tfd = tfp.distributions             # Basic probability distribution toolkit
    tfb = tfp.distributions.bijectors   # and their modifiers
    
    # Eager Execution
    # tfe = tf.contrib.eager
    # tfe.enable_eager_execution()
    
    %matplotlib inline
    plt.style.use("fivethirtyeight")        # Styling plots like FiveThirtyEight
    
    import warnings
    warnings.filterwarnings('ignore')
    %config InlineBackend.figure_format="retina" # improves resolution of plots
    

    But I get this error when trying to import tensorflow_probability:

    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-3-47fdbecb20a4> in <module>()
          7 from matplotlib.patches import Ellipse
          8 import seaborn as sns
    ----> 9 import tensorflow as tf                            # importing Tensorflow
         10 import tensorflow_probability as tfp               # and Tensorflow probability
         11 from tensorflow_probability import edward2 as ed   # Edwardlib extension
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py in <module>()
         22 
         23 # pylint: disable=g-bad-import-order
    ---> 24 from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
         25 # pylint: disable=wildcard-import
         26 from tensorflow.tools.api.generator.api import *  # pylint: disable=redefined-builtin
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py in <module>()
         79 # Bring in subpackages.
         80 from tensorflow.python import data
    ---> 81 from tensorflow.python import keras
         82 from tensorflow.python.estimator import estimator_lib as estimator
         83 from tensorflow.python.feature_column import feature_column_lib as feature_column
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/__init__.py in <module>()
         22 from __future__ import print_function
         23 
    ---> 24 from tensorflow.python.keras import activations
         25 from tensorflow.python.keras import applications
         26 from tensorflow.python.keras import backend
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/activations/__init__.py in <module>()
         20 
         21 # Activation functions.
    ---> 22 from tensorflow.python.keras._impl.keras.activations import elu
         23 from tensorflow.python.keras._impl.keras.activations import hard_sigmoid
         24 from tensorflow.python.keras._impl.keras.activations import linear
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/_impl/keras/__init__.py in <module>()
         19 from __future__ import print_function
         20 
    ---> 21 from tensorflow.python.keras._impl.keras import activations
         22 from tensorflow.python.keras._impl.keras import applications
         23 from tensorflow.python.keras._impl.keras import backend
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/_impl/keras/activations.py in <module>()
         21 import six
         22 
    ---> 23 from tensorflow.python.keras._impl.keras import backend as K
         24 from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
         25 from tensorflow.python.layers.base import Layer
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/_impl/keras/backend.py in <module>()
         35 from tensorflow.python.framework import ops
         36 from tensorflow.python.framework import sparse_tensor
    ---> 37 from tensorflow.python.layers import base as tf_base_layers
         38 from tensorflow.python.ops import array_ops
         39 from tensorflow.python.ops import clip_ops
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/layers/base.py in <module>()
         23 from tensorflow.python.framework import dtypes
         24 from tensorflow.python.framework import ops
    ---> 25 from tensorflow.python.keras.engine import base_layer
         26 from tensorflow.python.ops import variable_scope as vs
         27 from tensorflow.python.ops import variables as tf_variables
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/__init__.py in <module>()
         19 from __future__ import print_function
         20 
    ---> 21 from tensorflow.python.keras.engine.base_layer import InputSpec
         22 from tensorflow.python.keras.engine.base_layer import Layer
         23 from tensorflow.python.keras.engine.input_layer import Input
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in <module>()
         30 from tensorflow.python.framework import tensor_shape
         31 from tensorflow.python.framework import tensor_util
    ---> 32 from tensorflow.python.keras import backend
         33 from tensorflow.python.keras import constraints
         34 from tensorflow.python.keras import initializers
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend/__init__.py in <module>()
         20 
         21 # pylint: disable=redefined-builtin
    ---> 22 from tensorflow.python.keras._impl.keras.backend import abs
         23 from tensorflow.python.keras._impl.keras.backend import all
         24 from tensorflow.python.keras._impl.keras.backend import any
    
    ImportError: cannot import name 'abs'
    
    

    This only just started happening after I entered pip install --upgrade tfp-nightly into the terminal, which leads me to conclude that something within the combination of tb-nightly-1.9.0a20180519, tf-nightly-1.9.0.dev20180519, and tfp-nightly-0.0.1.dev20180519 is not working.

    It was working previously but is not any longer. This error occurs in both Python 3 and Python 2, so it's not as simple as just fixing the runtime type.

    opened by matthew-mcateer 26
  • Example for CIFAR-10 with ResNet18 and VGG16

    This is an example for CIFAR-10 with ResNet18 and VGG16. With the current setup, the ResNet18 hits about 80% validation accuracy and 90% training accuracy.

    The visualization for MNIST has been removed because the plots did not really look as I expected: the validation accuracy would be pretty high, but the plotted distributions still looked random. I'm unsure why that is happening, so I removed it for now. If you would like it for debugging, I can include it.

    Please let me know if there are style changes needed.

    cla: yes 
    opened by jmzeng 25
  • Priors definition

    In the bayesian_neural_network.py example, where are the priors on the weights defined? Are they Gaussians?

    How should I proceed if I would like to use mixture of gaussians as priors for my BNN?

    Thanks a lot for your time Belhal

    opened by BelhalK 23
  • Incompatibility with cloudpickle==1.5.0

    Hi all,

    Due to a new update, it is not possible to import tensorflow_probability anymore. Using cloudpickle <= 1.4.1 fixed the issue.

    >>> import tensorflow_probability as tfp
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_probability/__init__.py", line 76, in <module>
        from tensorflow_probability.python import *  # pylint: disable=wildcard-import
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/__init__.py", line 23, in <module>
        from tensorflow_probability.python import distributions
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/distributions/__init__.py", line 88, in <module>
        from tensorflow_probability.python.distributions.pixel_cnn import PixelCNN
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/distributions/pixel_cnn.py", line 37, in <module>
        from tensorflow_probability.python.layers import weight_norm
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/layers/__init__.py", line 31, in <module>
        from tensorflow_probability.python.layers.distribution_layer import CategoricalMixtureOfOneHotCategorical
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/layers/distribution_layer.py", line 28, in <module>
        from cloudpickle.cloudpickle import CloudPickler
    ImportError: cannot import name 'CloudPickler'
    
    opened by ltetrel 22
  • Can't slice along batch for MixtureSameFamily produced by Keras layer

    Not sure if this is a Keras bug or a tfp bug.

    I'm trying to make some dense layers that output the parameters of a Gaussian mixture (a mixture density network). I want to run a batch of data through the network (for speed), get out a batch of distributions, and then slice to work with only some elements of the batch at a time. If I were doing this with just tfp, I would call:

    import numpy as np
    import tensorflow_probability as tfp
    tfd = tfp.distributions
    
    num_mixture_components = 12
    batch_size = 4
    
    probs = np.random.rand(batch_size, num_mixture_components)
    loc = np.random.rand(batch_size, num_mixture_components)
    scale = np.random.rand(batch_size, num_mixture_components)
    
    gm = tfd.MixtureSameFamily(
          mixture_distribution=tfd.Categorical(probs=probs),
          components_distribution=tfd.Normal(loc=loc, scale=scale))  
    
    print(gm)
    # slice along batch
    print(gm[1])
    

    This works as expected giving

    tfp.distributions.MixtureSameFamily("MixtureSameFamily", batch_shape=[4], event_shape=[], dtype=float64)
    tfp.distributions.MixtureSameFamily("MixtureSameFamily", batch_shape=[], event_shape=[], dtype=float64)
    

    However when I try the same thing with a mixture density network in Keras I get an error

    import numpy as np
    import tensorflow as tf
    import tensorflow.keras.layers as tfkl
    import tensorflow.keras as tfk
    import tensorflow_probability as tfp

    num_mixture_components = 12

    l = tfkl.Input(shape=(100))

    # Make a fully connected network that outputs the parameters of a Gaussian mixture
    mu = tfkl.Dense(units=num_mixture_components, activation=None)(l)
    sigma = tfkl.Dense(units=num_mixture_components, activation='softplus')(l)
    alpha = tfkl.Dense(units=num_mixture_components, activation='softmax')(l)
    stacked = tfkl.Concatenate()([mu, sigma, alpha])

    mixture = tfp.layers.MixtureNormal(num_mixture_components,
                                       event_shape=[], name="test")(stacked)
    model = tf.keras.Model(inputs=l, outputs=mixture)

    out = model(np.random.rand(4, 100))
    gm = out.tensor_distribution

    print(gm)
    # slice along batch
    print(gm[1])
    

    Gives me this cryptic error:

    tfp.distributions._MixtureSameFamily("MixtureSameFamily", batch_shape=[4], event_shape=[], dtype=float32)
    
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    Cell In [72], line 27
         25 print(gm)
         26 # slice along batch
    ---> 27 print(gm[1])
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/distributions/distribution.py:852, in Distribution.__getitem__(self, slices)
        825 def __getitem__(self, slices):
        826   """Slices the batch axes of this distribution, returning a new instance.
        827 
        828   ```python
       (...)
        850     dist: A new `tfd.Distribution` instance with sliced parameters.
        851   """
    --> 852   return slicing.batch_slice(self, {}, slices)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/slicing.py:220, in batch_slice(batch_object, params_overrides, slices, bijector_x_event_ndims)
        217 slice_overrides_seq = slice_overrides_seq + [(slices, params_overrides)]
        218 # Re-doing the full sequence of slice+copy override work here enables
        219 # gradients all the way back to the original batch_objectribution's arguments.
    --> 220 batch_object = _apply_slice_sequence(
        221     orig_batch_object,
        222     slice_overrides_seq,
        223     bijector_x_event_ndims=bijector_x_event_ndims)
        224 setattr(batch_object,
        225         PROVENANCE_ATTR,
        226         batch_object._no_dependency((orig_batch_object, slice_overrides_seq)))  # pylint: disable=protected-access
        227 return batch_object
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/slicing.py:179, in _apply_slice_sequence(batch_object, slice_overrides_seq, bijector_x_event_ndims)
        177 """Applies a sequence of slice or copy-with-overrides operations to `batch_object`."""
        178 for slices, overrides in slice_overrides_seq:
    --> 179   batch_object = _apply_single_step(
        180       batch_object,
        181       slices,
        182       overrides,
        183       bijector_x_event_ndims=bijector_x_event_ndims)
        184 return batch_object
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/slicing.py:168, in _apply_single_step(batch_object, slices, params_overrides, bijector_x_event_ndims)
        166   override_dict = {}
        167 else:
    --> 168   override_dict = _slice_params_to_dict(
        169       batch_object, slices, bijector_x_event_ndims=bijector_x_event_ndims)
        170 override_dict.update(params_overrides)
        171 parameters = dict(batch_object.parameters, **override_dict)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/slicing.py:153, in _slice_params_to_dict(batch_object, slices, bijector_x_event_ndims)
        150 else:
        151   batch_shape = batch_object.experimental_batch_shape_tensor(
        152       x_event_ndims=bijector_x_event_ndims)
    --> 153 return batch_shape_lib.map_fn_over_parameters_with_event_ndims(
        154     batch_object,
        155     functools.partial(_slice_single_param,
        156                       slices=slices,
        157                       batch_shape=batch_shape),
        158     bijector_x_event_ndims=bijector_x_event_ndims)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/batch_shape_lib.py:367, in map_fn_over_parameters_with_event_ndims(batch_object, fn, bijector_x_event_ndims, require_static, **parameter_kwargs)
        361     elif (properties.is_tensor
        362           and not tf.is_tensor(param)
        363           and not tf.nest.is_nested(param_event_ndims)):
        364       # As a last resort, try an explicit conversion.
        365       param = tensor_util.convert_nonref_to_tensor(param, name=param_name)
    --> 367   results[param_name] = nest.map_structure_up_to(
        368       param, fn, param, param_event_ndims)
        369 return results
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow/python/util/nest.py:1435, in map_structure_up_to(shallow_tree, func, *inputs, **kwargs)
       1361 @tf_export("__internal__.nest.map_structure_up_to", v1=[])
       1362 def map_structure_up_to(shallow_tree, func, *inputs, **kwargs):
       1363   """Applies a function or op to a number of partially flattened inputs.
       1364 
       1365   The `inputs` are flattened up to `shallow_tree` before being mapped.
       (...)
       1433     `shallow_tree`.
       1434   """
    -> 1435   return map_structure_with_tuple_paths_up_to(
       1436       shallow_tree,
       1437       lambda _, *values: func(*values),  # Discards the path arg.
       1438       *inputs,
       1439       **kwargs)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow/python/util/nest.py:1535, in map_structure_with_tuple_paths_up_to(shallow_tree, func, *inputs, **kwargs)
       1526 flat_value_gen = (
       1527     flatten_up_to(  # pylint: disable=g-complex-comprehension
       1528         shallow_tree,
       1529         input_tree,
       1530         check_types,
       1531         expand_composites=expand_composites) for input_tree in inputs)
       1532 flat_path_gen = (
       1533     path
       1534     for path, _ in _yield_flat_up_to(shallow_tree, inputs[0], is_nested_fn))
    -> 1535 results = [
       1536     func(*args, **kwargs) for args in zip(flat_path_gen, *flat_value_gen)
       1537 ]
       1538 return pack_sequence_as(structure=shallow_tree, flat_sequence=results,
       1539                         expand_composites=expand_composites)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow/python/util/nest.py:1536, in <listcomp>(.0)
       1526 flat_value_gen = (
       1527     flatten_up_to(  # pylint: disable=g-complex-comprehension
       1528         shallow_tree,
       1529         input_tree,
       1530         check_types,
       1531         expand_composites=expand_composites) for input_tree in inputs)
       1532 flat_path_gen = (
       1533     path
       1534     for path, _ in _yield_flat_up_to(shallow_tree, inputs[0], is_nested_fn))
       1535 results = [
    -> 1536     func(*args, **kwargs) for args in zip(flat_path_gen, *flat_value_gen)
       1537 ]
       1538 return pack_sequence_as(structure=shallow_tree, flat_sequence=results,
       1539                         expand_composites=expand_composites)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow/python/util/nest.py:1437, in map_structure_up_to.<locals>.<lambda>(_, *values)
       1361 @tf_export("__internal__.nest.map_structure_up_to", v1=[])
       1362 def map_structure_up_to(shallow_tree, func, *inputs, **kwargs):
       1363   """Applies a function or op to a number of partially flattened inputs.
       1364 
       1365   The `inputs` are flattened up to `shallow_tree` before being mapped.
       (...)
       1433     `shallow_tree`.
       1434   """
       1435   return map_structure_with_tuple_paths_up_to(
       1436       shallow_tree,
    -> 1437       lambda _, *values: func(*values),  # Discards the path arg.
       1438       *inputs,
       1439       **kwargs)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/slicing.py:101, in _slice_single_param(param, param_event_ndims, slices, batch_shape)
         85 """Slices into the batch shape of a single parameter.
         86 
         87 Args:
       (...)
         98     `slices`.
         99 """
        100 # Broadcast the parmameter to have full batch rank.
    --> 101 param = batch_shape_lib.broadcast_parameter_with_batch_shape(
        102     param, param_event_ndims, ps.ones_like(batch_shape))
        103 param_batch_shape = batch_shape_lib.get_batch_shape_tensor_part(
        104     param, param_event_ndims)
        105 # At this point the param should have full batch rank, *unless* it's an
        106 # atomic object like `tfb.Identity()` incapable of having any batch rank.
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/batch_shape_lib.py:274, in broadcast_parameter_with_batch_shape(param, param_event_ndims, batch_shape)
        270 base_shape = ps.concat([batch_shape,
        271                         ps.ones([param_event_ndims], dtype=np.int32)],
        272                        axis=0)
        273 if hasattr(param, '_broadcast_parameters_with_batch_shape'):
    --> 274   return param._broadcast_parameters_with_batch_shape(base_shape)  # pylint: disable=protected-access
        275 elif hasattr(param, 'matmul'):
        276   # TODO(davmre): support broadcasting LinearOperator parameters.
        277   return param
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/distributions/distribution.py:952, in Distribution._broadcast_parameters_with_batch_shape(self, batch_shape)
        926 def _broadcast_parameters_with_batch_shape(self, batch_shape):
        927   """Broadcasts each parameter's batch shape with the given `batch_shape`.
        928 
        929   This is semantically equivalent to wrapping with the `BatchBroadcast`
       (...)
        950       the given `batch_shape`.
        951   """
    --> 952   return self.copy(
        953       **batch_shape_lib.broadcast_parameters_with_batch_shape(
        954           self, batch_shape))
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/distributions/distribution.py:915, in Distribution.copy(self, **override_parameters_kwargs)
        897 """Creates a deep copy of the distribution.
        898 
        899 Note: the copy distribution may continue to depend on the original
       (...)
        909     `dict(self.parameters, **override_parameters_kwargs)`.
        910 """
        911 try:
        912   # We want track provenance from origin variables, so we use batch_slice
        913   # if this distribution supports slicing. See the comment on
        914   # PROVENANCE_ATTR in batch_slicing.py
    --> 915   return slicing.batch_slice(self, override_parameters_kwargs, Ellipsis)
        916 except NotImplementedError:
        917   pass
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/slicing.py:220, in batch_slice(batch_object, params_overrides, slices, bijector_x_event_ndims)
        217 slice_overrides_seq = slice_overrides_seq + [(slices, params_overrides)]
        218 # Re-doing the full sequence of slice+copy override work here enables
        219 # gradients all the way back to the original batch_objectribution's arguments.
    --> 220 batch_object = _apply_slice_sequence(
        221     orig_batch_object,
        222     slice_overrides_seq,
        223     bijector_x_event_ndims=bijector_x_event_ndims)
        224 setattr(batch_object,
        225         PROVENANCE_ATTR,
        226         batch_object._no_dependency((orig_batch_object, slice_overrides_seq)))  # pylint: disable=protected-access
        227 return batch_object
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/slicing.py:179, in _apply_slice_sequence(batch_object, slice_overrides_seq, bijector_x_event_ndims)
        177 """Applies a sequence of slice or copy-with-overrides operations to `batch_object`."""
        178 for slices, overrides in slice_overrides_seq:
    --> 179   batch_object = _apply_single_step(
        180       batch_object,
        181       slices,
        182       overrides,
        183       bijector_x_event_ndims=bijector_x_event_ndims)
        184 return batch_object
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/tensorflow_probability/python/internal/slicing.py:172, in _apply_single_step(batch_object, slices, params_overrides, bijector_x_event_ndims)
        170 override_dict.update(params_overrides)
        171 parameters = dict(batch_object.parameters, **override_dict)
    --> 172 return type(batch_object)(**parameters)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/decorator.py:231, in decorate.<locals>.fun(*args, **kw)
        229 def fun(*args, **kw):
        230     if not kwsyntax:
    --> 231         args, kw = fix(args, kw, sig)
        232     return caller(func, *(extras + args), **kw)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/site-packages/decorator.py:203, in fix(args, kwargs, sig)
        199 def fix(args, kwargs, sig):
        200     """
        201     Fix args and kwargs to be consistent with the signature
        202     """
    --> 203     ba = sig.bind(*args, **kwargs)
        204     ba.apply_defaults()  # needed for test_dan_schult
        205     return ba.args, ba.kwargs
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/inspect.py:3179, in Signature.bind(self, *args, **kwargs)
       3174 def bind(self, /, *args, **kwargs):
       3175     """Get a BoundArguments object, that maps the passed `args`
       3176     and `kwargs` to the function's signature.  Raises `TypeError`
       3177     if the passed arguments can not be bound.
       3178     """
    -> 3179     return self._bind(args, kwargs)
    
    File ~/mambaforge/envs/phenotypes/lib/python3.10/inspect.py:3168, in Signature._bind(self, args, kwargs, partial)
       3166         arguments[kwargs_param.name] = kwargs
       3167     else:
    -> 3168         raise TypeError(
       3169             'got an unexpected keyword argument {arg!r}'.format(
       3170                 arg=next(iter(kwargs))))
       3172 return self._bound_arguments_cls(self, arguments)
    
    TypeError: got an unexpected keyword argument 'reinterpreted_batch_ndims'
    

    Using tensorflow 2.11.0 and tensorflow-probability 0.19.0.

    Any ideas why this is happening and how to fix?

    opened by henrypinkard 0
  • MultivariateNormalFullCovariance gives false log_prob with JAX backend, GPU and fp64

    Code

    from jax.config import config
    config.update("jax_enable_x64", True)
    import os
    #os.environ['XLA_PYTHON_CLIENT_ALLOCATOR']='platform'
    import jax
    import jax.numpy as jnp
    from tensorflow_probability.substrates import jax as tfp
    
    tfd = tfp.distributions
    
    Y=jnp.ones((4032,258),dtype=jnp.float64)
    distribution = tfd.MultivariateNormalFullCovariance(loc = jnp.zeros((Y.shape[1])),covariance_matrix = jnp.eye(Y.shape[1],dtype = jnp.float64))
    distribution.log_prob(Y)
    

    Problem

    When I enable fp64 for JAX, with GPU, I can only calculate the log_prob of Y for sizes no larger than (4032, 258). For example, a size of (4096, 258) gives an incorrect result:

    DeviceArray([-366.08614157, -366.08614157, -366.08614157, ...,
                           nan,           nan,           nan], dtype=float64)
    

    The nan values are incorrect. However, with CPU everything works fine. I suspect this is the error in #1666.

    When I uncommented os.environ['XLA_PYTHON_CLIENT_ALLOCATOR']='platform', I got an error:

    ---------------------------------------------------------------------------
    UnfilteredStackTrace                      Traceback (most recent call last)
    <ipython-input-1-6651013d38f8> in <module>
         12 distribution = tfd.MultivariateNormalFullCovariance(loc = jnp.zeros((Y.shape[1])),covariance_matrix = jnp.eye(Y.shape[1],dtype = jnp.float64))
    ---> 13 distribution.log_prob(Y)
         14 #jnp.where(log_prob<0)
    
    59 frames
    UnfilteredStackTrace: jaxlib.xla_extension.XlaRuntimeError: INTERNAL: Failed to complete all kernels launched on stream 0x89279a0: Could not synchronize CUDA stream: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    
    The stack trace below excludes JAX-internal frames.
    The preceding is the original exception that occurred, unmodified.
    
    --------------------
    
    The above exception was the direct cause of the following exception:
    
    XlaRuntimeError                           Traceback (most recent call last)
    /usr/local/lib/python3.8/dist-packages/jax/_src/scipy/linalg.py in solve_triangular(***failed resolving arguments***)
        404                      debug: Any = None, check_finite: bool = True) -> Array:
        405   del overwrite_b, debug, check_finite  # unused
    --> 406   return _solve_triangular(a, b, trans, lower, unit_diagonal)
        407 
        408 
    
    XlaRuntimeError: INTERNAL: Failed to complete all kernels launched on stream 0x89279a0: Could not synchronize CUDA stream: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    

    Material

    A colab which reproduces the error can be found here.

    opened by dkn16 0
  • distributions.mvn_diag using deprecated scale_identity_multiplier kwarg

    TFP 0.19.0 uses the deprecated kwarg scale_identity_multiplier, which results in the following log spam:

    From site-packages/tensorflow_probability/python/distributions/distribution.py:342: calling MultivariateNormalDiag.__init__ (from tensorflow_probability.python.distributions.mvn_diag) with scale_identity_multiplier is deprecated and will be removed after 2020-01-01.
    Instructions for updating:
    `scale_identity_multiplier` is deprecated; please combine it into `scale_diag` directly instead.
    

    Related https://github.com/tensorflow/tensorflow/issues/55190
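
    For anyone hitting this, the migration that the deprecation message suggests looks roughly like the following (a sketch with made-up values):

    import tensorflow_probability as tfp
    tfd = tfp.distributions

    # Deprecated:
    #   tfd.MultivariateNormalDiag(loc=[0., 0.], scale_identity_multiplier=2.)
    # Suggested replacement: fold the multiplier into scale_diag.
    dist = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[2., 2.])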

    opened by stewartmiles 1
  • RelaxedOneHotCategorical has incorrect output for logits under 1 when logits are type of float32

    Hi, I was trying out the RelaxedOneHotCategorical function in tensorflow_probability version 0.19.0. The following code gives me an incorrect distribution.

    from tensorflow_probability.substrates import jax as tfp
    from jax import random

    temperature = 0.5
    p = [0.1, 0.5, 0.4]
    dist = tfp.distributions.RelaxedOneHotCategorical(temperature, logits=p)
    dist.sample(seed=random.PRNGKey(0))
    

    The expected behavior is that the 2nd class is the most likely to be the largest component in samples. However, I got the reverse probability instead.

    Array([0.9872344, 0.00204739, 0.01071812], dtype=float32)
    

    This behavior disappears when the logits are > 1, or if we cast p to be float16.

    Is this expected?

    opened by umyta 0
  • Model stuck when calling .fit(x, y) using negative binomial in DistributionLambda Layer

    Hi all,

    I have a simple BNN that I just tried to change to have a negative binomial distribution as output:

    import tensorflow_probability as tfp
    from tensorflow.keras import Model
    from tensorflow.keras.layers import BatchNormalization, Concatenate, Dense, Input

    tfd = tfp.distributions
    tfpl = tfp.layers

    def get_model(input_shape, loss, optimizer, metrics, kl_weight, output_shape):
        inputs = Input(shape=(input_shape))
        x = BatchNormalization()(inputs)
        # get_posterior and get_prior are defined elsewhere in my code.
        x = tfpl.DenseVariational(units=128, activation='tanh',
                                  make_posterior_fn=get_posterior,
                                  make_prior_fn=get_prior,
                                  kl_weight=kl_weight)(x)
        count = Dense(1)(x)
        logits = Dense(output_shape, activation='sigmoid')(x)
        neg_binom = tfpl.DistributionLambda(
            lambda t: tfd.NegativeBinomial(total_count=t[..., 0:1], probs=t[..., 1:]))
        cat = Concatenate(axis=-1)([count, logits])
        outputs = neg_binom(cat)
        model = Model(inputs, outputs)
        model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
        return model
    

    I do not get an error; it compiles, and when I call model.fit(x, y) I just get:

    Epoch 1/500

    and it's stuck there forever (about 20 minutes is the longest I waited).

    When I use a Poisson layer, as I did before, it starts fitting instantly and an epoch runs in about 1 s.

    What could be the cause of this? Is there something wrong with my code above? I was hoping to call param_size, but DistributionLambda seems not to support this (just in case I am missing something).

    If I use a single Dense layer without Concatenate and just a linear activation, I get the same behavior.

    Many thanks for your insights and tips on things to try and debug this behavior.

    opened by aegonwolf 1
  • Error when running shared HMC on distributed GPUs with big dataset

    I'm currently running a distributed HMC on 4 Tesla V100 cards, and my code looks like this:

    import functools
    import collections
    import contextlib
    from jax.config import config
    config.update("jax_enable_x64", True)
    import jax
    import jax.numpy as jnp
    from jax import lax
    from jax import random
    import tensorflow as tf
    from tensorflow_probability.substrates import jax as tfp

    tfd = tfp.distributions
    tfb = tfp.bijectors
    tfm = tfp.mcmc
    tfed = tfp.experimental.distribute
    tfde = tfp.experimental.distributions
    tfem = tfp.experimental.mcmc

    Root = tfed.JointDistributionCoroutine.Root

    def shard_value(x):
      x = x.reshape((jax.device_count(), -1, *x.shape[1:]))
      return jax.pmap(lambda x: x)(x)  # pmap will physically place values on devices

    shard = functools.partial(jax.tree_map, shard_value)
    Y = jnp.ones((256 * 256, 258), dtype=jnp.float64)

    dtype = jnp.float64

    @functools.partial(jax.pmap, axis_name='data', in_axes=(None, None, 0),
                       out_axes=(None, None))
    def run(seed, X, data):
      # data = Y  # a sharded dataset
      num_examples, dim = data.shape

      # This is our model.
      def model_fn():
        k = yield Root(tfd.Sample(tfd.HalfNormal(dtype(2.)), 1))
        k = k * jnp.eye(dim)
        yield tfed.Sharded(
            tfd.Independent(
                tfd.MultivariateNormalFullCovariance(
                    loc=jnp.zeros((num_examples, dim), dtype=jnp.float64),
                    covariance_matrix=k), 1),
            shard_axis_name='data')

      model = tfed.JointDistributionCoroutine(model_fn)
      init_seed, sample_seed = random.split(seed)
      initial_state = model.sample(seed=init_seed)[:-1]

      def target_log_prob(*state):
        return model.log_prob((*state, data))

      kernel = tfm.NoUTurnSampler(target_log_prob, 1e-2, max_tree_depth=8)
      kernel = tfm.DualAveragingStepSizeAdaptation(kernel, 800, target_accept_prob=0.8)

      def trace_fn(state, pkr):
        log_prob = target_log_prob(*state)
        return (
            log_prob,
            pkr.inner_results.has_divergence,
            10**pkr.inner_results.log_accept_ratio,
            # accuracy(*state),
            pkr.new_step_size,
        )

      states, traces = tfm.sample_chain(
          num_results=20,
          # num_burnin_steps=1000,
          current_state=initial_state,
          kernel=kernel,
          trace_fn=trace_fn,
          seed=sample_seed,
      )
      return states, traces

    The code works fine when the second axis of Y is shorter than 256; however, when Y.shape[1] > 256, it reports:

    2022-12-23 12:12:25.279516: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238201200; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279535: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238200c00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279539: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238201400; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279541: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238200e00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279544: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238201000; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279551: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c2862a3c00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279562: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c2862a3600; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279565: E external/org_tensorflow/tensorflow/compiler/xla/pjrt/pjrt_stream_executor_client.cc:2153] Execution of replica 1 failed: INTERNAL: Unable to launch triangular solve
    2022-12-23 12:12:25.279572: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c2862a3e00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279576: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c2862a3800; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279599: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c2862a3a00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279614: E external/org_tensorflow/tensorflow/compiler/xla/pjrt/pjrt_stream_executor_client.cc:2153] Execution of replica 0 failed: INTERNAL: cublas error
    2022-12-23 12:12:25.279668: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238401200; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279679: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238400c00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279682: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238401400; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279685: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238400e00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279687: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238401000; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.279697: E external/org_tensorflow/tensorflow/compiler/xla/pjrt/pjrt_stream_executor_client.cc:2153] Execution of replica 2 failed: INTERNAL: cublas error
    2022-12-23 12:12:25.280175: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:219] failed to create cublas handle: cublas error
    2022-12-23 12:12:25.280206: W external/org_tensorflow/tensorflow/compiler/xla/stream_executor/stream.cc:1088] attempting to perform BLAS operation using StreamExecutor without BLAS support
    2022-12-23 12:12:25.280218: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238601200; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.280221: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238600c00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.280224: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238601400; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.280227: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238600e00; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.280230: E external/org_tensorflow/tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:752] failed to free device memory at 0x14c238601000; result: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2022-12-23 12:12:25.280241: E external/org_tensorflow/tensorflow/compiler/xla/pjrt/pjrt_stream_executor_client.cc:2153] Execution of replica 3 failed: INTERNAL: Unable to launch triangular solve

    I checked that it is probably not an OOM problem, because when Y.shape[0]=256, it only took half of the GPU memory.

    My environment is jax=0.3.23, CUDA=11.8, jaxlib=0.3.22+cuda11.cudnn82, tensorflow_probability=0.19, tensorflow=2.11.

    Does anyone have the same problem as me?

    opened by dkn16 5
Releases
  • v0.19.0 (Dec 6, 2022)

    Release notes

    This is the 0.19.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.11 and JAX 0.3.25.

    Change notes

    • Bijectors

      • Added UnitVector bijector to map to the unit sphere.
    • Distributions

      • Added noncentral Chi2 distribution to TFP.
      • Added differentiable quantile and CDF function approximations to the noncentral Chi2 distribution.
      • Added quantiles to Student-T, Beta and SigmoidBeta, with efficient implementations for Student-T quantile/cdf.
      • Allow structured index points to GaussianProcess* classes.
      • Improved efficiency of GaussianProcess* gradients through custom gradients on log_prob.
    • Linear Algebra

      • Added functions (with custom gradients) to handle Hermitian Symmetric Positive-definite matrices:
        • tfp.math.hpsd_logdet
        • tfp.math.hpsd_quadratic_form_solve and tfp.math.hpsd_quadratic_form_solvevec
        • tfp.math.hpsd_solve and tfp.math.hpsd_solvevec
    • Optimizer

      • BUGFIX: Prevent Hager-Zhang linesearch from terminating early.
    • PSD Kernels

      • Added support for structured inputs in PSD Kernel.
    • STS

      • Added seasonality support to STS Gibbs Sampler.
    • Other

      • BUGFIX: Allow jnp.bfloat16 arrays to be correctly recognized as floats.
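
    As a quick illustration of the new Student-T quantiles, a minimal sketch (tfd.StudentT and its quantile/cdf methods are standard TFP API; the numbers are arbitrary):

      import tensorflow_probability as tfp
      tfd = tfp.distributions

      # Student-T quantile/cdf now have efficient implementations.
      t = tfd.StudentT(df=3., loc=0., scale=1.)
      t.quantile(0.975)  # upper-tail critical value
      t.cdf(2.)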

    Huge thanks to all the contributors to this release!

    • Brian Patton
    • Chen Qian
    • Christopher Suter
    • Colin Carroll
    • Emily Fertig
    • Francois Chollet
    • Ian Langmore
    • Jacob Burnim
    • Jonas Eschle
    • Kyle Loveless
    • Leandro Campos
    • Du Phan
    • Pavel Sountsov
    • Sebastian Nowozin
    • Srinivas Vasudevan
    • Thomas Colthurst
    • Umer Javed
    • Urs Koster
    • Yash Katariya
    Source code(tar.gz)
    Source code(zip)
  • v0.18.0(Sep 12, 2022)

    Release notes

    This is the 0.18.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.10 and JAX 0.3.17.

    Change notes

    [coming soon]

    Huge thanks to all the contributors to this release!

    [coming soon]

    Source code(tar.gz)
    Source code(zip)
  • v0.17.0(Jun 7, 2022)

    Release notes

    This is the 0.17.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.9.1 and JAX 0.3.13.

    Change notes

    • Distributions

      • Discrete distributions transform correctly when a bijector is applied.
      • Fix bug in Taylor approximation of log-normalizing constant for the ContinuousBernoulli.
      • Add TwoPieceNormal distribution and reparameterize its samples.
      • Make IncrementLogProb a proper tfd.Distribution.
      • Add quantiles to Empirical distribution.
      • Add tfp.experimental.distributions.MultiTaskGaussianProcessRegressionModel
      • Improve efficiency of MultiTaskGaussianProcess in the presence of observation noise: reduce complexity from O((NT)^3) to O(N^3 + T^3), where N is the number of data points and T is the number of tasks.
      • Improve efficiency of VariationalGaussianProcess.
      • Add tfd.LogNormal.experimental_from_mean_variance.
    • Bijectors

      • Fix Softfloor bijector to act as the identity at high temperature, and floor at low temperature.
      • Remove tfb.Ordered bijector and finite_nondiscrete flags in Distributions.
    • Math

      • Add tfp.math.betainc and gradients with respect to all parameters (see the sketch after these notes).
    • STS

      • Several bug fixes and performance improvements to tfp.experimental.sts_gibbs for Gibbs sampling Bayesian structural time series models with sparse linear regression.
      • Enable tfp.experimental.sts_gibbs under JAX
    • Experimental

      • Ensemble Kalman filter is now efficient in the case of ensemble size << observation size and an "easy to invert" modeled observation covariance.
      • Add a perturbed_observations option to ensemble_kalman_filter_log_marginal_likelihood.
      • Add Experimental support for custom JAX PRNGs.
    • Other

      • Add assertAllMeansClose to tfp.TestCase for testing sampling code.
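
    As a quick illustration of the new incomplete beta support, gradients flow through all three arguments. A minimal sketch using standard TF autodiff (the argument values are arbitrary):

      import tensorflow as tf
      import tensorflow_probability as tfp

      # Regularized incomplete beta I_x(a, b), differentiable in a, b, and x.
      a, b, x = tf.constant(2.), tf.constant(3.), tf.constant(0.4)
      with tf.GradientTape() as tape:
        tape.watch([a, b, x])
        y = tfp.math.betainc(a, b, x)
      da, db, dx = tape.gradient(y, [a, b, x])  # gradients w.r.t. all parameters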

    Huge thanks to all the contributors to this release!

    • Adam Sorrenti
    • Alexey Radul
    • Christopher Suter
    • Colin Carroll
    • Du Phan
    • Emily Fertig
    • Fabien Hertschuh
    • Faizan Muhammad
    • Francois Chollet
    • Ian Langmore
    • Jacob Burnim
    • Jake VanderPlas
    • Kathy Wu
    • Kristian Hartikainen
    • Kyle Loveless
    • Leandro Campos
    • Xinle Sheila Liu
    • ltsaprounis
    • Matt Hoffman
    • Manas Mohanty
    • Max Jiang
    • Pavel Sountsov
    • Peter Hawkins
    • Praveen Narayan
    • Renu Patel
    • Ryan Russell
    • Scott Zhu
    • Sergey Lebedev
    • Sharad Vikram
    • Srinivas Vasudevan
    • tagoma
    • Urs Koster
    • Vaidotas Simkus
    • Vishnuvardhan Janapati
    • Yilei Yang
    Source code(tar.gz)
    Source code(zip)
  • v0.16.0(Feb 14, 2022)

    Release notes

    This is the 0.16.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.8.0 and JAX 0.3.0.

    Change notes

    [coming soon]

    Huge thanks to all the contributors to this release!

    • Alexey Radul
    • Ben Lee
    • Billy Lamberta
    • Brian Patton
    • Chansoo Lee
    • Christopher Suter
    • Colin Carroll
    • Dave Moore
    • Du Phan
    • Emily Fertig
    • François Chollet
    • Gianluigi Silvestri
    • Jacob Burnim
    • Jake Taylor
    • Junpeng Lao
    • Matthew Johnson
    • Michael Weiss
    • Pavel Sountsov
    • Peter Hawkins
    • Rebecca Chen
    • Sharad Vikram
    • Soo Sung
    • Srinivas Vasudevan
    • Urs Köster
    Source code(tar.gz)
    Source code(zip)
  • v0.15.0(Nov 18, 2021)

    Release notes

    This is the 0.15 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.7.0.

    Change notes

    • Distributions

      • Add tfd.StudentTProcessRegressionModel.
      • Distributions' statistics now all have batch shape matching the Distribution itself.
      • JointDistributionCoroutine no longer requires Root when sample_shape==().
      • Support sample_distributions from autobatched joint distributions.
      • Expose mask argument to support missing observations in HMM log probs.
      • BetaBinomial.log_prob is more accurate when all trials succeed.
      • Support broadcast batch shapes in MixtureSameFamily.
      • Add cholesky_fn argument to GaussianProcess, GaussianProcessRegressionModel, and SchurComplement.
      • Add staticmethod for precomputing GPRM for more efficient inference in TensorFlow.
      • Add GaussianProcess.posterior_predictive.
    • Bijectors

      • Bijectors parameterized by distinct tf.Variables no longer register as ==.
      • BREAKING CHANGE: Remove deprecated AffineScalar bijector. Please use tfb.Shift(shift)(tfb.Scale(scale)) instead (see the sketch after these notes).
      • BREAKING CHANGE: Remove deprecated Affine and AffineLinearOperator bijectors.
    • PSD kernels

      • Add tfp.math.psd_kernels.ChangePoint.
      • Add slicing support for PositiveSemidefiniteKernel.
      • Add inverse_length_scale parameter to kernels.
      • Add parameter_properties to PSDKernel along with automated batch shape inference.
    • VI

      • Add support for importance-weighted variational objectives.
      • Support arbitrary distribution types in tfp.experimental.vi.build_factored_surrogate_posterior.
    • STS

      • Support + syntax for summing StructuralTimeSeries models.
    • Math

      • Enable JAX/NumPy backends for tfp.math.ode.
      • Allow returning auxiliary information from tfp.math.value_and_gradient.
    • Experimental

      • Speedup to experimental.mcmc windowed samplers.
      • Support unbiased gradients through particle filtering via stop-gradient resampling.
      • ensemble_kalman_filter_log_marginal_likelihood (log evidence) computation added to tfp.experimental.sequential.
      • Add experimental joint-distribution layers library.
      • Delete tfp.experimental.distributions.JointDensityCoroutine.
      • Add experimental special functions for high-precision computation on a TPU.
      • Add custom log-prob ratio for IncrementLogProb.
      • Use foldl in no_pivot_ldl instead of while_loop.
    • Other

      • TFP should now support numpy 1.20+.
      • BREAKING CHANGE: Stop unpacking seeds when splitting in JAX.
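
    A minimal sketch of the AffineScalar replacement named above, composing the Shift and Scale bijectors via Bijector.__call__:

      import tensorflow_probability as tfp
      tfb = tfp.bijectors

      # forward(x) = shift + scale * x, replacing tfb.AffineScalar.
      affine = tfb.Shift(2.)(tfb.Scale(3.))
      affine.forward(1.)  # ==> 5.0
      affine.inverse(5.)  # ==> 1.0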

    Huge thanks to all the contributors to this release!

    • 8bitmp3
    • adriencorenflos
    • Alexey Radul
    • Allen Lavoie
    • Ben Lee
    • Billy Lamberta
    • Brian Patton
    • Christopher Suter
    • Colin Carroll
    • Dave Moore
    • Du Phan
    • Emily Fertig
    • Faizan Muhammad
    • George Necula
    • George Tucker
    • Grace Luo
    • Ian Langmore
    • Jacob Burnim
    • Jake VanderPlas
    • Jeremiah Liu
    • Junpeng Lao
    • Kaan
    • Luke Wood
    • Max Jiang
    • Mihai Maruseac
    • Neil Girdhar
    • Paul Chiang
    • Pavel Izmailov
    • Pavel Sountsov
    • Peter Hawkins
    • Rebecca Chen
    • Richard Song
    • Rif A. Saurous
    • Ron Shapiro
    • Roy Frostig
    • Sharad Vikram
    • Srinivas Vasudevan
    • Tomohiro Endo
    • Urs Köster
    • William C Grisaitis
    • Yilei Yang
    Source code(tar.gz)
    Source code(zip)
  • v0.14.1(Sep 30, 2021)

    Release notes

    This is the 0.14.1 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.6.0 and JAX 0.2.21.

    Change notes

    [coming soon]

    Huge thanks to all the contributors to this release!

    • 8bitmp3
    • adriencorenflos
    • allenl
    • axch
    • bjp
    • blamb
    • csuter
    • colcarroll
    • davmre
    • derifatives
    • emilyaf
    • europeanplaice
    • Frightera
    • fmuham
    • gcluo
    • GianluigiSilvestri
    • gisilvs
    • gjt
    • grisaitis
    • harahu
    • jburnim
    • langmore
    • leben
    • lukewood
    • mihaimaruseac
    • NeilGirdhar
    • phandu
    • phawkins
    • rechen
    • ronshapiro
    • scottzhu
    • sharadmv
    • siege
    • srvasude
    • ursk
    • vanderplas
    • xingyousong
    • yileiyang
    Source code(tar.gz)
    Source code(zip)
  • v0.14.0(Sep 21, 2021)

    Release notes

    This is the 0.14 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.6.0 and JAX 0.2.20.

    Change notes

    Please see the release notes for TFP 0.14.1 at https://github.com/tensorflow/probability/releases/v0.14.1.

    Huge thanks to all the contributors to this release!

    • 8bitmp3
    • adriencorenflos
    • allenl
    • axch
    • bjp
    • blamb
    • csuter
    • colcarroll
    • davmre
    • derifatives
    • emilyaf
    • europeanplaice
    • Frightera
    • fmuham
    • gcluo
    • GianluigiSilvestri
    • gisilvs
    • gjt
    • grisaitis
    • harahu
    • jburnim
    • langmore
    • leben
    • lukewood
    • mihaimaruseac
    • NeilGirdhar
    • phandu
    • phawkins
    • rechen
    • ronshapiro
    • scottzhu
    • sharadmv
    • siege
    • srvasude
    • ursk
    • vanderplas
    • xingyousong
    • yileiyang
    Source code(tar.gz)
    Source code(zip)
  • v0.13.0(Jun 18, 2021)

    Release notes

    This is the 0.13 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.5.0.

    See the visual release notebook in colab.

    Change notes

    • Distributions

      • Adds tfd.BetaQuotient
      • Adds tfd.DeterminantalPointProcess
      • Adds tfd.ExponentiallyModifiedGaussian
      • Adds tfd.MatrixNormal and tfd.MatrixT
      • Adds tfd.NormalInverseGaussian
      • Adds tfd.SigmoidBeta
      • Adds tfp.experimental.distribute.Sharded
      • Adds tfd.BatchBroadcast
      • Adds tfd.Masked
      • Adds JAX support for tfd.Zipf
      • Adds Implicit Reparameterization Gradients to tfd.InverseGaussian.
      • Adds quantiles for tfd.{Chi2,ExpGamma,Gamma,GeneralizedNormal,InverseGamma}
      • Derive Distribution batch shapes automatically from parameter annotations.
      • Ensuring Exponential.cdf(x) is always 0 for x < 0.
      • VectorExponentialLinearOperator and VectorExponentialDiag distributions now return variance, covariance, and standard deviation of the correct shape.
      • Bates distribution now returns mean of the correct shape.
      • GeneralizedPareto now returns variance of the correct shape.
      • Deterministic distribution now returns mean, mode, and variance of the correct shape.
      • Ensure that JointDistributionPinned's support bijectors respect autobatching.
      • Now systematically testing log_probs of most distributions for numerical accuracy.
      • InverseGaussian no longer emits negative samples for large loc / concentration
      • GammaGamma, GeneralizedExtremeValue, LogLogistic, LogNormal, ProbitBernoulli should no longer compute nan log_probs on their own samples. VonMisesFisher, Pareto, and GeneralizedExtremeValue should no longer emit samples numerically outside their support.
      • Improve numerical stability of tfd.ContinuousBernoulli and deprecate lims parameter.
    • Bijectors

      • Add bijectors to mimic tf.nest.flatten (tfb.tree_flatten) and tf.nest.pack_sequence_as (tfb.pack_sequence_as).
      • Adds tfp.experimental.bijectors.Sharded
      • Remove deprecated tfb.ScaleTrilL. Use tfb.FillScaleTriL instead.
      • Adds cls.parameter_properties() annotations for Bijectors.
      • Extend range tfb.Power to all reals for odd integer powers.
      • Infer the log-det-Jacobian of scalar bijectors using autodiff, if not otherwise specified.
    • MCMC

      • MCMC diagnostics support arbitrary structures of states, not just lists.
      • remc_thermodynamic_integrals added to tfp.experimental.mcmc
      • Adds tfp.experimental.mcmc.windowed_adaptive_hmc
      • Adds an experimental API for initializing a Markov chain from a near-zero uniform distribution in unconstrained space. tfp.experimental.mcmc.init_near_unconstrained_zero
      • Adds an experimental utility for retrying Markov Chain initialization until an acceptable point is found. tfp.experimental.mcmc.retry_init
      • Shuffling experimental streaming MCMC API to slot into tfp.mcmc with a minimum of disruption.
      • Adds ThinningKernel to experimental.mcmc.
      • Adds experimental.mcmc.run_kernel driver as a candidate streaming-based replacement to mcmc.sample_chain
    • VI

      • Adds build_split_flow_surrogate_posterior to tfp.experimental.vi to build structured VI surrogate posteriors from normalizing flows.
      • Adds build_affine_surrogate_posterior to tfp.experimental.vi for construction of ADVI surrogate posteriors from an event shape.
      • Adds build_affine_surrogate_posterior_from_base_distribution to tfp.experimental.vi to enable construction of ADVI surrogate posteriors with correlation structures induced by affine transformations.
    • MAP/MLE

      • Added convenience method tfp.experimental.util.make_trainable(cls) to create trainable instances of distributions and bijectors.
    • Math/linalg

      • Add trapezoidal rule to tfp.math.
      • Add tfp.math.log_bessel_kve.
      • Add no_pivot_ldl to experimental.linalg.
      • Add marginal_fn argument to GaussianProcess (see no_pivot_ldl).
      • Added tfp.math.atan_difference(x, y)
      • Add tfp.math.erfcx, tfp.math.logerfc and tfp.math.logerfcx
      • Add tfp.math.dawsn for Dawson's Integral.
      • Add tfp.math.igammainv, tfp.math.igammacinv.
      • Add tfp.math.sqrt1pm1.
      • Add LogitNormal.stddev_approx and LogitNormal.variance_approx
      • Add tfp.math.owens_t for the Owen's T function.
      • Add bracket_root method to automatically initialize bounds for a root search.
      • Add Chandrupatla's method for finding roots of scalar functions (see the sketch after these notes).
    • Stats

      • tfp.stats.windowed_mean efficiently computes windowed means.
      • tfp.stats.windowed_variance efficiently and accurately computes windowed variances.
      • tfp.stats.cumulative_variance efficiently and accurately computes cumulative variances.
      • RunningCovariance and friends can now be initialized from an example Tensor, not just from explicit shape and dtype.
      • Cleaner API for RunningCentralMoments, RunningMean, RunningPotentialScaleReduction.
    • STS

      • Speed up STS forecasting and decomposition using internal tf.function wrapping.
      • Add option to speed up filtering in LinearGaussianSSM when only the final step's results are required.
      • Variational Inference with Multipart Bijectors: example notebook with the Radon model.
      • Add experimental support for transforming any distribution into a preconditioning bijector.
    • Other

      • Distributed inference example notebook
      • sanitize_seed is now available in the tfp.random namespace.
      • Add tfp.random.spherical_uniform.
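
    A minimal sketch of the new root finding, assuming tfp.math.find_root_chandrupatla is the entry point for Chandrupatla's method (the cubic objective is invented for illustration):

      import tensorflow_probability as tfp

      # Find the root of f(x) = x**3 - x - 2, bracketed in [1, 2].
      results = tfp.math.find_root_chandrupatla(
          lambda x: x**3 - x - 2., low=1., high=2.)
      results.estimated_root  # ==> ~1.5214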

    Huge thanks to all the contributors to this release!

    • Abhinav Upadhyay
    • axch
    • Brian Patton
    • Chris Jewell
    • Christopher Suter
    • colcarroll
    • Dave Moore
    • ebrevdo
    • Emily Fertig
    • Harald Husum
    • Ivan Ukhov
    • jballe
    • jburnim
    • Jeff Pollock
    • Jensun Ravichandran
    • JulianWgs
    • junpenglao
    • jvdillon
    • j-wilson
    • kateslin
    • Kristian Hartikainen
    • ksachdeva
    • langmore
    • leben
    • mattjj
    • Nicola De Cao
    • Pavel Sountsov
    • paweller
    • phawkins
    • Prasanth Shyamsundar
    • Rene Jean Corneille
    • Samuel Marks
    • scottzhu
    • sharadmv
    • siege
    • Simon Dirmeier
    • Srinivas Vasudevan
    • Thomas Markovich
    • ursk
    • Uzair
    • vanderplas
    • yileiyang
    • ZeldaMariet
    • Zichun Ye
    Source code(tar.gz)
    Source code(zip)
  • 0.13.0-rc0(May 24, 2021)

  • v0.12.2(Apr 19, 2021)

    This is the 0.12.2 release of TensorFlow Probability, a patch release to cap the JAX dependency to a compatible version. It is tested and stable against TensorFlow version 2.4.0.

    For detailed change notes, please see the 0.12.1 release at https://github.com/tensorflow/probability/releases/tag/v0.12.1.

    Source code(tar.gz)
    Source code(zip)
  • v0.12.1(Dec 29, 2020)

    Release notes

    This is the 0.12.1 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.4.0.

    Change notes

    NOTE: Links point to examples in the TFP 0.12.1 release Colab.

    Bijectors:

    Distributions:

    MCMC:

    • Add tfp.experimental.mcmc.ProgressBarReducer.
    • Update experimental.mcmc.sample_sequential_monte_carlo to use new MCMC stateless kernel API.
    • Add an experimental streaming MCMC framework that supports computing statistics over a (batch of) Markov chain(s) without materializing the samples. Statistics supported (mostly on arbitrary functions of the model variables): mean, (co)variance, central moments of arbitrary rank, and the potential scale reduction factor (R-hat). Also support selectively tracing the history of some but not all statistics or model variables. Add algorithms for running mean, variance, covariance, arbitrary higher central moments, and potential scale reduction factor (R-hat) to tfp.experimental.stats.
    • untempered_log_prob_fn added as init kwarg to ReplicaExchangeMC Kernel.
    • Add experimental support for mass matrix preconditioning in Hamiltonian Monte Carlo (see the sketch after these notes).
    • Add ability to temper part of the log prob in ReplicaExchangeMC.
    • tfp.experimental.mcmc.{sample_fold,sample_chain} support warm restart.
    • even_odd_swap exchange function added to replica_exchange_mc.
    • Samples from ReplicaExchangeMC can now have a per-replica initial state.
    • Add omitted n/(n-1) term to tfp.mcmc.potential_scale_reduction_factor.
    • Add KernelBuilder and KernelOutputs to experimental.
    • Allow tfp.mcmc.SimpleStepSizeAdaptation and DualAveragingStepSizeAdaptation to take a custom reduction function.
    • Replace make_innermost_getter et al. with tfp.experimental.unnest utilities.

    VI:

    Math + Stats:

    Other:

    • Add tfp.math.psd_kernels.GeneralizedMaternKernel (generalizes MaternOneHalf, MaternThreeHalves and MaternFiveHalves).
    • Add tfp.math.psd_kernels.Parabolic.
    • Add tfp.experimental.unnest utilities for accessing nested attributes.
    • Enable pytree flattening for TFP distributions in JAX
    • More careful handling of nan and +-inf in {L-,}BFGS.
    • Remove Edward2 from TFP. Edward2 is now in its own repo at https://github.com/google/edward2.
    • Support vector-valued offsets in sts.Sum.
    • Make DeferredTensor actually defer computation under JAX/NumPy backends.
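
    A minimal sketch of the mass matrix preconditioning named above, assuming the experimental PreconditionedHamiltonianMonteCarlo kernel and its momentum_distribution argument (the toy target is invented for illustration):

      import tensorflow_probability as tfp
      tfd = tfp.distributions

      # The momentum distribution acts as the preconditioner: for a wide
      # Normal(0, 10) target, momentum with scale 1/10 roughly whitens it.
      kernel = tfp.experimental.mcmc.PreconditionedHamiltonianMonteCarlo(
          target_log_prob_fn=tfd.Normal(0., 10.).log_prob,
          step_size=0.5,
          num_leapfrog_steps=3,
          momentum_distribution=tfd.Normal(0., 0.1))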

    Huge thanks to all the contributors to this release!

    • Adrian Buzea
    • Alexey Radul
    • Ben Lee
    • Ben Poole
    • Brian Patton
    • Christopher Suter
    • Colin Carroll
    • Cyril Chimisov
    • Dave Moore
    • Du Phan
    • Emily Fertig
    • Eugene Brevdo
    • Federico Tomasi
    • François Chollet
    • George Karpenkov
    • Giovanni Palla
    • Ian Langmore
    • Jacob Burnim
    • Jacob Valdez
    • Jake VanderPlas
    • Jason Zavaglia
    • Jean-Baptiste Lespiau
    • Jeff Pollock
    • Joan Puigcerver
    • Jonas Eschle
    • Josh Darrieulat
    • Joshua V. Dillon
    • Junpeng Lao
    • Kapil Sachdeva
    • Kate Lin
    • Kibeom Kim
    • Luke Metz
    • Mark Daoust
    • Matteo Hessel
    • Michal Brys
    • Oren Bochman
    • Padarn Wilson
    • Pavel Sountsov
    • Peter Hawkins
    • Rif A. Saurous
    • Ru Pei
    • ST John
    • Sharad Vikram
    • Simeon Carstens
    • Srinivas Vasudevan
    • Tom O'Malley
    • Tomer Kaftan
    • Urs Köster
    • Yash Katariya
    • Yilei Yang
    Source code(tar.gz)
    Source code(zip)
  • v0.12.0(Dec 29, 2020)

    This is the 0.12.0 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.4.0.

    For detailed change notes, please see the 0.12.1 release at https://github.com/tensorflow/probability/releases/tag/v0.12.1.

    Source code(tar.gz)
    Source code(zip)
  • v0.12.0-rc4(Dec 9, 2020)

  • v0.12.0-rc2(Nov 21, 2020)

  • v0.12.0-rc1(Nov 11, 2020)

  • v0.12.0-rc0(Nov 10, 2020)

  • v0.11.1(Oct 9, 2020)

  • v0.11.0(Jul 28, 2020)

    Release notes

    This is the 0.11 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.3.0.

    Change notes

    Links point to examples in the TFP 0.11.0 release Colab.

    • Distributions

    • Bijectors:

      • Add the Split bijector.
      • Add GompertzCDF and ShiftedGompertzCDF bijectors
      • Add Sinh bijector.
      • Scale bijector can take in log_scale parameter.
      • Blockwise now supports size-changing bijectors.
      • Allow using conditioning inputs in AutoregressiveNetwork.
      • Move bijector caching logic to its own library.
    • MCMC:

      • tfp.mcmc now supports stateless sampling: tfp.mcmc.sample_chain(..., seed=(1,2)) is expected to always return the same results (within a release), and is deterministic provided the underlying kernel is deterministic (see the sketch after these notes).
      • Better static shape inference for Metropolis-Hastings kernels with partially-specified shapes.
      • TransformedTransitionKernel nests properly with itself and other wrapper kernels.
      • Pretty-printing MCMC kernel results.
    • Structured time series:

      • Automatically constrain STS inference when weights have constrained support.
    • Math:

      • Add tfp.math.bessel_iv_ratio for ratios of modified Bessel functions of the first kind.
      • round_exponential_bump_function added to tfp.math.
      • Support dynamic num_steps and custom convergence_criteria in tfp.math.minimize.
      • Add tfp.math.log_cosh.
      • Define more accurate lbeta and log_gamma_difference.
    • Jax/Numpy substrates:

      • TFP runs on JAX!
      • Expose MaskedAutoregressiveFlow to Numpy and JAX.
    • Experimental:

      • Add experimental Sequential Monte Carlo sample driver.
      • Add experimental tools for estimating parameters of sequential models using iterated filtering.
      • Use Distributions as CompositeTensors.
      • Inference Gym: Add logistic regression.
      • Add support for convergence criteria in tfp.vi.fit_surrogate_posterior.
    • Other:

      • Added tfp.random.split_seed for stateless sampling. Moved tfp.math.random_{rademacher,rayleigh} to tfp.random.{rademacher,rayleigh}.
      • Possibly breaking change: SeedStream seed argument may not be a Tensor.
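
    A minimal sketch of stateless sampling with sample_chain (the (1, 2) tuple seed comes from the note above; the HamiltonianMonteCarlo kernel and toy target are standard TFP pieces chosen for illustration):

      import tensorflow_probability as tfp
      tfd = tfp.distributions

      kernel = tfp.mcmc.HamiltonianMonteCarlo(
          target_log_prob_fn=tfd.Normal(0., 1.).log_prob,
          step_size=0.5, num_leapfrog_steps=3)
      # Same stateless seed => same chain (within a release).
      samples = tfp.mcmc.sample_chain(
          num_results=100, current_state=0., kernel=kernel,
          trace_fn=None, seed=(1, 2))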

    Huge thanks to all the contributors to this release!

    • Alexey Radul
    • anatoly
    • Anudhyan Boral
    • Ben Lee
    • Brian Patton
    • Christopher Suter
    • Colin Carroll
    • Cristi Cobzarenco
    • Dan Moldovan
    • Dave Moore
    • David Kao
    • Emily Fertig
    • erdembanak
    • Eugene Brevdo
    • Fearghus Robert Keeble
    • Frank Dellaert
    • Gabriel Loaiza
    • Gregory Flamich
    • Ian Langmore
    • Iqrar Agalosi Nureyza
    • Jacob Burnim
    • jeffpollock9
    • jekbradbury
    • Jimmy Yao
    • johannespitz
    • Joshua V. Dillon
    • Junpeng Lao
    • Kate Lin
    • Ken Franko
    • luke199629
    • Mark Daoust
    • Markus Kaiser
    • Martin Jul
    • Matthew Feickert
    • Maxim Polunin
    • Nicolas
    • npfp
    • Pavel Sountsov
    • Peng YU
    • Rebecca Chen
    • Rif A. Saurous
    • Ru Pei
    • Sayam753
    • Sharad Vikram
    • Srinivas Vasudevan
    • summeryue
    • Tom Charnock
    • Tres Popp
    • Wataru Hashimoto
    • Yash Katariya
    • Zichun Ye
    Source code(tar.gz)
    Source code(zip)
  • v0.11.0-rc1(Jul 20, 2020)

  • v0.11.0-rc0(Jul 16, 2020)

  • v0.10.1(Jul 6, 2020)

    This is a patch release to pin the CloudPickle version to 1.3 to address #991 . It is tested and stable against TensorFlow version 2.2.0.

    Source code(tar.gz)
    Source code(zip)
  • v0.10.0(May 14, 2020)

    Release notes

    This is the 0.10 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.2.0.

    Change notes

    • Distributions

      • Beta-Binomial distribution.
      • Add new AutoBatched joint distribution variants that treat a joint sample as a single probabilistic event.
      • XLA-able Python TF Gamma sampler.
      • XLA-able binomial sampler. Replaces the existing sampler, which implements binomial using one-hot categoricals via multinomial, with a batched rejection sampler. The new sampler is 4-6 times slower for very small problems, but an unbounded amount faster on large problems, since it removes a linear dependency on total_count. Additionally, since the previous sampler required memory proportional to total_count*num_samples, many problems which OOM'd before are now feasible.
      • Enable use of joint bijectors in TransformedDistribution.
      • Remove unused get_logits_and_probs from internal/distribution_util.
      • Batched rejection sampling utilities.
      • Update batched_rejection_sampler to use prefer_static.shape to handle possibly-dynamic shape.
    • Bijectors

      • Add Lambert W transform bijectors.
    • MCMC

      • EllipticalSliceSampler in tfp.experimental.mcmc
      • Add cross-chain ESS, following Vehtari et al. 2019.
    • Optimizer

      • Add convergence criteria for optimizations.
    • Stats

      • Add tfp.stats.expected_calibration_error_quantiles.
    • Math

      • Add a 'special' module to tfp.math - a TF version of scipy.special.
      • Add scan_associative function, implementing parallel prefix scan of tensors with a user-provided binary operation (see the sketch after these notes).
    • Breaking change: Removed a number of functions, methods, and classes that were deprecated in TensorFlow Probability 0.9.0 or earlier.

      • Removed deprecated tfb.Weibull -- use tfb.WeibullCDF.
      • Remove VectorLaplaceLinearOperator
      • Remove deprecated method tfp.sts.build_factored_variational_loss.
      • Remove deprecated tfb.Kumaraswamy -- use tfb.Invert(tfb.KumaraswamyCDF).
      • Remove deprecated tfd.VectorSinhArcsinhDiag, tfd.VectorLaplaceDiag.
      • Remove deprecated tfb.Gumbel -- use tfb.GumbelCDF.
    • Other

      • Python 3.8 compatibility.
      • TensorFlow now requires gast version 0.3.2 and is no longer compatible with 0.2.2.
      • Moving TF Session C++ to Python code and functionality from swig to pybind11.
      • Update TFP examples to Python 3.
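
    A minimal sketch of scan_associative used as a parallel prefix sum (tf.add plays the role of the user-provided associative binary operation; the input is arbitrary):

      import tensorflow as tf
      import tensorflow_probability as tfp

      x = tf.range(1., 9.)
      # Parallel prefix scan; equivalent to tf.cumsum for this op.
      cumsum = tfp.math.scan_associative(tf.add, x)  # ==> [1., 3., 6., 10., ...]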

    Huge thanks to all the contributors to this release!

    • Alexander Ivanov
    • Alexey Radul
    • Amanda
    • Amelio Vazquez-Reina
    • Amit Patankar
    • Anudhyan Boral
    • Artem Belevich
    • Brian Patton
    • Christopher Suter
    • Colin Carroll
    • Dan Moldovan
    • Dave Moore
    • Demetri Pananos
    • Dmitrii Kochkov
    • Emily Fertig
    • gameshamilton
    • Georg M. Goerg
    • Ian Langmore
    • Jacob Burnim
    • jeffpollock9
    • Joshua V. Dillon
    • Junpeng Lao
    • kovak1
    • Kristian Hartikainen
    • Liam
    • Martin Jul
    • Matt Hoffman
    • nbro
    • Olli Huotari
    • Pavel Sountsov
    • Pyrsos
    • Rif A. Saurous
    • Rushabh Vasani
    • Sayam753
    • Sharad Vikram
    • Spyros
    • Srinivas Vasudevan
    • Taylor Robie
    • Xiaojing Wang
    • Zichun Ye
    Source code(tar.gz)
    Source code(zip)
  • v0.10.0-rc1(Apr 30, 2020)

  • v0.10.0-rc0(Apr 15, 2020)

  • v0.9.0(Jan 15, 2020)

    Release notes

    This is the 0.9 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.1.0.

    NOTE: The 0.9 releases of TensorFlow Probability will be the last to support Python 2. Future versions of TensorFlow Probability will require Python 3.5 or later.

    Change notes

    • Distributions

      • Add Pixel CNN++ distribution.
      • Breaking change: Remove deprecated behavior of Poisson.rate and Poisson.log_rate.
      • Breaking change: Remove deprecated behavior of logits, probs properties.
      • Add _default_event_space_bijector to distributions.
      • Add validation that samples are within the support of the distribution.
      • Support positional and keyword args to JointDistribution.prob and JointDistribution.log_prob (see the sketch after these notes).
      • Support OrderedDict dtype in JointDistributionNamed.
      • tfd.BatchReshape is tape-safe
      • More accurate survival function and CDF for the generalized Pareto distribution.
      • Added Plackett-Luce distribution over permutations.
      • Fix long-standing bug with cdf, survival_function, and quantile for TransformedDistributions having decreasing bijectors.
      • Export the DoubleMaxwell distribution.
      • Add method for analytic Bayesian linear regression with LinearOperators.
    • Bijectors

      • Breaking change: Scalar bijectors must implement _is_increasing if using cdf/survival_function/quantile on TransformedDistribution. This supports resolution of a long-standing bug, e.g. tfb.Scale(scale=-1.)(tfd.HalfNormal(0,1)).cdf was incorrect.
      • Deprecate tfb.masked_autoregressive_default_template.
      • Fixed inverse numerical stability bug in tfb.Softfloor
      • Tape-safe Reshape bijector.
    • MCMC

      • Optimize tfp.mcmc.ReplicaExchangeMonteCarlo by replacing TF control flow and
      • ReplicaExchangeMC now can trace exchange proposals/acceptances.
      • Correct implementation of log_accept_ratio in NUTS
      • Return non-cumulated leapfrogs_taken in nuts kernel_result.
      • Make unrolled NUTS reproducible.
      • Bug fix of Generalized U-turn in NUTS.
      • Reduce NUTS test flakiness.
      • Fix convergence test for NUTS.
      • Switch back to original U turn criteria in Hoffman & Gelman 2014.
      • Make autobatched NUTS reproducible.
    • STS

      • Update example "Structural Time Series Modeling Case Studies" to TF2.0 API.
      • Add fast path for sampling STS LocalLevel models.
      • Support posterior sampling in linear Gaussian state space models.
      • Add a fast path for Kalman smoothing with scalar latents.
      • Add option to disallow drift in STS Seasonal models.
    • Breaking change: Removed a number of functions, methods, and classes that were deprecated in TensorFlow Probability 0.8.0 or earlier.

      • Remove deprecated trainable_distributions_lib.
      • Remove deprecated property Dirichlet.total_concentration.
      • Remove deprecated tfb.AutoregressiveLayer -- use tfb.AutoregressiveNetwork.
      • Remove deprecated tfp.distributions.* methods.
      • Remove deprecated tfp.distributions.moving_mean_variance.
      • Remove two deprecated tfp.vi functions.
      • Remove deprecated tfp.distributions.SeedStream -- use tfp.util.SeedStream.
      • Remove deprecated properties of tfd.Categorical.
    • Other

      • Add make_rank_polymorphic utility, which lifts a callable to a vectorized callable.
      • Dormand-Prince solver supports nested structures. Implemented adjoint sensitivity method for Dormand-Prince solver gradients.
      • Run Travis tests against latest tf-estimator-nightly.
      • Supporting gast 0.3+.
      • Add tfp.vi.build_factored_surrogate_posterior utility for automatic black-box variational inference.
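
    A minimal sketch of keyword args to log_prob on a joint distribution (the two-variable model is invented for illustration):

      import tensorflow_probability as tfp
      tfd = tfp.distributions

      model = tfd.JointDistributionNamed(dict(
          scale=tfd.HalfNormal(1.),
          x=lambda scale: tfd.Normal(loc=0., scale=scale)))
      # log_prob now accepts keyword args naming each component.
      model.log_prob(scale=1., x=0.5)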

    Huge thanks to all the contributors to this release!

    • Aditya Grover
    • Alexey Radul
    • Anudhyan Boral
    • Arthur Lui
    • Billy Lamberta
    • Brian Patton
    • Christopher Suter
    • Colemak
    • Dan Moldovan
    • Dave Moore
    • Dmitrii Kochkov
    • Edward Loper
    • Emily Fertig
    • Ian Langmore
    • Jacob Burnim
    • Joshua V. Dillon
    • Junpeng Lao
    • Katherine Wu
    • Kibeom Kim
    • Kristian Hartikainen
    • Mark Daoust
    • Pavel Sountsov
    • Peter Hawkins
    • refraction-ray
    • RJ Skerry-Ryan
    • Sanket Kamthe
    • Sergei Lebedev
    • Sharad Vikram
    • Srinivas Vasudevan
    • Yanhua Sun
    • Yash Katariya
    • Zachary Nado
    Source code(tar.gz)
    Source code(zip)
  • 0.8.0(Oct 1, 2019)

    Release notes

    This is the 0.8 release of TensorFlow Probability. It is tested and stable against TensorFlow version 2.0.0 and 1.15.0rc1.

    Change notes

    • GPU-friendly "unrolled" NUTS: tfp.mcmc.NoUTurnSampler

      • Open-source the unrolled implementation of the No U-Turn Sampler.
      • Switch back to original U turn criteria in Hoffman & Gelman 2014.
      • Bug fix in Unrolled NUTS to make sure it does not lose shape for event_shape=1.
      • Bug fix of U turn check in Unrolled NUTS at the tree extension.
      • Refactor U turn check in Unrolled NUTS.
      • Fix dynamic shape bug in Unrolled NUTS.
      • Move NUTS unrolled into mcmc, with additional clean up.
      • Make sure the unrolled NUTS sampler handles scalar target_log_probs correctly.
      • Change implementation of check U turn to using a tf.while_loop in unrolled NUTS.
      • Implement multinomial sampling across tree (instead of Slice sampling) in unrolled NUTS.
      • Expose additional diagnostics in previous_kernel_results in unrolled NUTS so that it works with *_step_size_adaptation.
    • MCMC

      • Modify the shape handling in DualAveragingStepSizeAdaptation so that it works with non-scalar event_shape.
      • Support structured samples in tfp.monte_carlo.expectation.
      • Minor fix for docstring example in leapfrog_integrator
    • VI

      • Add utilities for fitting variational distributions.
      • Improve Csiszar divergence support for joint variational distributions.
      • Ensure that joint distributions are correctly recognized as reparameterizable by monte_carlo_csiszar_f_divergence.
      • Rename monte_carlo_csiszar_f_divergence to monte_carlo_variational_loss.
      • Refactor tfp.vi.csiszar_vimco_helper to expose useful leave-one-out statistical tools.
    • Distributions

      • Added tfp.distributions.GeneralizedPareto
      • Multinomial and DirichletMultinomial samplers are now reproducible.
      • HMM samples are now reproducible.
      • Cleaning up unneeded conversion to tensor in quantile().
      • Added support for dynamic num_steps in HiddenMarkovModel
      • Added implementation of quantile() for exponential distributions.
      • Fix entropy of Categorical distribution when logits contains -inf.
      • Annotate float-valued Deterministic distributions as reparameterized.
      • Establish patterns which ensure that TFP objects are "GradientTape Safe."
      • "GradientTape-safe" distributions: FiniteDiscrete, VonMises, Binomial, Dirichlet, Multinomial, DirichletMultinomial, Categorical, Deterministic
      • Add tfp.util.DeferredTensor to delay Tensor operations on tf.Variables (also works for tf.Tensors).
      • Add probs_parameter, logits_parameter member functions to Categorical-like distributions. In the future, users should prefer these functions to the probs/logits properties, because a property may be None if the distribution was parameterized by the other (see the sketch after these notes).
    • Bijectors

      • Add log_scale parameter to AffineScalar bijector.
      • Added tfp.bijectors.RationalQuadraticSpline.
      • Add SoftFloor bijector. (Note: Known inverse bug WIP.)
      • Allow using an arbitrary bijector in RealNVP for the coupling.
      • Allow using an arbitrary bijector in MaskedAutoregressiveFlow for the coupling.
    • Experimental auto-batching system: tfp.experimental.auto_batching

      • Open-source the program-counter-based auto-batching system.
      • Added tfp.experimental.auto_batching, an experimental system to recover batch parallelism across recursive function invocations.
      • Autobatched NUTS supports batching across consecutive trajectories.
      • Add support for field references to autobatching.
      • Increase the amount of Python syntax that "just works" in autobatched functions.
      • pop-push fusion optimization in the autobatching system (also recently did tail-call optimization but forgot to add a relnote).
      • Open-source the auto-batched implementation of the No U-Turn Sampler.
    • STS

      • Support TF2/Eager-mode fitting of STS models, and deprecate build_factored_variational_loss.
      • Use dual averaging step size adaptation for STS HMC fitting.
      • Add support for imputing missing values in structural time series models.
      • Standardize parameter scales during STS inference.
    • Layers

      • Add WeightNorm layer wrapper.
      • Fix gradients flowing through variables in the old style variational layers.
      • tf.keras.model.save_model and model.save now defaults to saving a TensorFlow SavedModel.
    • Stats/Math

      • Add calibration metrics to tfp.stats.
      • Add output_gradients argument to value_and_gradient.
      • Add Geyer initial positive sequence truncation criterion to tfp.mcmc.effective_sample_size.
      • Resolve shape inconsistencies in PSDKernels API.
      • Support dynamic-shaped results in tfp.math.minimize.
      • ODE: Implement the Adjoint Method for gradients with respect to the initial state.
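
    A minimal sketch of the new probs_parameter/logits_parameter accessors on a Categorical-like distribution (the logits are arbitrary):

      import tensorflow_probability as tfp
      tfd = tfp.distributions

      d = tfd.Categorical(logits=[0., 1., 2.])
      d.probs_parameter()   # computed probabilities, even though d was built from logits
      d.logits_parameter()  # ==> [0., 1., 2.]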

    Huge thanks to all the contributors to this release!

    • Alexey Radul
    • Anudhyan Boral
    • Arthur Lui
    • Brian Patton
    • Christopher Suter
    • Colin Carroll
    • Dan Moldovan
    • Dave Moore
    • Edward Loper
    • Emily Fertig
    • Gaurav Jain
    • Ian Langmore
    • Igor Ganichev
    • Jacob Burnim
    • Jeff Pollock
    • Joshua V. Dillon
    • Junpeng Lao
    • Katherine Wu
    • Mark Daoust
    • Matthieu Coquet
    • Parsiad Azimzadeh
    • Pavel Sountsov
    • Pavithra Vijay
    • PJ Trainor
    • prabhu prakash kagitha
    • prakashkagitha
    • Reed Wanderman-Milne
    • refraction-ray
    • Rif A. Saurous
    • RJ Skerry-Ryan
    • Saurabh Saxena
    • Sharad Vikram
    • Sigrid Keydana
    • skeydan
    • Srinivas Vasudevan
    • Yash Katariya
    • Zachary Nado
    Source code(tar.gz)
    Source code(zip)
  • 0.8.0-rc0(Aug 30, 2019)

  • v0.7(Jun 20, 2019)

    Release notes

    This is the 0.7 release of TensorFlow Probability. It is tested and stable against TensorFlow version 1.14.0.

    Change notes

    • Internal optimizations to HMC leapfrog integrator.
    • Add FeatureTransformed, FeatureScaled, and KumaraswamyTransformed PSD kernels
    • Added tfp.debugging.benchmarking.benchmark_tf_function.
    • Added optional masking of observations for hidden_markov_model methods posterior_marginals and posterior_mode.
    • Fixed evaluation order of distributions within JointDistributionNamed
    • Rename tfb.AutoregressiveLayer to tfb.AutoregressiveNetwork.
    • Support kernel and bias constraints/regularizers/initializers in tfb.AutoregressiveLayer.
    • Created Backward Difference Formula (BDF) solver for stiff ODEs.
    • Update Cumsum bijector.
    • Add distribution layer for masked autoregressive flow in Keras.
    • Shorten repr, str Distribution strings by using "?" instead of "<unknown>" to represent None.
    • Implement FiniteDiscrete distribution
    • Add Cumsum bijector.
    • Make Seasonal STS more flexible by handling non-constant num_steps_per_season for each season.
    • In tfb.BatchNormalization, use keras layer over compat.v1 layer.
    • Forward kwargs in MaskedAutoregressiveFlow.
    • Added tfp.math.pivoted_cholesky for low rank preconditioning.
    • Add tfp.distributions.JointDistributionCoroutine for specifying simple directed graphical models via Python generators.
    • Complete the example notebook demonstrating multilevel modeling using TFP.
    • Remove default None initializations for Beta and LogNormal parameters.
    • Bug fix in the init method of the RationalQuadratic kernel.
    • Add Binomial.sample method.
    • Add SparseLinearRegression structural time series component.
    • Remove TFP support of KL Divergence calculation of tf.compat.v1.distributions which have been deprecated for 6 months.
    • Added tfp.math.cholesky_concat (adds columns to a cholesky decomposition)
    • Introduce SchurComplement PSD Kernel
    • Add EllipticalSliceSampler as an experimental MCMC kernel.
    • Remove intercepting/reuse of variables created within DistributionLambda.
    • Support missing observations in structural time series models.
    • Add Keras layer for masked autoregressive flows.
    • Add code block to show recommended style of using JointDistribution.
    • Added example notebook demonstrating multilevel modeling.
    • Correctly decorate the training block in the VI part of the JointDistribution example notebook.
    • Add tfp.distributions.Sample for specifying plates in tfd.JointDistribution*.
    • Enable save/load of Keras models with DistributionLambda layers.
    • Add example notebook to show how to use joint distribution sequential for a small-to-medium Bayesian graphical model.
    • Add NaN propagation to tfp.stats.percentile.
    • Add tfp.distributions.JointDistributionSequential for specifying simple directed graphical models (see the sketch after these notes).
    • Enable save/load of models with IndependentX or MixtureX layers.
    • Extend monte_carlo_csiszar_f_divergence so it also works with JointDistribution.
    • Fix typo in value_and_gradient docstring.
    • Add SimpleStepSizeAdaptation, deprecate step_size_adaptation_fn.
    • batch_interp_regular_nd_grid added to tfp.math
    • Adds IteratedSigmoidCentered bijector to unconstrain unit simplex.
    • Add option to constrain seasonal effects to zero-sum in STS models, and enable by default.
    • Add two-sample multivariate equality in distribution.
    • Fix broadcasting errors when forecasting STS models with batch shape.
    • Adds batch slicing support to most distributions in tfp.distributions.
    • Add tfp.layers.VariationalGaussianProcess.
    • Added posterior_mode to HiddenMarkovModel
    • Add VariationalGaussianProcess distribution.
    • Adds slicing of distributions batch axes as dist[..., :2, tf.newaxis, 3]
    • Add tfp.layers.VariableLayer for making a Keras model which ignores inputs.
    • tfp.math.matrix_rank.
    • Add KL divergence between two blockwise distributions.
    • tf.function decorate tfp.bijectors.
    • Add Blockwise distribution for concatenating different distribution families.
    • Add and begin using a utility for varying random seeds in tests when desired.
    • Add two-sample calibrated statistical test for equality of CDFs, incl. support for duplicate samples.
    • Deprecating obsolete moving_mean_variance. Use assign_moving_mean_variance and manage the variables explicitly.
    • Migrate Variational SGD Optimizer to TF 2.0
    • Migrate SGLD Optimizer to TF 2.0
    • TF2 migration
    • Make all test in MCMC TF2 compatible.
    • Expose HMC parameters via kernel results.
    • Implement a new version of sample_chain with optional tracing.
    • Make MCMC diagnostic tests Eager/TF2 compatible.
    • Implement Categorical to Discrete Values bijector, which maps integer x (0<=x<K) to values[x], where values is a predefined 1D tensor with size K.
    • Run dense, conv variational layer tests in eager mode.
    • Add Empirical distribution to Edward2 (already exists as a TFP distribution).
    • Ensure Gumbel distribution does not produce inf samples.
    • Hid tensor shapes from operators in HMM tests
    • Added Empirical distribution
    • Add the Blockwise bijector.
    • Add MixtureNormal and MixtureLogistic distribution layers.
    • Experimental support for implicit reparameterization gradients in MixtureSameFamily
    • Fix parameter broadcasting in DirichletMultinomial.
    • Add tfp.math.clip_by_value_preserve_gradient.
    • Rename InverseGamma rate parameter to scale, to match its semantics.
    • Added option 'input_output_cholesky' to LKJ distribution.
    • Add a semi-local linear trend STS model component.
    • Added Proximal Hessian Sparse Optimizer (a variant of Newton-Raphson).
    • find_bins(x, edges, ...) added to tfp.stats.
    • Disable explicit caching in masked_autoregressive in eager mode.
    • Add a local level STS model component.
    • Docfix: Fix constraint on valid range of reinterpreted_batch_dims for Independent.
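
    A minimal sketch of JointDistributionSequential for a two-node directed model (the model itself is invented for illustration; each callable receives the previously sampled values):

      import tensorflow_probability as tfp
      tfd = tfp.distributions

      model = tfd.JointDistributionSequential([
          tfd.Normal(loc=0., scale=1.),             # z
          lambda z: tfd.Normal(loc=z, scale=0.5),   # x | z
      ])
      z, x = model.sample()
      model.log_prob([z, x])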

    Huge thanks to all the contributors to this release!

    • Alexey Radul
    • Anudhyan Boral
    • axch
    • Brian Patton
    • cclauss
    • Chikanaga Tomoyuki
    • Christopher Suter
    • Clive Chan
    • Dave Moore
    • Gaurav Jain
    • harrismirza
    • Harris Mirza
    • Ian Langmore
    • Jacob Burnim
    • Janosh Riebesell
    • Jeff Pollock
    • Jiri Simsa
    • joeyhaohao
    • johndebugger
    • Joshua V. Dillon
    • Juan A. Navarro Pérez
    • Junpeng Lao
    • Matej Rizman
    • Matthew O'Kelly
    • MG92
    • Nicola De Cao
    • Parsiad Azimzadeh
    • Pavel Sountsov
    • Philip Pham
    • PJ Trainor
    • Rif A. Saurous
    • Sergei Lebedev
    • Sigrid Keydana
    • Sophia Gu
    • Srinivas Vasudevan
    • ykkawana
    Source code(tar.gz)
    Source code(zip)
  • v0.7.0-rc0(May 30, 2019)

  • v0.6.0(Feb 27, 2019)

    Release notes

    This is the 0.6 release of TensorFlow Probability. It is tested and stable against TensorFlow version 1.13.1.

    Change notes

    • Adds tfp.positive_semidefinite_kernels.RationalQuadratic
    • Support float64 in tfpl.MultivariateNormalTriL.
    • Add IndependentLogistic and IndependentPoisson distribution layers.
    • Add make_value_setter interceptor to set values of Edward2 random variables.
    • Implementation of Kalman Smoother, as a member function of LinearGaussianStateSpaceModel.
    • Bijector caching is enabled only in one direction when executing in eager mode. May cause some performance regression in eager mode if repeatedly computing forward(x) or inverse(y) with the same x or y value.
    • Handle rank-0/empty event_shape in tfpl.Independent{Bernoulli,Normal}.
    • Run additional tests in eager mode.
    • quantiles(x, n, ...) added to tfp.stats.
    • Makes tensorflow_probability compatible with Tensorflow 2.0 TensorShape indexing.
    • Use scipy.special functions when testing KL divergence for Chi, Chi2.
    • Add methods to create forecasts from STS models.
    • Add a MixtureSameFamily distribution layer.
    • Add Chi distribution.
    • Fix doc typo tfp.Distribution -> tfd.Distribution.
    • Add Gumbel-Gumbel KL divergence.
    • Add HalfNormal-HalfNormal KL divergence.
    • Add Chi2-Chi2 KL divergence unit tests.
    • Add Exponential-Exponential KL divergence unit tests.
    • Add sampling test for Normal-Normal KL divergence.
    • Add an IndependentNormal distribution layer.
    • Added posterior_marginals to HiddenMarkovModel
    • Add Pareto-Pareto KL divergence.
    • Add LinearRegression component for structural time series models.
    • Add dataset ops to the graph (or create kernels in Eager execution) during Python Dataset object creation instead of doing it during Iterator creation time.
    • Text messages HMC benchmark.
    • Add example notebook encoding a switching Poisson process as an HMM for multiple changepoint detection.
    • Require num_adaptation_steps argument to make_simple_step_size_update_policy.
    • s/eight_hmc_schools/eight_schools_hmc/ in printed benchmark string.
    • Add tfp.layers.DistributionLambda to enable plumbing tfd.Distribution instances through Keras models.
    • Adding tfp.math.batch_interp_regular_1d_grid.
    • Update description of fill_triangular to include an in-depth example.
    • Enable bijector/distribution composition, e.g., tfb.Exp(tfd.Normal(0,1)) (see the sketch after these notes).
    • linear and midpoint interpolation added to tfp.stats.percentile.
    • Make distributions include only the bijectors they use.
    • tfp.math.interp_regular_1d_grid added
    • tfp.stats.correlation added (Pearson correlation).
    • Update list of edward2 RVs to include recently added Distributions.
    • Density of continuous Uniform distribution includes the upper endpoint.
    • Add support for batched inputs in tfp.glm.fit_sparse.
    • interp_regular_1d_grid added to tfp.math.
    • Added HiddenMarkovModel distribution.
    • Add Student's T Process.
    • Optimize LinearGaussianStateSpaceModel by avoiding matrix ops when the observations are statically known to be scalar.
    • stddev, cholesky added to tfp.stats.
    • Add methods to fit structural time series models to data with variational inference and HMC.
    • Add Expm1 bijector (Y = Exp(X) - 1).
    • New stats namespace. covariance and variance added to tfp.stats
    • Make all available MCMC kernels compatible with TransformedTransitionKernel.
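
    A minimal sketch of the bijector/distribution composition named above (calling a bijector instance on a distribution yields a TransformedDistribution):

      import tensorflow_probability as tfp
      tfd, tfb = tfp.distributions, tfp.bijectors

      # Exp applied to a standard Normal gives a log-normal distribution.
      log_normal = tfb.Exp()(tfd.Normal(loc=0., scale=1.))
      log_normal.sample(3)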

    Huge thanks to all the contributors to this release!

    • Adam Wood
    • Alexey Radul
    • Anudhyan Boral
    • Ashish Saxena
    • Billy Lamberta
    • Brian Patton
    • Christopher Suter
    • Cyril Chimisov
    • Dave Moore
    • Eugene Zhulenev
    • Griffin Tabor
    • Ian Langmore
    • Jacob Burnim
    • Jakub Arnold
    • Jiahao Yao
    • Jihun
    • Jiming Ye
    • Joshua V. Dillon
    • Juan A. Navarro Pérez
    • Julius Kunze
    • Julius Plenz
    • Kristian Hartikainen
    • Kyle Beauchamp
    • Matej Rizman
    • Pavel Sountsov
    • Peter Roelants
    • Rif A. Saurous
    • Rohan Jain
    • Roman Ring
    • Rui Zhao
    • Sergio Guadarrama
    • Shuhei Iitsuka
    • Shuming Hu
    • Srinivas Vasudevan
    • Tabor473
    • ValentinMouret
    • Youngwook Kim
    • Yuki Nagae
    Source code(tar.gz)
    Source code(zip)