Deep GPs built on top of TensorFlow/Keras and GPflow

Overview

GPflux


Documentation | Tutorials | API reference | Slack

What does GPflux do?

GPflux is a toolbox dedicated to Deep Gaussian processes (DGP), the hierarchical extension of Gaussian processes (GP).

GPflux uses the mathematical building blocks from GPflow and marries these with the powerful layered deep learning API provided by Keras. This combination leads to a framework that can be used for:

  • researching new (deep) Gaussian process models, and
  • building, training, evaluating and deploying (deep) Gaussian processes in a modern way — making use of the tools developed by the deep learning community.
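
As a quick flavour of the API, here is a minimal sketch of a single-layer deep GP, mirroring the worked example further down this page; kernel, inducing_variable, X, Y, num_data and output_dim are assumed to be defined as in that example, and the optimiser settings and noise variance are illustrative only:

import tensorflow as tf
import gpflow
import gpflux

# A (deep) GP is a stack of GP layers followed by a likelihood layer.
gp_layer = gpflux.layers.GPLayer(
    kernel, inducing_variable, num_data=num_data, num_latent_gps=output_dim
)
likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian(0.1))
dgp = gpflux.models.DeepGP([gp_layer], likelihood_layer)

# DeepGP exposes a Keras model, so the usual compile/fit workflow applies.
model = dgp.as_training_model()
model.compile(tf.optimizers.Adam(0.01))
model.fit({"inputs": X, "targets": Y}, epochs=100, verbose=0)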

Getting started

In the Documentation, we have multiple Tutorials showing the basic functionality of the toolbox, a benchmark implementation and a comprehensive API reference.

Install GPflux

This project assumes you are using Python 3.

For users

To install the latest (stable) release of the toolbox from PyPI, use pip:

$ pip install gpflux
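
To check that the installed package is importable, you can print its version (this assumes gpflux exposes a __version__ attribute; alternatively, pip show gpflux reports the installed version):

$ python -c "import gpflux; print(gpflux.__version__)"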

For contributors

To install this project in editable mode, run the commands below from the root directory of the GPflux repository.

make install

Check that the installation was successful by running the tests:

make test

You can have a peek at the Makefile to see what these commands do.

The Secondmind Labs Community

Getting help

Bugs, feature requests, pain points, annoying design quirks, etc: Please use GitHub issues to flag up bugs/issues/pain points, suggest new features, and discuss anything else related to the use of GPflux that in some sense involves changing the GPflux code itself. We positively welcome comments or concerns about usability, and suggestions for changes at any level of design. We aim to respond to issues promptly, but if you believe we may have forgotten about an issue, please feel free to add another comment to remind us.

Slack workspace

We have a public Secondmind Labs Slack workspace. Please use this invite link and join the #gpflux channel, whether you'd just like to ask short informal questions or want to be involved in the discussion and future development of GPflux.

Contributing

All constructive input is very much welcome. For detailed information, see the guidelines for contributors.

Maintainers

GPflux was originally created at Secondmind Labs and is now actively maintained by (in alphabetical order) Vincent Dutordoir and ST John. We are grateful to all contributors who have helped shape GPflux.

GPflux is an open source project. If you have relevant skills and are interested in contributing then please do contact us (see "The Secondmind Labs Community" section above).

We are very grateful to our Secondmind Labs colleagues, maintainers of GPflow, Trieste and Bellman, for their help with creating contributing guidelines, instructions for users and open-sourcing in general.

Citing GPflux

To cite GPflux, please reference our arXiv paper, in which we review the framework and describe the design. Sample BibTeX is given below:

@article{dutordoir2021gpflux,
    author = {Dutordoir, Vincent and Salimbeni, Hugh and Hambro, Eric and McLeod, John and
        Leibfried, Felix and Artemev, Artem and van der Wilk, Mark and Deisenroth, Marc P.
        and Hensman, James and John, ST},
    title = {GPflux: A library for Deep Gaussian Processes},
    year = {2021},
    journal = {arXiv:2104.05674},
    url = {https://arxiv.org/abs/2104.05674}
}

License

Apache License 2.0

Comments
  • Attempting to learn models with multidimensional inputs leads to an error.


    Thanks a lot for making this exciting project public! I'm not 100% sure if what I'm reporting is a bug or if this isn't supposed to work in GPflux, but here we go:

    Describe the bug: Attempting to learn models with multidimensional inputs leads to an error.

    To reproduce: First of all, the setup of a toy example and a GPflow SVGP-based version, which works as expected:

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt
    import gpflow
    import gpflux
    from gpflow.utilities import print_summary, set_trainable
    
    tf.keras.backend.set_floatx("float64")
    tf.get_logger().setLevel("INFO")
    
    grid = np.meshgrid(np.linspace(0, np.pi*2, 20),
                       np.linspace(0, np.pi*2, 20))
    X = np.column_stack(tuple(map(np.ravel, grid)))
    Y = (np.sin(X[:, 0]) * np.sin(X[:, 1]))[:, None]
    
    plt.contourf(grid[0], grid[1], Y.reshape(grid[0].shape))
    plt.title("DATA")
    plt.show()
    
    num_data = len(X)
    num_inducing = 10
    output_dim = Y.shape[1]
    
    kernel = (gpflow.kernels.SquaredExponential(active_dims=[0]) *
              gpflow.kernels.SquaredExponential(active_dims=[1]))
    inducing_variable = gpflow.inducing_variables.InducingPoints(
        X[np.random.choice(X.shape[0], size=num_inducing, replace=False),:].copy()
    )
    
    #---------- SVGP
    svgp = gpflow.models.SVGP(kernel, gpflow.likelihoods.Gaussian(), inducing_variable,
                              num_latent_gps=output_dim, num_data=num_data)
    set_trainable(svgp.q_mu, False)
    set_trainable(svgp.q_sqrt, False)
    variational_params = [(svgp.q_mu, svgp.q_sqrt)]
    natgrad_opt = gpflow.optimizers.NaturalGradient(gamma=0.1)
    adam_opt = tf.optimizers.Adam(0.01)
    minibatch_size = 10
    train_dataset = tf.data.Dataset.from_tensor_slices(
        (X, Y)).repeat().shuffle(num_data)
    iter_train = iter(train_dataset.batch(minibatch_size))
    objective = svgp.training_loss_closure(iter_train, compile=True)
    
    @tf.function
    def optim_step():
        natgrad_opt.minimize(objective, var_list=variational_params)
        adam_opt.minimize(objective, svgp.trainable_variables)
    
    for i in range(100):
        optim_step()
    elbo = -objective().numpy()
    print(f"it: {i} of dual-optimizer... elbo: {elbo}")
    
    
    atgrid = np.meshgrid(np.linspace(0, np.pi*2, 40),
                         np.linspace(0, np.pi*2, 40))
    atX = np.column_stack(tuple(map(np.ravel, atgrid)))
    
    mean, var = svgp.predict_f(atX)
    plt.contourf(atgrid[0], atgrid[1], mean.numpy().reshape(atgrid[0].shape))
    plt.title("SVGP")
    plt.show()
    

    And here is a single-layer DGP with GPflux:

    #---------- DEEPGP
    gp_layer = gpflux.layers.GPLayer(
        kernel, inducing_variable, num_data=num_data, num_latent_gps=output_dim
    )
    
    likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian(0.1))
    
    single_layer_dgp = gpflux.models.DeepGP([gp_layer], likelihood_layer)
    model = single_layer_dgp.as_training_model()
    model.compile(tf.optimizers.Adam(0.01))
    
    log = model.fit({"inputs": X, "targets": Y}, epochs=int(100), verbose=1)
    

    which throws the following error when reaching the last line of the example:

    ValueError: in user code:
    
        venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
            return step_function(self, iterator)
        venv/lib/python3.7/site-packages/gpflux/layers/gp_layer.py:277 call  *
            outputs = super().call(inputs, *args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/layers/distribution_layer.py:252 call  **
            inputs, *args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py:917 call
            result = self.function(inputs, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/layers/distribution_layer.py:172 _fn
            d = make_distribution_fn(*fargs, **fkwargs)
        venv/lib/python3.7/site-packages/gpflux/layers/gp_layer.py:328 _make_distribution_fn
            return tfp.distributions.MultivariateNormalDiag(loc=mean, scale_diag=tf.sqrt(cov))
        <decorator-gen-394>:2 __init__
            
        venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/distribution.py:298 wrapped_init
            default_init(self_, *args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py:538 new_func
            return func(*args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/mvn_diag.py:252 __init__
            name=name)
        <decorator-gen-322>:2 __init__
            
        venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/distribution.py:298 wrapped_init
            default_init(self_, *args, **kwargs)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/mvn_linear_operator.py:190 __init__
            loc, scale)
        venv/lib/python3.7/site-packages/tensorflow_probability/python/internal/distribution_util.py:136 shapes_from_loc_and_scale
            'of `loc` ({}).'.format(event_size_, loc_event_size_))
    
        ValueError: Event size of `scale` (1) could not be broadcast up to that of `loc` (2).
    

    Expected behaviour: I expected this not to throw an error and to produce an (at least qualitatively) similar result to the SVGP implementation, but again, I'm not sure if this expectation is justified.

    System information

    • OS: Linux, kernel 5.4.112-1
    • Python version: 3.7.5
    • GPflux version: 0.1.0 from pip
    • TensorFlow version: 2.4.1
    • GPflow version: 2.1.5
    bug 
    opened by clwgg 6
  • Conditional Density Estimation notebook


    Notebook building and fitting a deep (two-layer) latent variable model using VI. No changes to the core of GPflux are required, but careful setting of the fitting options is necessary. For example, it is important to set shuffle to False and batch_size to the number of datapoints to obtain correct optimisation of the latent variables.
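
    A hedged sketch of what such a fit call could look like — model is assumed to be the notebook's training model, taking the same inputs/targets dictionary as the DeepGP training models elsewhere on this page, and the number of epochs is illustrative:

    num_data = len(X)
    # Full-batch, unshuffled training keeps each latent variable aligned with
    # its datapoint throughout optimisation.
    model.fit(
        {"inputs": X, "targets": Y},
        batch_size=num_data,
        shuffle=False,
        epochs=1000,
    )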

    opened by vdutor 3
  • Customize NatGrad Model to turn off variational parameters in hidden layers


    It seems that the NatGradModel class's requirement of having a NaturalGradient optimizer for each layer reduces flexibility for the user, as this forces every layer's variational parameters to be optimized. Even when switching the parameters off manually from the outside through set_trainable, TensorFlow throws a ValueError: None values not supported. The goal is to turn off the variational parameters in all layers except the last hidden layer. Is there any way to get around this issue?

      # Set all var params in inner layers off
      var_params = [(layer.q_mu, layer.q_sqrt) for layer in dgp_model.f_layers[:-1]]
      for vv in var_params:
          set_trainable(vv[0], False)
          set_trainable(vv[1], False)
    
      # Train Last Layer with NatGrad: (NOTE: this uses the given class from gpflux and not customized NatGradModel_)
      train_mode = NatGradWrapper(dgp_model.as_training_model())
      train_mode.compile([NaturalGradient(gamma=0.01), NaturalGradient(gamma=1.0), tf.optimizers.Adam(0.001)])
      history = train_mode.fit({"inputs": Xsc, "targets": Y}, epochs=int(5000), verbose=1)
    

    I only got it to work by changing the _split_natgrad_params_and_other_vars and optimizer.setter functions. Although it works, I'm not too sure whether it is correct.

    # Imports needed for this snippet:
    from typing import Any, List, Mapping, Tuple

    import tensorflow as tf
    import gpflow
    from gpflow import Parameter
    from gpflow.optimizers import NaturalGradient


    class NatGradModel_(tf.keras.Model):
    
        @property
        def natgrad_optimizers(self) -> List[gpflow.optimizers.NaturalGradient]:
            if not hasattr(self, "_all_optimizers"):
                raise AttributeError(
                    "natgrad_optimizers accessed before optimizer being set"
                )  # pragma: no cover
            if self._all_optimizers is None:
                return None  # type: ignore
            return self._all_optimizers
    
        @property
        def optimizer(self) -> tf.optimizers.Optimizer:
    
            if not hasattr(self, "_all_optimizers"):
                raise AttributeError("optimizer accessed before being set")
            if self._all_optimizers is None:
                return None
            return self._all_optimizers
    
        @optimizer.setter
        def optimizer(self, optimizers: List[NaturalGradient]) -> None:
            # # Remove AdamOptimizer Requirement
            if optimizers is None:
                # tf.keras.Model.__init__() sets self.optimizer = None
                self._all_optimizers = None
                return
    
            if optimizers is self.optimizer:
                # Keras re-sets optimizer with itself; this should not have any effect on the state
                return
    
            self._all_optimizers = optimizers
    
        def _split_natgrad_params_and_other_vars(
            self,
        ) -> List[Tuple[Parameter, Parameter]]:
    
            # self.layers[-1] is Likelihood Layer, self.layers[-2] is Input Layer,
            # Last hidden layer is self.layers[-3]
            variational_params = [(self.layers[-3].q_mu, self.layers[-3].q_sqrt)]
    
            return variational_params
    
        def _apply_backwards_pass(self, loss: tf.Tensor, tape: tf.GradientTape) -> None:
     
            variational_params = self._split_natgrad_params_and_other_vars()
            variational_params_vars = [
                (q_mu.unconstrained_variable, q_sqrt.unconstrained_variable)
                for (q_mu, q_sqrt) in variational_params
            ]
    
            variational_params_grads = tape.gradient(loss, (variational_params_vars))
    
    
            num_natgrad_opt = len(self.natgrad_optimizers)
            num_variational = len(variational_params)
            if len(self.natgrad_optimizers) != len(variational_params):
                raise ValueError(
                    f"Model has {num_natgrad_opt} NaturalGradient optimizers, "
                    f"but {num_variational} variational distributions"
                )  # pragma: no cover
    
            for (natgrad_optimizer, (q_mu_grad, q_sqrt_grad), (q_mu, q_sqrt)) in zip(
                self.natgrad_optimizers, variational_params_grads, variational_params
            ):
                natgrad_optimizer._natgrad_apply_gradients(q_mu_grad, q_sqrt_grad, q_mu, q_sqrt)
    
    
        def train_step(self, data: Any) -> Mapping[str, Any]:
            """
            The logic for one training step. For more details of the
            implementation, see TensorFlow's documentation of how to
            `customize what happens in Model.fit
            <https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit>`_.
            """
            from tensorflow.python.keras.engine import data_adapter
    
            data = data_adapter.expand_1d(data)
            x, y, sample_weight = data_adapter.unpack_x_y_sample_weight(data)
    
            with tf.GradientTape() as tape:
                y_pred = self.__call__(x, training=True)
                loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses)
    
            self._apply_backwards_pass(loss, tape=tape)
    
            self.compiled_metrics.update_state(y, y_pred, sample_weight)
            return {m.name: m.result() for m in self.metrics}
    
    

    The problem that I'm trying to reproduce is from https://github.com/ICL-SML/Doubly-Stochastic-DGP/blob/master/demos/using_natural_gradients.ipynb

    However, even with the same settings, I am still unable to reproduce the results.

    opened by izsahara 3
  • GPLayer's prediction seems to be too confident


    Looking at the "Hybrid Deep GP models: ..." tutorial, the GPLayer's prediction seems to be too confident, i.e. its uncertainty estimate (95% confidence level) does not cover the training data spread. Its prediction accuracy (mean), however, is very good, nearly identical to that of the neural network model obtained by removing the GPLayer.

    When I replace the GPLayer with a TFP DenseVariational layer using Gaussian priors, the prediction accuracy is not as good. Importantly, however, the uncertainty estimate is very good, covering the training data spread well.

    Without a good uncertainty estimate, the GPLayer seems to add little value over the neural network model, which already provides good prediction accuracy.

    bug 
    opened by dtchang 3
  • GPLayer doesn't seem to support multiple input units


    Using the "Hybrid Deep GP models: ..." tutorial, when I changed tf.keras.layers.Dense(1, activation="linear"), to tf.keras.layers.Dense(2, activation="linear"), I got an error, same as reported in #27.

    I then set a Zero mean function, as suggested in #27: gp_layer = gpflux.layers.GPLayer( kernel, inducing_variable, num_data=num_data, num_latent_gps=output_dim, mean_function=gpflow.mean_functions.Zero() ), I got a different error.

    bug 
    opened by dtchang 3
  • Update quality-check.yaml


    GPflux does not work at present with TensorFlow 2.5.0. @st-- has an open PR (#30) exploring how this could work.

    At present the develop build fails. This PR bounds the TensorFlow version from above at 2.5.0.

    opened by johnamcleod 3
  • GPflux for text classification?


    Hey, many thanks for this project! I am currently investigating GPs for binary (and one-class) classification tasks and did some first experiments using pre-trained sentence embeddings for feature representation, PCA for dimension reduction and GPs (GPflow) for classification. It sounds promising to use a text embedding, some dense layers and a GP in an end-to-end fashion. At first glance, GPflux seems to offer this. After checking the GPflux tutorials (Hybrid Deep GP models), I am actually not sure how to define the inducing variables. It seems like they have to cover the expected data ranges in each latent space dimension, right? Furthermore, I am not sure if GPflux offers variational inference for binary classification. Any comments, suggestions or links that could help to build hybrid models are appreciated. Many thanks! Kind regards, Jens
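
    Not an official recommendation, but one common way to get inducing points that cover the latent space is to initialise them from a subset of the training data pushed through the feature-extracting layers; a sketch follows, where feature_extractor, X and num_inducing are assumed to be defined:

    import numpy as np
    import gpflow

    # Map training inputs into the latent space the GP layer will see,
    # then pick a random subset as the initial inducing point locations.
    latent = feature_extractor(X).numpy()
    idx = np.random.choice(len(latent), size=num_inducing, replace=False)
    inducing_variable = gpflow.inducing_variables.InducingPoints(latent[idx].copy())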

    opened by kerstenj 2
  • Carry on using Ubuntu 20.04 for now


    ubuntu-latest has recently been updated to use Ubuntu 22.04, which seems to break our tests. While we investigate this we should continue to use the old builder.

    opened by uri-granta 1
  • Update to newer GPflow and TensorFlow.


    Update to make GPflux compatible with newer versions of GPflow, which require an additional X parameter in the likelihoods. In this PR I just pass a dummy None value as X. Alternatively:

    1. I don't know Keras and GPflux well, but maybe we can find a "real" value of X to use?
    2. Do we want to attempt to write code that's compatible with earlier versions of GPflow as well? I suppose we could add an if somewhere?
    opened by jesnie 1
  • Support tensorflow 2.5 through 2.8.


    1. Dropped support for Python 3.6.
    2. Added Python 3.9 and 3.10.
    3. Added TensorFlow 2.6, 2.7 and 2.8.
    4. Updated github actions to test all of these combinations.
    5. Had to update some of the tests_requirements - this caused some reformatting.
    6. tfp.Distributions are sometimes wrapped in a _TensorCoercible - I added unwrap_dist to handle this.
    7. For some versions there are problems serialising gpflow.Parameters. I skip the relevant tests.
    8. Apparently the tags that are exported by TensorBoard change slightly with version. Added a version test for that.
    9. Had to down-adjust coverage to 96% - presumably related to the skipped tests above.

    Note that the changes to the build system will require you/us to update the settings for which tests are required to merge.

    opened by jesnie 1
  • Place upper bound on TensorFlow (Probability) dependencies


    TensorFlow (TF) 2.6.0 introduces some breaking changes. Until these are addressed, we must ensure that the installed versions are strictly less than 2.6.0 for TF and 0.14.0 for TF-Probability (TF-Probability 0.14.0 and above requires TensorFlow 2.6.0 or greater).
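
    A sketch of what such pins could look like in setup.py — the upper bounds follow the constraint described above, while the lower bounds shown are placeholders, not taken from the actual requirements:

    # Illustrative dependency pins for setup.py / install_requires.
    install_requires = [
        "tensorflow>=2.4.0,<2.6.0",
        "tensorflow-probability>=0.12.0,<0.14.0",
    ]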

    opened by ltiao 1
  • Sebastian.p/orth dgp


    Implementation of "Sparse Orthogonal Variational Inference for Gaussian Processes".

    Created the following folders:

    • conditionals: needs a specialized form to account for the two different GPs that have to be summed, g() and h(), as in the paper.
    • covariances: needed to compute Cvv and Cvf as in the paper. These covariances rely on the other set of inducing points.
    • posteriors.py: different conditionals are needed here.
    • conditionals.util.conditional_GP_maths might have to be re-designed, or at least have its name changed to something more sensible.

    opened by SebastianPopescu 5
  • Fix for new GPflow heteroskedastic likelihood breaks for quadrature dependent likelihoods


    In https://github.com/secondmind-labs/GPflux/pull/84 there were several changes made to accommodate the new framework in GPflow for heteroskedastic likelihoods. More precisely, no_X = None in gpflux/layers/likelihood_layer.py.

    This works well with Gaussian or Student-t likelihoods; however, it will break when using Softmax, which uses quadrature for variational_expectations or predict_mean_and_var. Both methods require access to the shape of X, so because we are currently passing None, this results in an error.

    bug 
    opened by SebastianPopescu 0
  • Add version upper bound before adjusting to gpflow>=2.6


    Describe the bug: GPflow>=2.6 seems to change the way the likelihood p(Y|F,X) is calculated, from using only F and Y to using X, F and Y. I guess this change is incompatible with the current GPflux develop branch.

    To reproduce: Steps to reproduce the behaviour:

    1. git clone https://github.com/secondmind-labs/GPflux.git
    2. pip install -e .
    3. python ./gpflux/docs/notebooks/intro.py

    An error will occur at Line 99.

    System information

    • OS: Ubuntu20.04
    • Python version: 3.9.13
    • GPflux version: develop branch f95e1cb
    • TensorFlow version: 2.8.3
    • GPflow version: 2.6.1

    Additional context: Everything works fine when switching to gpflow==2.5.2.

    bug 
    opened by zjowowen 3
  • Sebastian.p/generalized rff


    The purpose of this PR is to support sampling with models

    • that can have SeparateIndependent and SharedIndependent kernels & inducing variables.
    • with Heteroskedastic likelihoods (so multiple GP heads)

    Main changes:

    • creation of the feature_decomposition_kernels folder. The idea was to structure it just like in GPflow (i.e. a multioutput subfolder). Having a separate folder for this type of kernel will prove better suited if we plan on including other papers in the future, such as [1] Solin, Arno, and Simo Särkkä. "Hilbert space methods for reduced-rank Gaussian process regression." Statistics and Computing (2020), and [2] Borovitskiy, Viacheslav, et al. "Matérn Gaussian processes on Riemannian manifolds." In Advances in Neural Information Processing Systems (2020).
    • in gpflux.layers.basis_functions.fourier_features I have added the multioutput version
    • in gpflux.sampling I have added the multioutput version
    opened by SebastianPopescu 3
Releases (v0.3.1)
  • v0.3.1 (Nov 17, 2022)

    What's Changed

    • Add conditionally tensorflow-macos to setup.py by @vdutor in https://github.com/secondmind-labs/GPflux/pull/77
    • removing dependency on setting the Keras backend by @hstojic in https://github.com/secondmind-labs/GPflux/pull/79
    • Update to newer GPflow and TensorFlow. by @jesnie in https://github.com/secondmind-labs/GPflux/pull/84
    • Bump version to 0.3.1 by @uri-granta in https://github.com/secondmind-labs/GPflux/pull/86

    New Contributors

    • @hstojic made their first contribution in https://github.com/secondmind-labs/GPflux/pull/79
    • @uri-granta made their first contribution in https://github.com/secondmind-labs/GPflux/pull/86

    Full Changelog: https://github.com/secondmind-labs/GPflux/compare/v0.3.0...v0.3.1

  • v0.3.0 (May 30, 2022)

    What's Changed

    • Add priors to kernel hyperparameters to loss by @vdutor in https://github.com/secondmind-labs/GPflux/pull/62
    • Restructure basis function modules by @ltiao in https://github.com/secondmind-labs/GPflux/pull/63
    • Use iv.num_inducing instead of len(iv), for compatibility with future GPflow. by @jesnie in https://github.com/secondmind-labs/GPflux/pull/66
    • adding import in init to make "fourier_features" module available by @NicolasDurrande in https://github.com/secondmind-labs/GPflux/pull/69
    • Fixing issue #70 by @sebastianober in https://github.com/secondmind-labs/GPflux/pull/71
    • Support tensorflow 2.5 through 2.8. by @jesnie in https://github.com/secondmind-labs/GPflux/pull/72
    • Pin protobuf to 3.19.0 by @vdutor in https://github.com/secondmind-labs/GPflux/pull/73

    New Contributors

    • @jesnie made their first contribution in https://github.com/secondmind-labs/GPflux/pull/66
    • @NicolasDurrande made their first contribution in https://github.com/secondmind-labs/GPflux/pull/69
    • @sebastianober made their first contribution in https://github.com/secondmind-labs/GPflux/pull/71

    Full Changelog: https://github.com/secondmind-labs/GPflux/compare/v0.2.7...v0.3.0

  • v0.2.7 (Nov 8, 2021)

  • v0.2.6 (Nov 8, 2021)

  • v0.2.5 (Nov 8, 2021)

    What's Changed

    • hotfix version by @tensorlicious in https://github.com/secondmind-labs/GPflux/pull/59
    • fix version in setup.py by @tensorlicious in https://github.com/secondmind-labs/GPflux/pull/60

    New Contributors

    • @tensorlicious made their first contribution in https://github.com/secondmind-labs/GPflux/pull/59

    Full Changelog: https://github.com/secondmind-labs/GPflux/compare/v0.2.4...v0.2.5

  • v0.2.4 (Nov 5, 2021)

    What's Changed

    • TF 2.5 compatibility by @vdutor in https://github.com/secondmind-labs/GPflux/pull/48
    • Place upper bound on TensorFlow (Probability) dependencies by @ltiao in https://github.com/secondmind-labs/GPflux/pull/52
    • Make RFF weights explicitly not trainable by @ltiao in https://github.com/secondmind-labs/GPflux/pull/51
    • Refactoring basis functions by @ltiao in https://github.com/secondmind-labs/GPflux/pull/53
    • Added support for alternative Fourier feature map by @ltiao in https://github.com/secondmind-labs/GPflux/pull/54
    • Quadrature Fourier features by @ltiao in https://github.com/secondmind-labs/GPflux/pull/56
    • Orthogonal Random Features by @ltiao in https://github.com/secondmind-labs/GPflux/pull/57

    New Contributors

    • @ltiao made their first contribution in https://github.com/secondmind-labs/GPflux/pull/52

    Full Changelog: https://github.com/secondmind-labs/GPflux/compare/v0.2.3...v0.2.4

  • v0.2.3 (Aug 24, 2021)

    Release 0.2.3

    Bugfixes

    Fix PyPi upload Github Action

    Thanks to our Contributors

    This release contains contributions from (alphabetical order)

    @sebastianober, @vdutor

  • v0.2.2 (Aug 24, 2021)

    Release 0.2.2

    Bugfixes

    • Fix PyPi upload Github Action (#46)

    Thanks to our Contributors

    This release contains contributions from (alphabetical order)

    @sebastianober, @vdutor

  • v0.2.1 (Aug 24, 2021)

    Release 0.2.1

    Bugfixes

    • Fix PyPi upload Github Action (#45)

    Thanks to our Contributors

    This release contains contributions from (alphabetical order)

    @sebastianober, @vdutor

  • v0.2.0 (Aug 24, 2021)

    Release 0.2.0

    Improvements

    • Allow for whitening in sampling methods (#26)
    • Add warning for default Identity mean function in GPLayer (#42)
    • Allow the user to specify which layers to train with NaturalGradient (#43)

    Documentation

    • Update README with link to GPflux paper (#22)
    • Clean notebook deep_gp_samples notebook (#23)
    • Fix header in efficient_sampling notebook (#24)
    • New notebook on conditional density estimation with GPflux (#40)
    • Improve plotting in gpflux_with_keras_layers (#41)

    Thanks to our Contributors

    This release contains contributions from (alphabetical order)

    @johnamcleod, @sebastianober, @st--, @vdutor

Owner
Secondmind Labs