
Overview

Bayesian-Torch: Bayesian neural network layers for uncertainty estimation

Get started | Example usage | Documentation | License | Citing

Bayesian layers and utilities to perform stochastic variational inference in PyTorch

Bayesian-Torch is a library of neural network layers and utilities extending the core of PyTorch to enable the user to perform stochastic variational inference in Bayesian deep neural networks. Bayesian-Torch is designed to be flexible and seamless in extending a deterministic deep neural network architecture to its corresponding Bayesian form by simply replacing the deterministic layers with Bayesian layers.

The repository has implementations for the following Bayesian layers:

  • Variational layers with reparameterized Monte Carlo estimators (Blundell et al. 2015)

    LinearReparameterization
    Conv1dReparameterization, Conv2dReparameterization, Conv3dReparameterization, ConvTranspose1dReparameterization, ConvTranspose2dReparameterization, ConvTranspose3dReparameterization
    LSTMReparameterization

  • Variational layers with Flipout Monte Carlo estimators (Wen et al. 2018)

    LinearFlipout
    Conv1dFlipout, Conv2dFlipout, Conv3dFlipout, ConvTranspose1dFlipout, ConvTranspose2dFlipout, ConvTranspose3dFlipout
    LSTMFlipout

  • Variational layers with Gaussian mixture model (GMM) posteriors using reparameterized Monte Carlo estimators (in pre-alpha)

    LinearMixture
    Conv1dMixture, Conv2dMixture, Conv3dMixture, ConvTranspose1dMixture, ConvTranspose2dMixture, ConvTranspose3dMixture
    LSTMMixture
    

Please refer to the documentation of the Bayesian layers for details.
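
To illustrate the drop-in replacement, the sketch below converts a small deterministic model into its Bayesian counterpart using LinearReparameterization. It follows the usage shown in the issue threads further down; treat the exact constructor defaults as assumptions and check the layer documentation for the full signatures.

import torch
import torch.nn as nn
from bayesian_torch.layers import LinearReparameterization


class DeterministicMLP(nn.Module):
    def __init__(self, in_features, hidden_dim, out_features):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_features)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))


class BayesianMLP(nn.Module):
    # Same architecture with the deterministic layers swapped for Bayesian ones.
    # Each Bayesian layer returns (output, kl); the KL terms are summed and
    # returned so they can be added to the training loss.
    def __init__(self, in_features, hidden_dim, out_features):
        super().__init__()
        self.fc1 = LinearReparameterization(in_features, hidden_dim)
        self.fc2 = LinearReparameterization(hidden_dim, out_features)

    def forward(self, x):
        x, kl1 = self.fc1(x)
        x = torch.relu(x)
        x, kl2 = self.fc2(x)
        return x, kl1 + kl2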

Other features include:

  • dnn_to_bnn(): an API to convert a deterministic deep neural network (DNN) model of any architecture into a Bayesian deep neural network (BNN) model by drop-in replacement of Convolutional, Linear and LSTM layers with the corresponding Bayesian layers (introduced in release v0.2.0; see Releases below and the sketch that follows).
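
A minimal sketch of the dnn_to_bnn() conversion workflow, based on the release notes below. The configuration keys (prior_mu, prior_sigma, posterior_mu_init, posterior_rho_init, type, moped_enable, moped_delta) are assumptions here and may differ in your installed version.

import torchvision
from bayesian_torch.models.dnn_to_bnn import dnn_to_bnn, get_kl_loss

# Prior/posterior settings for the converted layers; key names are assumptions,
# so check the dnn_to_bnn documentation of your installed version.
const_bnn_prior_parameters = {
    "prior_mu": 0.0,
    "prior_sigma": 1.0,
    "posterior_mu_init": 0.0,
    "posterior_rho_init": -3.0,
    "type": "Reparameterization",  # or "Flipout"
    "moped_enable": False,         # True to initialize from pretrained deterministic weights
    "moped_delta": 0.5,
}

model = torchvision.models.resnet18()
dnn_to_bnn(model, const_bnn_prior_parameters)  # replaces layers in place

# During training, the accumulated KL divergence is added to the task loss, e.g.:
# loss = criterion(output, target) + get_kl_loss(model) / batch_size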

Installation

Install from source:

git clone https://github.com/IntelLabs/bayesian-torch
cd bayesian-torch
pip install .

This code has been tested on PyTorch v1.6.0 and torchvision v0.7.0 with Python 3.7.7.

Dependencies:

  • Create a conda environment with python=3.7
  • Install the PyTorch and torchvision packages within the conda environment, following the instructions from the PyTorch install guide
  • conda install -c conda-forge accimage
  • pip install tensorboard
  • pip install scikit-learn

Example usage

We have provided example model implementations using the Bayesian layers.

We also provide example usages and scripts to train/evaluate the models. Instructions for the CIFAR10 examples are provided below; similar scripts for ImageNet and MNIST are also available.

cd bayesian_torch

Training

To train Bayesian ResNet on CIFAR10, run this command:

Mean-field variational inference (Reparameterized Monte Carlo estimator)

sh scripts/train_bayesian_cifar.sh

Mean-field variational inference (Flipout Monte Carlo estimator)

sh scripts/train_bayesian_flipout_cifar.sh

To train deterministic ResNet on CIFAR10, run this command:

Vanilla

sh scripts/train_deterministic_cifar.sh

Evaluation

To evaluate Bayesian ResNet on CIFAR10, run this command:

Mean-field variational inference (Reparameterized Monte Carlo estimator)

sh scripts/test_bayesian_cifar.sh

Mean-field variational inference (Flipout Monte Carlo estimator)

sh scripts/test_bayesian_flipout_cifar.sh

To evaluate deterministic ResNet on CIFAR10, run this command:

Vanilla

sh scripts/test_deterministic_cifar.sh
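
The evaluation scripts report accuracy and uncertainty from multiple stochastic forward passes. As a generic pattern (not taken from the repository scripts; the model is assumed to return a (logits, kl) tuple as in the examples above), predictive mean and uncertainty can be estimated roughly as follows:

import torch
import torch.nn.functional as F


def predict_with_uncertainty(model, x, num_mc=20):
    # Draw num_mc stochastic forward passes; Bayesian layers sample new weights
    # on every call, so the spread of the softmax outputs reflects the model's
    # predictive uncertainty.
    model.eval()
    probs = []
    with torch.no_grad():
        for _ in range(num_mc):
            output, _kl = model(x)  # assumes the model returns (logits, kl)
            probs.append(F.softmax(output, dim=1))
    probs = torch.stack(probs)  # shape: [num_mc, batch, num_classes]
    return probs.mean(dim=0), probs.std(dim=0)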

Citing

If you use this code, please cite as:

@misc{krishnan2020bayesiantorch,
    author = {Ranganath Krishnan and Piero Esposito},
    title = {Bayesian-Torch: Bayesian neural network layers for uncertainty estimation},
    year = {2020},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/IntelLabs/bayesian-torch}}
}

Please also cite the weight sampling methods: Blundell et al. (2015) for the reparameterization estimator and Wen et al. (2018) for the Flipout estimator.

Contributors

  • Ranganath Krishnan
  • Piero Esposito

This code is intended for researchers and developers; it enables quantifying principled uncertainty estimates from deep neural network predictions using stochastic variational inference in Bayesian neural networks. Feedback, issues, and contributions are welcome. Email [email protected] with any questions.

Comments
  • The average should be taken over log probability rather than logits

    The average should be taken over log probability rather than logits

    https://github.com/IntelLabs/bayesian-torch/blob/7abcfe7ff3811c6a5be6326ab91a8d5cb1e8619f/bayesian_torch/examples/main_bayesian_cifar.py#L363-L367 I think the average across the MC runs should be taken over the log probability. However, the output here is the logits before the softmax operation. I think we may first run output = F.log_softmax(output, dim=1) and then take the average.

    There are two equivalent ways to take the average, which I think are more reasonable. The first way is:

    output_ = []
    kl_ = []
    for mc_run in range(args.num_mc):
        output, kl = model(input_var)
        output = F.log_softmax(output, dim=1)
        output_.append(output)
        kl_.append(kl)
    output = torch.mean(torch.stack(output_), dim=0)
    loss = F.nll_loss(output, target_var)  # this replaces the original cross_entropy loss
    

    Or equivalently, we can first take the cross-entropy loss for each MC run, and average the losses at the end:

    kl_ = []
    loss = 0
    for mc_run in range(args.num_mc):
        output, kl = model(input_var)
        loss = loss + F.cross_entropy(output, target_var)
        kl_.append(kl)
    loss = loss / args.num_mc  # this replaces the original cross_entropy loss
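
    For reference, the two variants agree because nll_loss is linear in the log-probabilities it receives (it just selects and negates the target entry), so

    $\frac{1}{M}\sum_{m=1}^{M} \mathrm{NLL}(\log p_m,\, y) = \mathrm{NLL}\Big(\frac{1}{M}\sum_{m=1}^{M} \log p_m,\, y\Big)$

    where $p_m$ is the softmax output of the $m$-th MC forward pass.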
    
    question 
    opened by Nebularaid2000 5
  • KL divergence not changing during training

    KL divergence not changing during training

    Hello,

    I am trying to make a single layer BNN using the LinearReparameterization layer. I am unable to get it to give reasonable uncertainty estimates, so I started monitoring the KL term from the layers and noticed that it is not changing at all for each epoch. Even when I scale up the KL term in the loss, it remains unchanged.

    I am not sure if this is a bug, or if I am not doing the training correctly.

    My model

    import torch.nn as nn
    import torch.nn.functional as F
    from bayesian_torch.layers import LinearReparameterization

    class BNN(nn.Module):
        def __init__(self, input_dim, hidden_dim, output_dim=1):
            super().__init__()
            self.layer1 = LinearReparameterization(input_dim, hidden_dim)
            self.layerf = nn.Linear(hidden_dim, output_dim)

        def forward(self, x):
            kl_sum = 0
            x, kl = self.layer1(x)
            kl_sum += kl
            x = F.relu(x)
            x = self.layerf(x)
            return x, kl_sum
    

    and my training loop

    model = BNN(X_train.shape[-1], 100).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    criterion = torch.nn.MSELoss()
    
    for epoch in pbar:
            running_kld_loss = 0
            running_mse_loss = 0
            running_loss = 0
            for datapoints, labels in dataloader_train:
                optimizer.zero_grad()
                
                output, kl = model(datapoints)
                kl = get_kl_loss(model)
                
                # calculate loss with kl term for Bayesian layers
                mse_loss = criterion(output, labels)
                loss = mse_loss + kl * kld_beta / batch_size
                
                loss.backward()
                optimizer.step()
                
                running_mse_loss += mse_loss.detach().numpy()
                running_kld_loss += kl.detach().numpy()
                running_loss += loss.detach().numpy()
                
            status.update({
                'Epoch': epoch, 
                'loss': running_loss/len(dataloader_train),
                'kl': running_kld_loss/len(dataloader_train),
                'mse': running_mse_loss/len(dataloader_train)
            })
    

    When I print the KL loss, it starts at ~5.0 and does not decrease at all.

    opened by gkwt 4
  • Pytorch version issue

    Pytorch version issue

    Hi, it seems that this software only supports PyTorch 1.8.1 LTS or newer versions. When I try to use this code in my own project (which is mainly based on PyTorch 1.4.0), conflicts occur that seem unavoidable. Could you please update this software so that it also supports older PyTorch versions, for example 1.4.0? Many thanks!

    opened by hebowei2000 4
  • conv_transpose2d parameter order problem

    conv_transpose2d parameter order problem

    https://github.com/IntelLabs/bayesian-torch/blob/f6f516e9b3721466aa0036c735475a421cc3ce80/bayesian_torch/layers/variational_layers/conv_variational.py#L784-L786


    When I try to convert a deterministic autoencoder into a Bayesian one with a ConvTranspose2d layer, I get the error "Exception has occurred: TypeError conv_transpose2d(): argument 'groups' (position 7) must be int, not tuple", which I suspect comes from self.dilation and self.group being swapped.

    opened by pierreosselin 3
  • Let the models return prediction only, saving KL Divergence as an attribute

    Let the models return prediction only, saving KL Divergence as an attribute

    Closes #7.

    Lets the user, if they want, return predictions only from the forward method, while saving the KL divergence as an attribute. This is important to make it easier to integrate into PyTorch models.

    Also, it does not break the lib as it is: we added a new parameter on the forward method that defaults to True and, if manually set to False, returns predictions only.

    Performed the following changes on all layers:

    • Added return_kl to all forward methods, defaulting to True. If set to False, the layer won't return kl.
    • Added a new kl attribute to each layer, updated at every forward pass. Useful when integrating with already-built PyTorch models.

    That should help with integrating into PyTorch experiments while keeping backward compatibility for this lib.
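
    A hypothetical usage sketch of the proposed flag (the layer choice and tensor shapes are illustrative only):

    import torch
    from bayesian_torch.layers import LinearReparameterization

    x = torch.randn(8, 64)
    layer = LinearReparameterization(in_features=64, out_features=10)

    out, kl = layer(x)               # default: behaviour unchanged, returns (out, kl)
    out = layer(x, return_kl=False)  # with the new flag: predictions only
    kl = layer.kl                    # KL divergence saved as a layer attribute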

    enhancement 
    opened by piEsposito 3
  • Fix KL computation

    Fix KL computation

    Hello,

    I think there might be a few problems in your model definitions.

    In particular:

    • in resnet_flipout.py and resnet_variational.py you only sum the kl of the last block inside self.layerN
    • in resnet_flipout_large.py and resnet_variational_large.py you check for is None while you probably want is not None, or actually no check at all, since it can't be None in any reasonable setting. Also, the str(layer) check is odd since layer is a BasicBlock or BottleNeck object (you're looping over an nn.Sequential of blocks). In fact, that string check is very likely superfluous (didn't test this, but I did include it in this PR as an example)

    I hope you can confirm and perhaps fix these issues, which will help me (and maybe others) in building on your nice codebase :)

    opened by y0ast 3
  • Enable the Bayesian layer to freeze the parameters to their mean values

    Enable the Bayesian layer to freeze the parameters to their mean values

    I think it would be good to provide an option to freeze the weights and biases to their mean values at inference time. The forward function would look something like this:

    def forward(self, input, sample=True):
        if sample:
            # do sampling and forward as in the current code
        else:
            # set weight=self.mu_weight
            # set bias=self.mu_bias
            # (optional) set kl=0, since it is useless in this case
        return out, kl
    
    opened by Nebularaid2000 2
  • FR: Enable forward method of Bayesian Layers to return value only for smoother integration with PyTorch

    FR: Enable forward method of Bayesian Layers to return value only for smoother integration with PyTorch

    It would be nice if we could store the KL divergence value as an attribute of the Bayesian layers and return it from the forward method only if needed.

    With that, there is less friction when integrating with PyTorch, being able to "plug and play" bayesian-torch layers in deterministic models.

    It would be something like that:

    def forward(self, x, return_kl=False):
        ...
        self.kl = kl
    
        if return_kl:
            return out, kl
        return out   
    

    We can then get it from the Bayesian layers when calculating the loss, with no harmful or hard changes to the code, which might encourage users to try the lib.

    I can work on that also.

    enhancement 
    opened by piEsposito 2
  • Modifying the scripts for an arbitrary output size

    Modifying the scripts for an arbitrary output size

    I am trying to use this repo for classification with 3 output nodes. However, in the fully connected layer I get a size-mismatch error. Can you help me with this issue?

    question 
    opened by narminGhaffari 2
  • add mu_kernel to delta_kernel in flipout layers?

    add mu_kernel to delta_kernel in flipout layers?

    Hi,

    Shouldn't mu_kernel be added to delta_kernel in the code below?

    Best, Lewis

    diff --git a/bayesian_torch/layers/flipout_layers/conv_flipout.py b/bayesian_torch/layers/flipout_layers/conv_flipout.py
    index 4b3e88d..719cfdc 100644
    --- a/bayesian_torch/layers/flipout_layers/conv_flipout.py
    +++ b/bayesian_torch/layers/flipout_layers/conv_flipout.py
    @@ -165,7 +165,7 @@ class Conv1dFlipout(BaseVariationalLayer_):
             sigma_weight = torch.log1p(torch.exp(self.rho_kernel))
             eps_kernel = self.eps_kernel.data.normal_()
     
    -        delta_kernel = (sigma_weight * eps_kernel)
    +        delta_kernel = (sigma_weight * eps_kernel) + self.mu_kernel 
     
             kl = self.kl_div(self.mu_kernel, sigma_weight, self.prior_weight_mu,
                              self.prior_weight_sigma)
    
    opened by burntcobalt 2
  • Kernel_size

    Kernel_size

    The Conv2dReparameterization layer only allows square kernels (e.g., 2×2). However, some CNN models have non-square kernels (e.g., in inceptionresnetv2, the kernel size in block17 is 1×7).

    So, I modified the code in conv_variational.py from:

            self.mu_kernel = Parameter(
                torch.Tensor(out_channels, in_channels // groups, kernel_size,
                             kernel_size))
            self.rho_kernel = Parameter(
                torch.Tensor(out_channels, in_channels // groups, kernel_size,
                             kernel_size))
            self.register_buffer(
                'eps_kernel',
                torch.Tensor(out_channels, in_channels // groups, kernel_size,
                             kernel_size),
                persistent=False)
            self.register_buffer(
                'prior_weight_mu',
                torch.Tensor(out_channels, in_channels // groups, kernel_size,
                             kernel_size),
                persistent=False)
            self.register_buffer(
                'prior_weight_sigma',
                torch.Tensor(out_channels, in_channels // groups, kernel_size,
                             kernel_size),
                persistent=False)
    

    to

            self.mu_kernel = Parameter(
                torch.Tensor(out_channels, in_channels // groups, kernel_size[0],
                             kernel_size[1]))
            self.rho_kernel = Parameter(
                torch.Tensor(out_channels, in_channels // groups, kernel_size[0],
                             kernel_size[1]))
            self.register_buffer(
                'eps_kernel',
                torch.Tensor(out_channels, in_channels // groups, kernel_size[0],
                             kernel_size[1]),
                persistent=False)
            self.register_buffer(
                'prior_weight_mu',
                torch.Tensor(out_channels, in_channels // groups, kernel_size[0],
                             kernel_size[1]),
                persistent=False)
            self.register_buffer(
                'prior_weight_sigma',
                torch.Tensor(out_channels, in_channels // groups, kernel_size[0],
                             kernel_size[1]),
                persistent=False)
    

    Also, kernel_size=d.kernel_size[0] was changed to kernel_size=d.kernel_size in dnn_to_cnn.py.

    enhancement 
    opened by flydephone 1
  • How can a Bayesian network be used in quantization?

    How can a Bayesian network be used in quantization?

    @peteriz @jpablomch @ranganathkrishnan We can get the $\mu$ and $\sigma$ of each weight, so how can they be used for quantization, i.e. mapping $w_{\mathrm{float32}}$ to $w_{\mathrm{int8}}$?

    enhancement 
    opened by LeopoldACC 1
Releases (v0.3.0)
  • v0.3.0 (Dec 14, 2022)

  • v0.2.0-alpha (Jan 27, 2022)

    Includes the new dnn_to_bnn feature: an API to convert a deterministic deep neural network (DNN) model of any architecture to a Bayesian deep neural network (BNN) model, simplifying the model definition, i.e. drop-in replacement of Convolutional, Linear and LSTM layers with the corresponding Bayesian layers. This enables seamless conversion of existing topologies of larger models to Bayesian deep neural network models, extending them towards uncertainty-aware applications.

  • v0.2.0 (Jan 27, 2022)

    Includes the new dnn_to_bnn feature: an API to convert a deterministic deep neural network (DNN) model of any architecture to a Bayesian deep neural network (BNN) model, simplifying the model definition, i.e. drop-in replacement of Convolutional, Linear and LSTM layers with the corresponding Bayesian layers. This enables seamless conversion of existing topologies of larger models to Bayesian deep neural network models, extending them towards uncertainty-aware applications.

    Full Changelog: https://github.com/IntelLabs/bayesian-torch/compare/v0.1...v0.2.0

Owner
Intel Labs