The Neural Testbed

Introduction

Posterior predictive distributions quantify uncertainties ignored by point estimates. The neural_testbed provides tools for the systematic evaluation of agents that generate such predictions. Crucially, these tools assess not only the quality of marginal predictions per input, but also the quality of joint predictions across many inputs. Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community.

This library automates the evaluation and analysis of learning agents:

  • Provides a synthetic neural-network-based generative model.
  • Evaluates predictions beyond marginal distributions.
  • Includes reference implementations of benchmark agents (with tuning).

For a more comprehensive overview, see the accompanying paper.

Technical overview

We outline the key high-level interfaces for our code in base.py:

  • EpistemicSampler: Generates a random sample from an agent's predictive distribution.
  • TestbedAgent: Given data and prior_knowledge, outputs an EpistemicSampler.
  • TestbedProblem: Reveals train_data and prior_knowledge. Can evaluate the quality of an EpistemicSampler.
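
To make these concrete, here is a minimal sketch of what the interfaces could look like as Python protocols. This is an illustration based on the descriptions above, not the actual definitions: the real signatures live in base.py, and the type annotations here (chex arrays, a JAX PRNG key) are assumptions.

# Sketch only: the real interfaces are defined in base.py.
from typing_extensions import Protocol

import chex


class EpistemicSampler(Protocol):
  """One call = one random sample from the agent's predictive distribution."""

  def __call__(self, x: chex.Array, key: chex.PRNGKey) -> chex.Array:
    ...


class TestbedAgent(Protocol):
  """Consumes data and prior knowledge, returns an EpistemicSampler."""

  def __call__(self, data, prior_knowledge) -> EpistemicSampler:
    ...


class TestbedProblem(Protocol):
  """Reveals training data and scores an EpistemicSampler."""

  @property
  def train_data(self):  # The training data revealed to the agent.
    ...

  @property
  def prior_knowledge(self):  # Prior knowledge about the generative process.
    ...

  def evaluate_quality(self, enn_sampler: EpistemicSampler):
    """Scores the quality of the sampler's (joint) predictions."""
    ...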

If you want to evaluate your algorithm on the testbed, you simply need to define your TestbedAgent and then run it through our experiment.py:

def run(agent: testbed_base.TestbedAgent,
        problem: testbed_base.TestbedProblem) -> testbed_base.ENNQuality:
  """Run an agent on a given testbed problem."""
  enn_sampler = agent(problem.train_data, problem.prior_knowledge)
  return problem.evaluate_quality(enn_sampler)
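
As a sanity check, you can define a trivial TestbedAgent and pass it to run above. The example below is a sketch only: the (x, key) sampler signature and the num_classes field on prior_knowledge are assumptions carried over from the sketch above, so verify them against base.py.

import jax

# Assumes base.py imports as testbed_base, as in the run() snippet above.
from neural_testbed import base as testbed_base


def random_agent(data, prior_knowledge) -> testbed_base.EpistemicSampler:
  """Toy agent: ignores the data and samples i.i.d. Gaussian logits."""
  del data  # A real agent would fit a model to the training data here.

  def enn_sampler(x, key):
    # One random "posterior sample" of class logits per batch of inputs.
    # num_classes as a field of prior_knowledge is an assumption.
    return jax.random.normal(key, (x.shape[0], prior_knowledge.num_classes))

  return enn_sampler

Such an agent should score poorly, but it exercises the full evaluation loop: quality = run(random_agent, problem).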

The neural_testbed takes care of evaluation and logging within the TestbedProblem, so the experiment automatically outputs data in the correct format. This makes it easy to compare results across codebases and frameworks, letting you focus on agent design.

How do I get started?

If you are new to neural_testbed, you can get started with our colab tutorial. This Jupyter notebook is hosted on a free cloud server, so you can start coding right away without installing anything on your machine. After that, follow the instructions below to get neural_testbed running on your local machine:

Installation

We have tested neural_testbed on Python 3.7. To install the dependencies:

  1. Optional: We recommend using a Python virtual environment to manage your dependencies, so as not to clobber your system installation:

    python3 -m venv neural_testbed
    source neural_testbed/bin/activate
    pip install --upgrade pip setuptools
  2. Install neural_testbed directly from GitHub:

    git clone https://github.com/deepmind/neural_testbed.git
    cd neural_testbed
    pip install .
  3. Optional: run the tests by executing ./test.sh from the neural_testbed main directory.

Baseline agents

In addition to our testbed code, we release a collection of benchmark agents. These include the full sets of hyperparameter sweeps necessary to reproduce the paper's results, and can serve as a great starting point for new research. You can have a look at these implementations in the agents/factories/ folder.

We recommend you get started with our colab tutorial. After installation, you can also run an agent directly by executing the following command from the main directory of neural_testbed:

python -m neural_testbed.experiments.run --agent_name=mlp

By default, this will save the results for that agent as CSV files under /tmp/neural_testbed. You can control these options via flags in the run file. In particular, you can run the agent on the whole sweep of tasks in the Neural Testbed by specifying the flag --problem_id=SWEEP.
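
For example, combining the two flags above, the following command evaluates the mlp agent across the full task sweep and writes the results to /tmp/neural_testbed:

python -m neural_testbed.experiments.run --agent_name=mlp --problem_id=SWEEP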

Citing

If you use neural_testbed in your work, please cite the accompanying paper:

@misc{osband2021evaluating,
      title={Evaluating Predictive Distributions: Does Bayesian Deep Learning Work?},
      author={Ian Osband and Zheng Wen and Seyed Mohammad Asghari and Vikranth Dwaracherla and Botao Hao and Morteza Ibrahimi and Dieterich Lawson and Xiuyuan Lu and Brendan O'Donoghue and Benjamin Van Roy},
      year={2021},
      eprint={2110.04629},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}