Implicit Deep Adaptive Design (iDAD)

This code supports the NeurIPS paper 'Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods'.

@article{ivanova2021implicit,
  title={Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods},
  author={Ivanova, Desi R. and Foster, Adam and Kleinegesse, Steven and Gutmann, Michael and Rainforth, Tom},
  journal={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

Computing infrastructure requirements

We have tested this codebase on Linux (Ubuntu x86_64) and macOS (Big Sur v11.2.3) with Python 3.8. To train iDAD networks, we recommend the use of a GPU. We used one GeForce RTX 3090 GPU on a machine with 126 GiB of CPU memory and 40 CPU cores.
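
Once the environment described below is installed, a quick generic check (not part of the original instructions) that PyTorch can see the GPU is:

python3 -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"

This should print the CUDA version together with True on a correctly configured machine.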

Installation

  1. Ensure that Python and conda are installed.
  2. Create and activate a new conda virtual environment as follows
conda create -n idad_code
conda activate idad_code
  3. Install the correct version of PyTorch, following the instructions at pytorch.org. For our experiments we used torch==1.8.0 with CUDA version 11.1.
  4. Install the remaining package requirements using pip install -r requirements.txt.
  5. Install the torchsde package from its repository: pip install git+https://github.com/google-research/torchsde.git.
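
Putting the steps above together, a typical installation sequence looks roughly as follows; the PyTorch line is only an example for torch==1.8.0 with CUDA 11.1, so follow the instructions at pytorch.org for the command matching your system.

# create and activate the environment (Python 3.8 was used for testing)
conda create -n idad_code python=3.8
conda activate idad_code
# example PyTorch install for CUDA 11.1 only; see pytorch.org for your setup
pip install torch==1.8.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# remaining requirements and the torchsde package
pip install -r requirements.txt
pip install git+https://github.com/google-research/torchsde.git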

MLflow

We use MLflow to log metrics and store network parameters. Each experiment run is stored in a directory mlruns, which will be created automatically. Each experiment is assigned a numerical <ID> and each run gets a unique <RUN_ID>. The iDAD networks will be saved in ./mlruns/<ID>/<RUN_ID>/artifacts, a path which will be printed at the end of each training run.
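
To browse the logged metrics and stored artifacts interactively, the standard MLflow UI can be launched from the repository root (this is generic MLflow usage rather than something provided by this codebase):

mlflow ui

By default the dashboard is served at http://localhost:5000, where the experiment <ID> and run directories listed above can be looked up.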

Location Finding Experiment

To train an iDAD network with the InfoNCE bound to locate 2 sources in 2D, using the approach in the paper, execute the command

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0005 \
    --num-experiments 10 \
    --encoding-dim 64 \
    --hidden-dim 512 \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To train an iDAD network with the NWJ bound, using the approach in the paper, execute the command

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0005 \
    --num-experiments 10 \
    --encoding-dim 64 \
    --hidden-dim 512 \
    --mi-estimator NWJ \
    --device <DEVICE>

To run the static MINEBED baseline, use the following

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0001 \
    --num-experiments 10 \
    --encoding-dim 8 \
    --hidden-dim 512 \
    --design-arch static \
    --critic-arch cat \
    --mi-estimator NWJ \
    --device <DEVICE>

To run the static SG-BOED baseline, use the following

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0005 \
    --num-experiments 10 \
    --encoding-dim 8 \
    --hidden-dim 512 \
    --design-arch static \
    --critic-arch cat \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To run the adaptive (explicit likelihood) DAD baseline, use the following

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0005 \
    --num-experiments 10 \
    --encoding-dim 32 \
    --hidden-dim 512 \
    --mi-estimator sPCE \
    --design-arch sum \
    --device <DEVICE>

To evaluate the resulting networks, run the following command

python3 eval_sPCE.py --experiment-id <ID>

To evaluate a random design baseline (requires no pre-training):

python3 baselines_locfin_nontrainable.py \
    --policy random \
    --physical-dim 2 \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

To run the variational baseline (note: this takes a very long time), use:

python3 baselines_locfin_variational.py \
    --num-histories 128 \
    --num-experiments 10 \
    --physical-dim 2 \
    --lr 0.001 \
    --num-steps 5000 \
    --device <DEVICE>

Copy path_to_artifact and pass it to the evaluation script:

python3 eval_sPCE_from_source.py \
    --path-to-artifact <path_to_artifact> \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

Pharmacokinetic Experiment

To train an iDAD network with the InfoNCE bound, using the approach in the paper, execute the command

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.0001 \
    --num-experiments 5 \
    --encoding-dim 32 \
    --hidden-dim 512 \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To train an iDAD network with the NWJ bound, using the approach in the paper, execute the command

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.0001 \
    --num-experiments 5 \
    --encoding-dim 32 \
    --hidden-dim 512 \
    --mi-estimator NWJ \
    --gamma 0.5 \
    --device <DEVICE>

To run the static MINEBED baseline, use the following

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.001 \
    --num-experiments 5 \
    --encoding-dim 8 \
    --hidden-dim 512 \
    --design-arch static \
    --critic-arch cat \
    --mi-estimator NWJ \
    --device <DEVICE>

To run the static SG-BOED baseline, use the following

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.0005 \
    --num-experiments 5 \
    --encoding-dim 8 \
    --hidden-dim 512 \
    --design-arch static \
    --critic-arch cat \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To run the adaptive (explicit likelihood) DAD baseline, use the following

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.0001 \
    --num-experiments 5 \
    --encoding-dim 32 \
    --hidden-dim 512 \
    --mi-estimator sPCE \
    --design-arch sum \
    --device <DEVICE>

To evaluate the resulting networks, run the following command

python3 eval_sPCE.py --experiment-id <ID>

To evaluate a random design baseline (requires no pre-training):

python3 baselines_pharmaco_nontrainable.py \
    --policy random \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

To evaluate an equal interval baseline (requires no pre-training):

python3 baselines_pharmaco_nontrainable.py \
    --policy equal_interval \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

To run the variational baseline (note: this takes a very long time), use:

python3 baselines_pharmaco_variational.py \
    --num-histories 128 \
    --num-experiments 10 \
    --lr 0.001 \
    --num-steps 5000 \
    --device <DEVICE>

Copy path_to_artifact and pass it to the evaluation script:

python3 eval_sPCE_from_source.py \
    --path-to-artifact <path_to_artifact> \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

SIR experiment

For the SIR experiments, please first generate an initial training set and a test set:

python3 epidemic_simulate_data.py \
    --num-samples=100000 \
    --device <DEVICE>

To train an iDAD network with the InfoNCE bound, using the approach in the paper, execute the command

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.0005 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --mi-estimator InfoNCE \
    --design-transform ts \
    --device <DEVICE>

To train an iDAD network with the NWJ bound, execute the command

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.0005 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --mi-estimator NWJ \
    --design-transform ts \
    --device <DEVICE>

To run the static SG-BOED baseline, run

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.005 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --design-arch static \
    --critic-arch cat \
    --design-transform iid \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To run the static MINEBED baseline, run

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.001 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --design-arch static \
    --critic-arch cat \
    --design-transform iid \
    --mi-estimator NWJ \
    --device <DEVICE>

To train a critic with random designs (to evaluate the random design baseline):

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.005 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --design-arch random \
    --critic-arch cat \
    --design-transform iid \
    --device <DEVICE>

To train a critic with equal interval designs, which is then used to evaluate the equal interval baseline, run the following

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.001 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --design-arch equal_interval \
    --critic-arch cat \
    --design-transform iid \
    --device <DEVICE>

Finally, to evaluate the different methods, run

python3 eval_epidemic.py \
    --experiment-id <ID> \
    --device <DEVICE>