Official PyTorch implementation of Spatial Dependency Networks.

Overview

Spatial Dependency Networks: Neural Layers for Improved Generative Image Modeling



Example of SDN-VAE generated images.

Method Description

The spatial dependency network (SDN) is a novel neural architecture. It is based on spatial dependency layers, which are designed for stacking in deep neural networks that produce images, e.g. generative models such as VAEs or GANs, or segmentation, super-resolution, and image-to-image translation networks. SDNs improve upon celebrated CNNs by explicitly modeling spatial dependencies between feature vectors at each level of a deep neural network pipeline. Spatial dependency layers (i) explicitly introduce the inductive bias of spatial coherence; and (ii) offer improved modeling of long-range dependencies due to their unbounded receptive field. We applied SDN to two variants of VAE: one that we used to model image density (SDN-VAE) and one that we used to learn better-disentangled representations. More generally, spatial dependency layers can be used as a drop-in replacement for convolutional layers in any image-generation-related task.

Graphical model of SDN layer.
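To make the idea concrete, below is a deliberately naive, illustrative sketch of a single-direction spatial dependency sweep: each feature vector is conditioned on its already-computed upper and left neighbors, which is what gives the layer an unbounded receptive field. This toy is our own simplification (the class name, the tanh update, and the single top-left-to-bottom-right sweep are all assumptions); the actual, efficient, multi-directional parameterization lives in 'lib/nn.py'.

import torch
import torch.nn as nn

class ToySpatialDependencyLayer(nn.Module):
    """Toy single-direction spatial dependency sweep (illustrative only).

    Each position (i, j) is computed from the input feature at (i, j) and the
    already-computed hidden states above and to the left, so information can
    flow across the whole feature map (unbounded receptive field). The real
    SDNLayer in lib/nn.py uses a different, more efficient parameterization.
    """

    def __init__(self, in_channels: int, hidden_channels: int):
        super().__init__()
        self.proj_in = nn.Linear(in_channels, hidden_channels)
        self.proj_up = nn.Linear(hidden_channels, hidden_channels, bias=False)
        self.proj_left = nn.Linear(hidden_channels, hidden_channels, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> (B, H, W, C), so Linear acts on the channel dim
        x = x.permute(0, 2, 3, 1)
        rows = []
        for i in range(x.size(1)):
            row = []
            for j in range(x.size(2)):
                pre = self.proj_in(x[:, i, j])
                if i > 0:
                    pre = pre + self.proj_up(rows[i - 1][j])   # neighbor above
                if j > 0:
                    pre = pre + self.proj_left(row[j - 1])     # neighbor to the left
                row.append(torch.tanh(pre))
            rows.append(row)
        # stack back into (B, hidden_channels, H, W)
        out = torch.stack([torch.stack(r, dim=1) for r in rows], dim=1)
        return out.permute(0, 3, 1, 2)

For example, ToySpatialDependencyLayer(64, 64)(torch.randn(2, 64, 8, 8)) returns a tensor of shape (2, 64, 8, 8), in which the bottom-right feature vector depends on every input position.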

Code Structure

.
├── checkpoints/               # where the model checkpoints will be stored
├── data/
│   ├── ImageNet32/            # where ImageNet32 data is stored
│   ├── CelebAHQ256/           # where CelebAHQ256 data is stored
│   ├── 3DShapes/              # where 3DShapes data is stored
│   ├── lmdb_datasets.py       # LMDB data loading borrowed from https://github.com/NVlabs/NVAE
│   └── get_dataset.py         # auxiliary script for fetching data sets
├── figs/                      # figures from the paper
├── lib/
│   ├── DensityVAE             # SDN-VAE which we used for density estimation
│   ├── DisentanglementVAE     # VAE which we used for disentanglement
│   ├── nn.py                  # the script which contains SDN and other neural net modules
│   ├── probability.py         # probability models
│   └── utils.py               # utility functions
├── train.py                   # generic training script
├── evaluate.py                # the script for evaluation of trained models
├── train_cifar.sh             # for reproducing CIFAR10 experiments
├── train_celeb.sh             # for reproducing CelebAHQ256 experiments
├── train_imagenet.sh          # for reproducing ImageNet32 experiments
├── train_3dshapes.sh          # for reproducing 3DShapes experiments
├── requirements.txt
├── LICENSE
└── README.md

Applying SDN layers to your neural network

To apply SDN layers in your framework, it is sufficient to integrate the 'lib/nn.py' file into your code. You can then import and use SDNLayer or ResSDNLayer (the residual variant) in the same way a convolutional layer is used. Apart from PyTorch, no additional packages are required.
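A minimal usage sketch follows. The import path assumes you copied 'lib/nn.py' into your project; the constructor arguments shown are assumptions made for illustration, so check the signatures in 'lib/nn.py' for the real ones.

import torch.nn as nn
from lib.nn import SDNLayer  # after integrating lib/nn.py into your code

class SmallDecoderBlock(nn.Module):
    def __init__(self):
        super().__init__()
        # Before: self.layer = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        # After: an SDN layer in its place. NOTE: the argument names below are
        # assumptions; consult lib/nn.py for the actual constructor signature.
        self.layer = SDNLayer(in_channels=64, out_channels=64, num_directions=2)

    def forward(self, x):
        return self.layer(x)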

Tips & Tricks

If you would like to integrate SDN into your neural network, we recommend the following:

  • First design and debug your framework using vanilla CNN layers.
  • Replace CNN layers one by one, starting at the lowest scale, e.g. 4x4 or 8x8, to speed up debugging (see the sketch after this list).
  • Start with 1 or 2 directions, and only later try using 4 directions.
  • A larger number of features per SDN layer yields a more expressive model, which is more powerful but also more prone to overfitting.
  • A good rule of thumb is to use a smaller number of SDN features on smaller scales and a larger number on larger scales.
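As a sketch of the replacement strategy above (hypothetical channel sizes; the ResSDNLayer argument names are assumptions, and the real signatures are in 'lib/nn.py'):

import torch.nn as nn
from lib.nn import ResSDNLayer  # residual variant of the SDN layer

# Hypothetical decoder skeleton: swap in SDN only at the lowest scale first,
# with 2 directions, and keep vanilla convolutions elsewhere while debugging.
blocks = nn.ModuleDict({
    "scale_4x4": ResSDNLayer(in_channels=32, out_channels=32, num_directions=2),  # assumed args
    "scale_16x16": nn.Conv2d(64, 64, kernel_size=3, padding=1),    # replace next
    "scale_64x64": nn.Conv2d(128, 128, kernel_size=3, padding=1),  # replace last
})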

Reproducing the experiments from the paper

Common to all experiments, you will need to install PyTorch and PyTorch Lightning. The default logging system is based on Wandb, but this can be changed in 'train.py'. If you decide to use Wandb, you will need to install it and log in to your account by following the simple procedure described in the Wandb documentation. To reproduce the density estimation experiments you will need 8 Tesla V100 GPUs with 32GB of memory each. One way to alleviate the high memory requirements is to accumulate gradient batches; however, training will take much longer in that case. By default, you will need hardware that supports automatic mixed precision. If your hardware does not support it, you will need to reduce the batch size; note that the results will slightly deteriorate, and that you may also need to reduce the learning rate to avoid NaN values. For the disentanglement experiments, you will need a single GPU with >10GB of memory. To install all the requirements use:

pip install -r requirements.txt
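If memory is the bottleneck, gradient accumulation and mixed precision are exposed as standard PyTorch Lightning Trainer arguments; here is a hedged sketch (how 'train.py' actually wires these up may differ, so treat it as illustrative):

import pytorch_lightning as pl

# Illustrative only: standard Lightning Trainer flags for the memory-saving
# options mentioned above; the exact configuration in train.py may differ.
trainer = pl.Trainer(
    gpus=8,                     # density estimation setup used in the paper
    precision=16,               # automatic mixed precision
    accumulate_grad_batches=2,  # accumulate gradients over 2 batches; pair with a
                                # smaller per-GPU batch size to reduce memory while
                                # keeping the effective batch size unchanged
)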

Note of caution: ensure the right version of PyTorch Lightning is used. We encountered multiple issues with newer versions.

CIFAR10

The data will be downloaded automatically through PyTorch. To run the baselines that reproduce the results from the paper, use:

bash train_cifar.sh

ImageNet32

To obtain the dataset, go into the folder 'data/ImageNet32' and run:

bash get_imagenet_data.sh

To reproduce the experiments run:

bash train_imagenet.sh

CelebAHQ256

To obtain the dataset, go into the folder 'data/CelebAHQ256' and run:

bash get_celeb_data.sh

The script is adapted from the NVAE repo and is based on the GLOW dataset. To reproduce the experiments, run:

bash train_celeb.sh

3DShapes

To obtain the dataset, follow the instructions in this GitHub repo and place it into the 'data/3DShapes' directory. To reproduce the experiments, run:

bash train_3dshapes.sh

Evaluation of trained models

To perform post hoc evaluation of your trained models, use the 'evaluate.py' script and select the flags corresponding to the evaluation task and the model you want to use. The evaluation can be performed on a single GPU of any type, though note that the batch size needs to be adjusted depending on the available GPU memory. For the CelebAHQ256 dataset, you can download a checkpoint containing one of the pre-trained models used in the paper from this link. For example, you can evaluate the ELBO and generate random samples by running:

python3 evaluate.py --model CelebAHQ256 --elbo --sampling

Citation

Please cite our paper if you use our code or if you re-implement our method:

@conference{miladinovic21sdn,
  title = {Spatial Dependency Networks: Neural Layers for Improved Generative Image Modeling},
  author = {Miladinović, {\DJ}or{\dj}e and Stanić, Aleksandar and Bauer, Stefan and Schmidhuber, J{\"u}rgen and Buhmann, Joachim M.},
  booktitle = {9th International Conference on Learning Representations (ICLR 2021)},
  month = may,
  year = {2021}
}

Note that you might need to include the following line in your LaTeX file:

\usepackage[T1]{fontenc}