PyTorch inference for "Progressive Growing of GANs" with CelebA snapshot

Overview

Progressive Growing of GANs inference in PyTorch with CelebA training snapshot

Description

This is an inference demo written in PyTorch, ported from the authors' original Theano/Lasagne code.

I recreated the network as described in the paper by Karras et al. Some of the required layers were not available in PyTorch, so these were implemented as well. The network and the custom layers can be found in model.py.
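One such layer is the pixelwise feature vector normalization from the paper, which rescales each feature vector to unit length across the channel dimension. A minimal sketch of the idea (the class name is illustrative; see model.py for the actual implementation):

    import torch
    import torch.nn as nn

    class PixelNorm(nn.Module):
        # Pixelwise feature vector normalization (Karras et al.): divide each
        # feature vector by its RMS over the channel dimension.
        def __init__(self, eps=1e-8):
            super(PixelNorm, self).__init__()
            self.eps = eps

        def forward(self, x):
            # x has shape (batch, channels, height, width)
            return x / torch.sqrt(torch.mean(x ** 2, dim=1, keepdim=True) + self.eps)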

For the demo, a 100-celeb-hq-1024x1024-ours snapshot was used, which was made publicly available by the authors. Since I couldn't find a model converter between Theano/Lasagne and PyTorch, I wrote a quick-and-dirty script to transfer the weights between the models (transfer_weights.py).

This repo does not provide the code for training the networks.

Simple inference

To run the demo, simply execute predict.py. You can specify other weights with the --weights flag.
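For example (the weight file name is illustrative):

    python predict.py --weights 100-celeb-hq-1024x1024-ours.pth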

Example image:

Example image

Latent space interpolation

To try the latent space interpolation, use latent_interp.py. All output images will be saved in ./interp.

You can choose between the "gaussian interpolation" introduced in the original paper and the "slerp interpolation" introduced by Tom White in his paper Sampling Generative Networks, selected via the --type argument.

Use --filter to set the gaussian filter length for the gaussian interpolation, and --interp to set the number of interpolation steps for the slerp interpolation.

The following arguments are defined:

  • --weights - path to pretrained PyTorch state dict
  • --output - directory for storing interpolated images
  • --batch_size - batch size for DataLoader
  • --num_workers - number of workers for DataLoader
  • --type {gauss, slerp} - interpolation type
  • --nb_latents - number of latent vectors to generate
  • --filter - gaussian filter length for interpolating latent space (gauss interpolation)
  • --interp - interpolation length between each latent vector (slerp interpolation)
  • --seed - random seed for numpy and PyTorch
  • --cuda - use GPU

The total number of generated frames depends on the interpolation technique used.

For the gaussian interpolation, the number of generated frames equals nb_latents; the slerp interpolation generates nb_latents * interp frames.
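A minimal sketch of both schemes, assuming a 512-dimensional latent space (see latent_interp.py for the actual implementation):

    import numpy as np
    import scipy.ndimage

    def gauss_latents(nb_latents, filter_len, dim=512):
        # gaussian interpolation: draw one latent per frame, smooth the
        # sequence over time with a gaussian filter, then rescale the result
        # back to roughly unit variance
        latents = np.random.randn(nb_latents, dim)
        latents = scipy.ndimage.gaussian_filter(latents, [filter_len, 0], mode='wrap')
        latents /= np.sqrt(np.mean(latents ** 2))
        return latents

    def slerp(t, p0, p1):
        # spherical linear interpolation between two latent vectors
        # (Tom White, "Sampling Generative Networks")
        omega = np.arccos(np.clip(np.dot(p0 / np.linalg.norm(p0),
                                         p1 / np.linalg.norm(p1)), -1.0, 1.0))
        so = np.sin(omega)
        if so == 0.0:
            return (1.0 - t) * p0 + t * p1  # degenerate case: fall back to lerp
        return np.sin((1.0 - t) * omega) / so * p0 + np.sin(t * omega) / so * p1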

Example interpolation:

Example interpolation

Live latent space interpolation

pygame_interp_demo.py provides a live demo of the latent space interpolation using PyGame.

Use the --size argument to change the output window size.
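For example (argument values are illustrative):

    python pygame_interp_demo.py --type slerp --size 512 --cuda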

The following arguments are defined:

  • --weights - path to pretrained PyTorch state dict
  • --num_workers - number of workers for DataLoader
  • --type {gauss, slerp} - interpolation type
  • --nb_latents - number of latent vectors to generate
  • --filter - gaussian filter length for interpolating latent space (gauss interpolation)
  • --interp - interpolation length between each latent vector (slerp interpolation)
  • --size - PyGame window size
  • --seed - random seed for numpy and PyTorch
  • --cuda - use GPU

Transferring weights

The pretrained Lasagne weights can be transferred to a PyTorch state dict using transfer_weights.py.

To transfer other snapshots from the paper (other than CelebA), you have to modify the model architecture accordingly and use the corresponding weights.
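The core idea is to read the parameter arrays out of the Lasagne network and copy them, in order, into the matching entries of the PyTorch state dict. A rough sketch of that idea (transfer_weights.py handles the actual layer-by-layer mapping):

    import numpy as np
    import torch
    from lasagne.layers import get_all_param_values

    def lasagne_to_state_dict(lasagne_output_layer, torch_model):
        # parameter arrays of the Lasagne network, in graph order
        values = get_all_param_values(lasagne_output_layer)
        state_dict = torch_model.state_dict()
        # copy each array into the state dict entry at the same position;
        # individual parameters may additionally need transposing or reshaping
        for (name, _), value in zip(state_dict.items(), values):
            state_dict[name] = torch.from_numpy(np.ascontiguousarray(value))
        return state_dict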

Environment

The code was tested on Ubuntu 16.04 with an NVIDIA GTX 1080 using PyTorch v0.2.0_4.

  • transfer_weights.py needs Theano and Lasagne to load the pretrained weights.
  • pygame_interp_demo.py needs PyGame to visualize the output.

A single forward pass took approx. 0.031 seconds.

License

This code is a modified form of the original code, which is released under the CC BY-NC license with the following copyright notice:

# Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
#
# This work is licensed under the Creative Commons Attribution-NonCommercial
# 4.0 International License. To view a copy of this license, visit
# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to
# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

In accordance with Section 3 of the license, I hereby identify Tero Karras et al. and NVIDIA as the original authors of the material.
