The code for the NeurIPS 2021 paper "A Unified View of cGANs with and without Classifiers".

Overview

Energy-based Conditional Generative Adversarial Network (ECGAN)

This is the code for the NeurIPS 2021 paper "A Unified View of cGANs with and without Classifiers". The repository is modified from StudioGAN. If you find our work useful, please consider citing the following paper:

@inproceedings{chen2021ECGAN,
  title   = {A Unified View of cGANs with and without Classifiers},
  author  = {Si-An Chen and Chun-Liang Li and Hsuan-Tien Lin},
  booktitle = {Advances in Neural Information Processing Systems},
  year    = {2021}
}

Please feel free to contact Si-An Chen if you have any questions about the code/paper.

Introduction

We propose a new Conditional Generative Adversarial Network (cGAN) framework called Energy-based Conditional Generative Adversarial Network (ECGAN), which provides a unified view of cGANs and achieves state-of-the-art results. We use the decomposition of the joint probability distribution to connect the goals of cGANs and classification in a unified framework. The framework, together with a classic energy model to parameterize distributions, justifies the use of classifiers for cGANs in a principled manner. It explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximation. An illustration of the framework is shown below.
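
As a rough sketch of how this decomposition can be realized (an illustration only, not the exact objective from the paper or this codebase), a discriminator that outputs one logit per class can play both roles: the log-sum-exp of the logits acts as an unconditional energy (realness) score, while a softmax over the same logits gives the class posterior used for the classification term. A minimal PyTorch illustration, with all names hypothetical:

    import torch
    import torch.nn.functional as F

    def joint_energy_terms(logits, labels):
        # logits: (batch, num_classes) per-class discriminator outputs f_y(x)
        # labels: (batch,) ground-truth class indices
        # Unconditional score: log p(x) is modeled, up to a constant, by logsumexp_y f_y(x).
        uncond_score = torch.logsumexp(logits, dim=1)
        # Conditional term: log p(y|x) is the softmax log-probability over the same logits.
        class_log_prob = -F.cross_entropy(logits, labels, reduction="none")
        # Joint score: log p(x, y) = log p(x) + log p(y|x), which reduces to the selected logit f_y(x).
        return uncond_score, class_log_prob, uncond_score + class_log_prob

Under this view, cGAN variants with and without classifiers differ mainly in which of these terms they keep or approximate.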

Requirements

  • Anaconda
  • Python >= 3.6
  • 6.0.0 <= Pillow <= 7.0.0
  • scipy == 1.1.0 (Recommended for fast loading of Inception Network)
  • sklearn
  • seaborn
  • h5py
  • tqdm
  • torch >= 1.6.0 (Recommended for mixed precision training and k-NN analysis)
  • torchvision >= 0.7.0
  • tensorboard
  • 5.4.0 <= gcc <= 7.4.0 (Recommended for proper use of the adaptive discriminator augmentation module)

You can install the recommended environment as follows:

conda env create -f environment.yml -n studiogan

With Docker, you can use:

docker pull mgkang/studiogan:0.1

Quick Start

  • Train (-t) and evaluate (-e) the model defined in CONFIG_PATH using GPU 0
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -e -c CONFIG_PATH
  • Train (-t) and evaluate (-e) the model defined in CONFIG_PATH using GPUs (0, 1, 2, 3) and DataParallel
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -e -c CONFIG_PATH

Try python3 src/main.py to see available options.

Dataset

  • CIFAR10: StudioGAN will automatically download the dataset once you execute main.py.

  • Tiny ImageNet, ImageNet, or a custom dataset:

    1. download Tiny ImageNet and ImageNet, or prepare your own dataset.
    2. organize the dataset with the following folder structure (a quick sanity check for custom layouts is sketched after the tree):
┌── docs
├── src
└── data
    └── ILSVRC2012 or TINY_ILSVRC2012 or CUSTOM
        ├── train
        │   ├── cls0
        │   │   ├── train0.png
        │   │   ├── train1.png
        │   │   └── ...
        │   ├── cls1
        │   └── ...
        └── valid
            ├── cls0
            │   ├── valid0.png
            │   ├── valid1.png
            │   └── ...
            ├── cls1
            └── ...

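If you prepare a custom dataset, the class-per-subfolder layout above matches torchvision's ImageFolder convention, so you can sanity-check it before training. A minimal sketch (paths and image size are only examples; StudioGAN's own data pipeline may apply different preprocessing):

    import torchvision.datasets as datasets
    import torchvision.transforms as transforms

    # Each of data/CUSTOM/train and data/CUSTOM/valid must contain one subfolder per class.
    transform = transforms.Compose([
        transforms.Resize(64),
        transforms.CenterCrop(64),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("data/CUSTOM/train", transform=transform)
    valid_set = datasets.ImageFolder("data/CUSTOM/valid", transform=transform)
    print(len(train_set), "training images in", len(train_set.classes), "classes")
    print(len(valid_set), "validation images in", len(valid_set.classes), "classes")
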
Examples and Results

The src/configs directory contains config files used in our experiments.

CIFAR10 (3x32x32)

To train and evaluate ECGAN-UC on CIFAR10:

python3 src/main.py -t -e -c src/configs/CIFAR10/ecgan_v2_none_0_0p01.json

Method | Reference | IS(⭡) | FID(⭣) | F_1/8(⭡) | F_8(⭡) | Cfg | Log | Weights
BigGAN-Mod | StudioGAN | 9.746 | 8.034 | 0.995 | 0.994 | - | - | -
ContraGAN | StudioGAN | 9.729 | 8.065 | 0.993 | 0.992 | - | - | -
Ours | - | 10.078 | 7.936 | 0.990 | 0.988 | Cfg | Log | Link

Tiny ImageNet (3x64x64)

To train and evaluate ECGAN-UC on Tiny ImageNet:

python3 src/main.py -t -e -c src/configs/TINY_ILSVRC2012/ecgan_v2_none_0_0p01.json --eval_type valid

Method | Reference | IS(⭡) | FID(⭣) | F_1/8(⭡) | F_8(⭡) | Cfg | Log | Weights
BigGAN-Mod | StudioGAN | 11.998 | 31.92 | 0.956 | 0.879 | - | - | -
ContraGAN | StudioGAN | 13.494 | 27.027 | 0.975 | 0.902 | - | - | -
Ours | - | 18.445 | 18.319 | 0.977 | 0.973 | Cfg | Log | Link

ImageNet (3x128x128)

To train and evaluate ECGAN-UCE on ImageNet (~12 days on 8 NVIDIA V100 GPUs):

python3 src/main.py -t -e -l -sync_bn -c src/configs/ILSVRC2012/imagenet_ecgan_v2_contra_1_0p05.json --eval_type valid

Method | Reference | IS(⭡) | FID(⭣) | F_1/8(⭡) | F_8(⭡) | Cfg | Log | Weights
BigGAN | StudioGAN | 28.633 | 24.684 | 0.941 | 0.921 | - | - | -
ContraGAN | StudioGAN | 25.249 | 25.161 | 0.947 | 0.855 | - | - | -
Ours | - | 80.685 | 8.491 | 0.984 | 0.985 | Cfg | Log | Link

Generated Images

Here are some selected images generated by ECGAN.
