pyradox-generative

State of the Art Neural Networks for Generative Deep Learning


Table of Contents

Installation
Usage
    Vanilla GAN
    Conditional GAN
    Wasserstein GAN
    Variational Auto Encoder
    Style GAN
    Cycle GAN
Installation

pip install pyradox-generative

Usage

This library provides lightweight trainers for the following generative models: Vanilla GAN, Conditional GAN, Wasserstein GAN (with gradient penalty), Variational Auto Encoder, Style GAN, and Cycle GAN.

Vanilla GAN

Just provide your generator and discriminator, and train your GAN.

Data Preparation:

from pyradox_generative import GAN
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = x_train.reshape(-1, 28, 28, 1) * 2.0 - 1.0

dataset = tf.data.Dataset.from_tensor_slices(x_train)
dataset = dataset.shuffle(1024)
dataset = dataset.batch(32, drop_remainder=True).prefetch(1)

Define the generator and discriminator models:

generator = keras.models.Sequential(
    [
        keras.Input(shape=[28]),
        keras.layers.Dense(7 * 7 * 3),
        keras.layers.Reshape([7, 7, 3]),
        keras.layers.BatchNormalization(),
        keras.layers.Conv2DTranspose(
            32, kernel_size=3, strides=2, padding="same", activation="selu"
        ),
        keras.layers.Conv2DTranspose(
            1, kernel_size=3, strides=2, padding="same", activation="tanh"
        ),
    ],
    name="generator",
)

discriminator = keras.models.Sequential(
    [
        keras.layers.Conv2D(
            32,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
            input_shape=[28, 28, 1],
        ),
        keras.layers.Conv2D(
            3,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
        ),
        keras.layers.Flatten(),
        keras.layers.Dense(1, activation="sigmoid"),
    ],
    name="discriminator",
)

Plug the models into the trainer class and train them using the familiar compile and fit methods:

gan = GAN(discriminator=discriminator, generator=generator, latent_dim=28)
gan.compile(
    d_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    g_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    loss_fn=keras.losses.BinaryCrossentropy(),
)

history = gan.fit(dataset)
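After training, new images can be sampled by calling the generator directly (a minimal sketch; the rescaling follows from the tanh output defined above):

noise = tf.random.normal(shape=(16, 28))  # 16 latent vectors, latent_dim=28
generated = generator(noise, training=False)  # tanh output, values in [-1, 1]
images = (generated + 1.0) / 2.0  # rescale to [0, 1] for display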

Conditional GAN

Just provide your generator and discriminator, and train your conditional GAN.

Prepare the data and calculate the input and output dimensions of the generator and discriminator:

from pyradox_generative import ConditionalGAN
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

CODINGS_SIZE = 28  # size of the latent code
N_CHANNELS = 1  # grayscale MNIST images
N_CLASSES = 10  # digits 0-9
G_INP_CHANNELS = CODINGS_SIZE + N_CLASSES  # latent code concatenated with one-hot label
D_INP_CHANNELS = N_CHANNELS + N_CLASSES  # image channels concatenated with label maps

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = x_train.reshape(-1, 28, 28, 1) * 2.0 - 1.0
y_train = keras.utils.to_categorical(y_train, 10)

dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.shuffle(1024)
dataset = dataset.batch(32, drop_remainder=True).prefetch(1)

Define the generator and discriminator models:

generator = keras.models.Sequential(
    [
        keras.Input(shape=[G_INP_CHANNELS]),
        keras.layers.Dense(7 * 7 * 3),
        keras.layers.Reshape([7, 7, 3]),
        keras.layers.BatchNormalization(),
        keras.layers.Conv2DTranspose(
            32, kernel_size=3, strides=2, padding="same", activation="selu"
        ),
        keras.layers.Conv2DTranspose(
            1, kernel_size=3, strides=2, padding="same", activation="tanh"
        ),
    ],
    name="generator",
)

discriminator = keras.models.Sequential(
    [
        keras.layers.Conv2D(
            32,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
            input_shape=[28, 28, D_INP_CHANNELS],
        ),
        keras.layers.Conv2D(
            3,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
        ),
        keras.layers.Flatten(),
        keras.layers.Dense(1, activation="sigmoid"),
    ],
    name="discriminator",
)

Plug the models into the trainer class and train them using the familiar compile and fit methods:

gan = ConditionalGAN(
    discriminator=discriminator, generator=generator, latent_dim=CODINGS_SIZE
)
gan.compile(
    d_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    g_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    loss_fn=keras.losses.BinaryCrossentropy(),
)

history = gan.fit(dataset)
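Once trained, class-conditional samples can be drawn by concatenating a one-hot label to each latent code, matching the G_INP_CHANNELS input defined above (a minimal sketch calling the generator model directly; the trainer may expose its own sampling helper):

noise = tf.random.normal(shape=(N_CLASSES, CODINGS_SIZE))  # one latent code per class
labels = tf.one_hot(tf.range(N_CLASSES), N_CLASSES)  # one-hot label for each digit
generated = generator(tf.concat([noise, labels], axis=1), training=False)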

Wasserstein GAN

Just provide your generator and discriminator, and train your GAN.

Data Preparation:

from pyradox_generative import WGANGP
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = x_train.reshape(-1, 28, 28, 1) * 2.0 - 1.0

dataset = tf.data.Dataset.from_tensor_slices(x_train)
dataset = dataset.shuffle(1024)
dataset = dataset.batch(32, drop_remainder=True).prefetch(1)

Define the generator and discriminator models:

generator = keras.models.Sequential(
    [
        keras.Input(shape=[28]),
        keras.layers.Dense(7 * 7 * 3),
        keras.layers.Reshape([7, 7, 3]),
        keras.layers.BatchNormalization(),
        keras.layers.Conv2DTranspose(
            32, kernel_size=3, strides=2, padding="same", activation="selu"
        ),
        keras.layers.Conv2DTranspose(
            1, kernel_size=3, strides=2, padding="same", activation="tanh"
        ),
    ],
    name="generator",
)

discriminator = keras.models.Sequential(
    [
        keras.layers.Conv2D(
            32,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
            input_shape=[28, 28, 1],
        ),
        keras.layers.Conv2D(
            3,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
        ),
        keras.layers.Flatten(),
        keras.layers.Dense(1, activation="sigmoid"),
    ],
    name="discriminator",
)

Plug the models into the trainer class and train them using the familiar compile and fit methods:

gan = WGANGP(
    discriminator=discriminator,
    generator=generator,
    latent_dim=28,
    discriminator_extra_steps=1,
    gp_weight=10,
)
gan.compile(
    d_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    g_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
)

history = gan.fit(dataset)
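For reference, the penalty that gp_weight scales is the standard WGAN-GP term: the squared deviation of the critic's gradient norm from 1, measured at random interpolations between real and fake samples. A minimal sketch of that computation (illustrative only; the trainer implements this internally):

real = next(iter(dataset))  # one batch of real images, shape (32, 28, 28, 1)
fake = generator(tf.random.normal(shape=(32, 28)), training=False)
eps = tf.random.uniform([32, 1, 1, 1])  # per-sample interpolation factor
interpolated = eps * real + (1.0 - eps) * fake
with tf.GradientTape() as tape:
    tape.watch(interpolated)
    pred = discriminator(interpolated, training=True)
grads = tape.gradient(pred, interpolated)
norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
gradient_penalty = tf.reduce_mean((norm - 1.0) ** 2)  # scaled by gp_weight in the loss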

Variational Auto Encoder

Just provide your encoder and decoder, and train your VAE; latent sampling is done internally.
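Internally, a VAE samples the latent vector with the reparameterization trick, z = mean + sigma * epsilon, so gradients can flow through the sampling step. A sketch of the idea with placeholder tensors (the library's internals may differ):

import tensorflow as tf

# z_mean / z_log_var are placeholders here; in practice they come from
# dense heads attached to the encoder output.
z_mean = tf.zeros((1, 28))
z_log_var = tf.zeros((1, 28))
epsilon = tf.random.normal(shape=tf.shape(z_mean))
z = z_mean + tf.exp(0.5 * z_log_var) * epsilon  # differentiable sample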

Data Preparation:

from pyradox_generative import VAE
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = x_train.reshape(-1, 28, 28, 1) * 2.0 - 1.0

dataset = tf.data.Dataset.from_tensor_slices(x_train)
dataset = dataset.shuffle(1024)
dataset = dataset.batch(32, drop_remainder=True).prefetch(1)

Define the encoder and decoder models:

encoder = keras.models.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        keras.layers.Conv2D(32, 3, activation="relu", strides=2, padding="same"),
        keras.layers.Conv2D(64, 3, activation="relu", strides=2, padding="same"),
        keras.layers.Flatten(),
        keras.layers.Dense(16, activation="relu"),
    ],
    name="encoder",
)

decoder = keras.models.Sequential(
    [
        keras.Input(shape=(28,)),
        keras.layers.Dense(7 * 7 * 64, activation="relu"),
        keras.layers.Reshape((7, 7, 64)),
        keras.layers.Conv2DTranspose(64, 3, activation="relu", strides=2, padding="same"),
        keras.layers.Conv2DTranspose(32, 3, activation="relu", strides=2, padding="same"),
        keras.layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same"),
    ],
    name="decoder",
)

Plug the models into the trainer class and train them using the familiar compile and fit methods:

vae = VAE(encoder=encoder, decoder=decoder, latent_dim=28)
vae.compile(keras.optimizers.Adam(learning_rate=0.001))
history = vae.fit(dataset)
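After training, new digits can be generated by decoding random latent vectors (a minimal sketch; the shape matches latent_dim=28 above):

z = tf.random.normal(shape=(16, 28))  # 16 random latent vectors
samples = decoder(z, training=False)  # sigmoid output, values in [0, 1]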

Style GAN

Just provide the resolution schedule and per-resolution filter counts, and the trainer builds and trains the StyleGAN generator and discriminator for you.

Data Preparation:

from pyradox_generative import StyleGAN
import numpy as np
import tensorflow as tf
from functools import partial

def resize_image(res, image):
    # only downsampling, so use nearest neighbor, which is faster to run
    image = tf.image.resize(
        image, (res, res), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR
    )
    image = tf.cast(image, tf.float32) / 127.5 - 1.0
    return image


def create_dataloader(res):
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[:100, :, :]
    x_train = np.pad(x_train, [(0, 0), (2, 2), (2, 2)], mode="constant")
    x_train = tf.image.grayscale_to_rgb(tf.expand_dims(x_train, axis=3), name=None)
    x_train = tf.data.Dataset.from_tensor_slices(x_train)

    batch_size = 32
    dl = x_train.map(partial(resize_image, res), num_parallel_calls=tf.data.AUTOTUNE)
    dl = dl.shuffle(200).batch(batch_size, drop_remainder=True).prefetch(1).repeat()
    return dl

Define the model by providing the number of filters for each resolution (log2):

gan = StyleGAN(
    target_res=32,
    start_res=4,
    filter_nums={0: 32, 1: 32, 2: 32, 3: 32, 4: 32, 5: 32},
)
opt_cfg = {"learning_rate": 1e-3, "beta_1": 0.0, "beta_2": 0.99, "epsilon": 1e-8}

start_res_log2 = 2  # 2 ** 2 = 4x4 starting resolution
target_res_log2 = 5  # 2 ** 5 = 32x32 target resolution

Train the StyleGAN progressively; at each resolution the TRANSITION phase fades in the newly added layers, and the STABLE phase then trains them at full strength:

for res_log2 in range(start_res_log2, target_res_log2 + 1):
    res = 2 ** res_log2
    for phase in ["TRANSITION", "STABLE"]:
        if res == 4 and phase == "TRANSITION":
            continue

        train_dl = create_dataloader(res)

        steps = 10

        gan.compile(
            d_optimizer=tf.keras.optimizers.Adam(**opt_cfg),
            g_optimizer=tf.keras.optimizers.Adam(**opt_cfg),
            loss_weights={"gradient_penalty": 10, "drift": 0.001},
            steps_per_epoch=steps,
            res=res,
            phase=phase,
            run_eagerly=False,
        )

        print(phase)
        history = gan.fit(train_dl, epochs=1, steps_per_epoch=steps)

Cycle GAN

Just provide your two generators and two discriminators, and train your CycleGAN.

Data Preparation:

import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
from pyradox_generative import CycleGAN

tfds.disable_progress_bar()
autotune = tf.data.AUTOTUNE
orig_img_size = (286, 286)
input_img_size = (256, 256, 3)


def normalize_img(img):
    img = tf.cast(img, dtype=tf.float32)
    return (img / 127.5) - 1.0


def preprocess_train_image(img, label):
    img = tf.image.random_flip_left_right(img)
    img = tf.image.resize(img, [*orig_img_size])
    img = tf.image.random_crop(img, size=[*input_img_size])
    img = normalize_img(img)
    return img


def preprocess_test_image(img, label):
    img = tf.image.resize(img, [input_img_size[0], input_img_size[1]])
    img = normalize_img(img)
    return img

train_horses, _ = tfds.load(
    "cycle_gan/horse2zebra", with_info=True, as_supervised=True, split="trainA[:5%]"
)
train_zebras, _ = tfds.load(
    "cycle_gan/horse2zebra", with_info=True, as_supervised=True, split="trainB[:5%]"
)

buffer_size = 256
batch_size = 1

train_horses = (
    train_horses.map(preprocess_train_image, num_parallel_calls=autotune)
    .cache()
    .shuffle(buffer_size)
    .batch(batch_size)
)
train_zebras = (
    train_zebras.map(preprocess_train_image, num_parallel_calls=autotune)
    .cache()
    .shuffle(buffer_size)
    .batch(batch_size)
)

Define the generator and discriminator models:

def build_generator(name):
    return keras.models.Sequential(
        [
            keras.layers.Input(shape=input_img_size),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.Conv2D(3, 3, activation="tanh", padding="same"),
        ],
        name=name,
    )


def build_discriminator(name):
    return keras.models.Sequential(
        [
            keras.layers.Input(shape=input_img_size),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.MaxPooling2D(pool_size=2, strides=2),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.MaxPooling2D(pool_size=2, strides=2),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.MaxPooling2D(pool_size=2, strides=2),
            keras.layers.Conv2D(1, 3, activation="relu", padding="same"),
        ],
        name=name,
    )

Plug the models into the trainer class and train them using the familiar compile and fit methods:

gan = CycleGAN(
    generator_g=build_generator("gen_G"),
    generator_f=build_generator("gen_F"),
    discriminator_x=build_discriminator("disc_X"),
    discriminator_y=build_discriminator("disc_Y"),
)

gan.compile(
    gen_g_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
    gen_f_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
    disc_x_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
    disc_y_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
)

history = gan.fit(
    tf.data.Dataset.zip((train_horses, train_zebras)),
)
