PyTorch framework for Deep Learning research and development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write another regular train loop.
Break the cycle - use the Catalyst!
Project manifest. Part of PyTorch Ecosystem. Part of Catalyst Ecosystem:
- Alchemy - experiments logging & visualization
- Catalyst - accelerated deep learning R&D
- Reaction - convenient deep learning models serving
Getting started
pip install -U catalyst
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.data.cv import ToTensor
from catalyst.contrib.datasets import MNIST
model = torch.nn.Linear(28 * 28, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loaders = {
"train": DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()), batch_size=32),
"valid": DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=ToTensor()), batch_size=32),
}
class CustomRunner(dl.Runner):
def predict_batch(self, batch):
# model inference step
return self.model(batch[0].to(self.device).view(batch[0].size(0), -1))
def _handle_batch(self, batch):
# model train/valid step
x, y = batch
y_hat = self.model(x.view(x.size(0), -1))
loss = F.cross_entropy(y_hat, y)
accuracy01, accuracy03 = metrics.accuracy(y_hat, y, topk=(1, 3))
self.batch_metrics.update(
{"loss": loss, "accuracy01": accuracy01, "accuracy03": accuracy03}
)
if self.is_train_loader:
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
runner = CustomRunner()
# model training
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
logdir="./logs",
num_epochs=5,
verbose=True,
load_best_on_end=True,
)
# model inference
for prediction in runner.predict_loader(loader=loaders["valid"]):
assert prediction.detach().cpu().numpy().shape[-1] == 10
# model tracing
traced_model = runner.trace(loader=loaders["valid"])
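The traced model can then be saved and reloaded with plain TorchScript, without any Catalyst dependency. A minimal sketch, assuming runner.trace returns a torch.jit.ScriptModule (the output path is illustrative):
# persist the traced model for deployment (path is illustrative)
torch.jit.save(traced_model, "./logs/traced_model.pth")
# load it back with plain PyTorch, no Catalyst required
loaded_model = torch.jit.load("./logs/traced_model.pth")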
Step by step guide
- Start with Catalyst 101 — Accelerated PyTorch introduction.
- Check minimal examples.
- Try notebook tutorials with Google Colab.
- Read blogposts with use-cases and guides.
- Learn machine learning with our "Deep Learning with Catalyst" course.
- If you would like to contribute to the project, follow our contribution guidelines.
- If you want to support the project, feel free to donate via our Patreon page or write to us with your proposals.
- And do not forget to join our Slack for collaboration.
Overview
Catalyst helps you write compact but full-featured Deep Learning pipelines in a few lines of code. You get a training loop with metrics, early-stopping, model checkpointing and other features without the boilerplate.
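For instance, checkpointing is enabled automatically as soon as a logdir is passed, and early stopping is a single callback. A minimal sketch, assuming model, criterion, optimizer and loaders are already defined (callback names and arguments follow the 20.x Notebook API and may differ between versions):
from catalyst import dl
runner = dl.SupervisedRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    logdir="./logdir",  # enables model checkpointing and logging
    num_epochs=42,
    callbacks=[
        # stop training when the validation loss has not improved for 3 epochs
        dl.EarlyStoppingCallback(patience=3, metric="loss", minimize=True),
    ],
)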
Installation
Common installation:
pip install -U catalyst
Specific versions with additional requirements:
pip install catalyst[ml] # installs ML-based Catalyst
pip install catalyst[cv] # installs CV-based Catalyst
pip install catalyst[nlp] # installs NLP-based Catalyst
pip install catalyst[tune] # installs Catalyst+Optuna
pip install catalyst[ecosystem] # installs Catalyst.Ecosystem
# master version installation
pip install git+https://github.com/catalyst-team/catalyst@master --upgrade
Catalyst is compatible with Python 3.6+ and PyTorch 1.1+.
Tested on Ubuntu 16.04/18.04/20.04, macOS 10.15, Windows 10 and Windows Subsystem for Linux.
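To verify the installation, you can print the installed package version:
python -c "import catalyst; print(catalyst.__version__)"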
Minimal Examples
ML - linear regression
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst.dl import SupervisedRunner
# data
num_samples, num_features = int(1e4), int(1e1)
X, y = torch.rand(num_samples, num_features), torch.rand(num_samples)
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}
# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])
# model training
runner = SupervisedRunner()
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
loaders=loaders,
logdir="./logdir",
num_epochs=8,
verbose=True,
)
ML - multiclass classification
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl
# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples, ) * num_classes).to(torch.int64)
# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}
# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])
# model training
runner = dl.SupervisedRunner()
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
loaders=loaders,
logdir="./logdir",
num_epochs=3,
callbacks=[dl.AccuracyCallback(num_classes=num_classes)]
)
ML - multilabel classification
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl
# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples, num_classes) > 0.5).to(torch.float32)
# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}
# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])
# model training
runner = dl.SupervisedRunner()
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
loaders=loaders,
logdir="./logdir",
num_epochs=3,
callbacks=[dl.MultiLabelAccuracyCallback(threshold=0.5)]
)
CV - MNIST classification
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.data.cv import ToTensor
from catalyst.contrib.datasets import MNIST
model = torch.nn.Linear(28 * 28, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loaders = {
"train": DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()), batch_size=32),
"valid": DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=ToTensor()), batch_size=32),
}
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
x, y = batch
y_hat = self.model(x.view(x.size(0), -1))
loss = F.cross_entropy(y_hat, y)
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.batch_metrics = {
"loss": loss,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.is_train_loader:
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
verbose=True,
)
CV - classification with AutoEncoder
import os
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.data.cv import ToTensor
from catalyst.contrib.datasets import MNIST
class ClassifyAE(nn.Module):
def __init__(self, in_features, hid_features, out_features):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(in_features, hid_features), nn.Tanh())
self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())
self.clf = nn.Linear(hid_features, out_features)
def forward(self, x):
z = self.encoder(x)
y_hat = self.clf(z)
x_ = self.decoder(z)
return y_hat, x_
model = ClassifyAE(28 * 28, 128, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loaders = {
"train": DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()), batch_size=32),
"valid": DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=ToTensor()), batch_size=32),
}
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
x, y = batch
x = x.view(x.size(0), -1)
y_hat, x_ = self.model(x)
loss_clf = F.cross_entropy(y_hat, y)
loss_ae = F.mse_loss(x_, x)
loss = loss_clf + loss_ae
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.batch_metrics = {
"loss_clf": loss_clf,
"loss_ae": loss_ae,
"loss": loss,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.is_train_loader:
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
verbose=True,
)
CV - classification with Variational AutoEncoder
import os
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.data.cv import ToTensor
from catalyst.contrib.datasets import MNIST
LOG_SCALE_MAX = 2
LOG_SCALE_MIN = -10
def normal_sample(loc, log_scale):
scale = torch.exp(0.5 * log_scale)
return loc + scale * torch.randn_like(scale)
class ClassifyVAE(torch.nn.Module):
def __init__(self, in_features, hid_features, out_features):
super().__init__()
self.encoder = nn.Linear(in_features, hid_features * 2)
self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())
self.clf = nn.Linear(hid_features, out_features)
def forward(self, x, deterministic=False):
z = self.encoder(x)
bs, z_dim = z.shape
loc, log_scale = z[:, :z_dim // 2], z[:, z_dim // 2:]
log_scale = torch.clamp(log_scale, LOG_SCALE_MIN, LOG_SCALE_MAX)
z_ = loc if deterministic else normal_sample(loc, log_scale)
z_ = z_.view(bs, -1)
x_ = self.decoder(z_)
y_hat = self.clf(z_)
return y_hat, x_, loc, log_scale
model = ClassifyVAE(28 * 28, 64, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loaders = {
"train": DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()), batch_size=32),
"valid": DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=ToTensor()), batch_size=32),
}
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
x, y = batch
x = x.view(x.size(0), -1)
y_hat, x_, loc, log_scale = self.model(x, deterministic=not self.is_train_loader)
loss_clf = F.cross_entropy(y_hat, y)
loss_ae = F.mse_loss(x_, x)
loss_kld = (-0.5 * torch.sum(1 + log_scale - loc.pow(2) - log_scale.exp(), dim=1)).mean()
loss = loss_clf + loss_ae + loss_kld
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.batch_metrics = {
"loss_clf": loss_clf,
"loss_ae": loss_ae,
"loss_kld": loss_kld,
"loss": loss,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.is_train_loader:
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
verbose=True,
)
CV - segmentation with classification auxiliary task
import os
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.data.cv import ToTensor
from catalyst.contrib.datasets import MNIST
class ClassifyUnet(nn.Module):
def __init__(self, in_channels, in_hw, out_features):
super().__init__()
self.encoder = nn.Sequential(nn.Conv2d(in_channels, in_channels, 3, 1, 1), nn.Tanh())
self.decoder = nn.Conv2d(in_channels, in_channels, 3, 1, 1)
self.clf = nn.Linear(in_channels * in_hw * in_hw, out_features)
def forward(self, x):
z = self.encoder(x)
z_ = z.view(z.size(0), -1)
y_hat = self.clf(z_)
x_ = self.decoder(z)
return y_hat, x_
model = ClassifyUnet(1, 28, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loaders = {
"train": DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()), batch_size=32),
"valid": DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=ToTensor()), batch_size=32),
}
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
x, y = batch
x_noise = (x + torch.rand_like(x)).clamp_(0, 1)
y_hat, x_ = self.model(x_noise)
loss_clf = F.cross_entropy(y_hat, y)
iou = metrics.iou(x_, x).mean()
loss_iou = 1 - iou
loss = loss_clf + loss_iou
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.batch_metrics = {
"loss_clf": loss_clf,
"loss_iou": loss_iou,
"loss": loss,
"iou": iou,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.is_train_loader:
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
verbose=True,
)
CV - MNIST with Metric Learning
from torch.optim import Adam
from torch.utils.data import DataLoader
from catalyst import data, dl, utils
from catalyst.contrib import datasets, models, nn
import catalyst.contrib.data.cv.transforms.torch as t
# 1. train and valid datasets
dataset_root = "."
transforms = t.Compose([t.ToTensor(), t.Normalize((0.1307,), (0.3081,))])
dataset_train = datasets.MnistMLDataset(root=dataset_root, download=True, transform=transforms)
sampler = data.BalanceBatchSampler(labels=dataset_train.get_labels(), p=5, k=10)
train_loader = DataLoader(dataset=dataset_train, sampler=sampler, batch_size=sampler.batch_size)
dataset_val = datasets.MnistQGDataset(root=dataset_root, transform=transforms, gallery_fraq=0.2)
val_loader = DataLoader(dataset=dataset_val, batch_size=1024)
# 2. model and optimizer
model = models.SimpleConv(features_dim=16)
optimizer = Adam(model.parameters(), lr=0.001)
# 3. criterion with triplets sampling
sampler_inbatch = data.HardTripletsSampler(norm_required=False)
criterion = nn.TripletMarginLossWithSampler(margin=0.5, sampler_inbatch=sampler_inbatch)
# 4. training with catalyst Runner
callbacks = [
dl.ControlFlowCallback(dl.CriterionCallback(), loaders="train"),
dl.ControlFlowCallback(dl.CMCScoreCallback(topk_args=[1]), loaders="valid"),
dl.PeriodicLoaderCallback(valid=100),
]
runner = dl.SupervisedRunner(device=utils.get_device())
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
callbacks=callbacks,
loaders={"train": train_loader, "valid": val_loader},
minimize_metric=False,
verbose=True,
valid_loader="valid",
num_epochs=200,
main_metric="cmc01",
)
GAN - MNIST, flatten version
import os
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.data.cv import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn.modules import Flatten, GlobalMaxPool2d, Lambda
latent_dim = 128
generator = nn.Sequential(
# We want to generate 128 coefficients to reshape into a 7x7x128 map
nn.Linear(128, 128 * 7 * 7),
nn.LeakyReLU(0.2, inplace=True),
Lambda(lambda x: x.view(x.size(0), 128, 7, 7)),
nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),
nn.LeakyReLU(0.2, inplace=True),
nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(128, 1, (7, 7), padding=3),
nn.Sigmoid(),
)
discriminator = nn.Sequential(
nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1),
nn.LeakyReLU(0.2, inplace=True),
GlobalMaxPool2d(),
Flatten(),
nn.Linear(128, 1)
)
model = {"generator": generator, "discriminator": discriminator}
optimizer = {
"generator": torch.optim.Adam(generator.parameters(), lr=0.0003, betas=(0.5, 0.999)),
"discriminator": torch.optim.Adam(discriminator.parameters(), lr=0.0003, betas=(0.5, 0.999)),
}
loaders = {
"train": DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()), batch_size=32),
}
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
real_images, _ = batch
batch_metrics = {}
# Sample random points in the latent space
batch_size = real_images.shape[0]
random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.device)
# Decode them to fake images
generated_images = self.model["generator"](random_latent_vectors).detach()
# Combine them with real images
combined_images = torch.cat([generated_images, real_images])
# Assemble labels discriminating real from fake images
labels = torch.cat([
torch.ones((batch_size, 1)), torch.zeros((batch_size, 1))
]).to(self.device)
# Add random noise to the labels - important trick!
labels += 0.05 * torch.rand(labels.shape).to(self.device)
# Train the discriminator
predictions = self.model["discriminator"](combined_images)
batch_metrics["loss_discriminator"] = \
F.binary_cross_entropy_with_logits(predictions, labels)
# Sample random points in the latent space
random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.device)
# Assemble labels that say "all real images"
misleading_labels = torch.zeros((batch_size, 1)).to(self.device)
# Train the generator
generated_images = self.model["generator"](random_latent_vectors)
predictions = self.model["discriminator"](generated_images)
batch_metrics["loss_generator"] = \
F.binary_cross_entropy_with_logits(predictions, misleading_labels)
self.batch_metrics.update(**batch_metrics)
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
loaders=loaders,
callbacks=[
dl.OptimizerCallback(
optimizer_key="generator",
metric_key="loss_generator"
),
dl.OptimizerCallback(
optimizer_key="discriminator",
metric_key="loss_discriminator"
),
],
main_metric="loss_generator",
num_epochs=20,
verbose=True,
logdir="./logs_gan",
)
ML - multiclass classification (fp16 training version)
# pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" git+https://github.com/NVIDIA/apex
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl
# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples, ) * num_classes).to(torch.int64)
# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}
# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])
# model training
runner = dl.SupervisedRunner()
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
loaders=loaders,
logdir="./logdir",
num_epochs=3,
callbacks=[dl.AccuracyCallback(num_classes=num_classes)],
fp16=True,
)
ML - multiclass classification (advanced fp16 training version)
# pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" git+https://github.com/NVIDIA/apex
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl
# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples, ) * num_classes).to(torch.int64)
# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}
# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])
# model training
runner = dl.SupervisedRunner()
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
loaders=loaders,
logdir="./logdir",
num_epochs=3,
callbacks=[dl.AccuracyCallback(num_classes=num_classes)],
fp16=dict(apex=True, opt_level="O1"),
)
ML - Linear Regression (distributed training version)
#!/usr/bin/env python
import torch
from torch.utils.data import TensorDataset
from catalyst.dl import SupervisedRunner, utils
def datasets_fn(num_features: int):
X = torch.rand(int(1e4), num_features)
y = torch.rand(X.shape[0])
dataset = TensorDataset(X, y)
return {"train": dataset, "valid": dataset}
def train():
num_features = int(1e1)
# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])
runner = SupervisedRunner()
runner.train(
model=model,
datasets={
"batch_size": 32,
"num_workers": 1,
"get_datasets_fn": datasets_fn,
"num_features": num_features, # will be passed to datasets_fn
},
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
logdir="./logs/example_distributed_ml",
num_epochs=8,
verbose=True,
distributed=False,
)
utils.distributed_cmd_run(train)
CV - classification with AutoEncoder (distributed training version)
#!/usr/bin/env python
import os
import torch
from torch import nn
from torch.nn import functional as F
from catalyst import dl, metrics, utils
from catalyst.contrib.data.cv import ToTensor
from catalyst.contrib.datasets import MNIST
class ClassifyAE(nn.Module):
def __init__(self, in_features, hid_features, out_features):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(in_features, hid_features), nn.Tanh())
self.decoder = nn.Linear(hid_features, in_features)
self.clf = nn.Linear(hid_features, out_features)
def forward(self, x):
z = self.encoder(x)
y_hat = self.clf(z)
x_ = self.decoder(z)
return y_hat, x_
class CustomRunner(dl.Runner):
def _handle_batch(self, batch):
x, y = batch
x = x.view(x.size(0), -1)
y_hat, x_ = self.model(x)
loss_clf = F.cross_entropy(y_hat, y)
loss_ae = F.mse_loss(x_, x)
loss = loss_clf + loss_ae
accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))
self.batch_metrics = {
"loss_clf": loss_clf,
"loss_ae": loss_ae,
"loss": loss,
"accuracy01": accuracy01,
"accuracy03": accuracy03,
"accuracy05": accuracy05,
}
if self.is_train_loader:
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
def datasets_fn():
dataset = MNIST(os.getcwd(), train=False, download=True, transform=ToTensor())
return {"train": dataset, "valid": dataset}
def train():
model = ClassifyAE(28 * 28, 128, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
runner = CustomRunner()
runner.train(
model=model,
optimizer=optimizer,
datasets={
"batch_size": 32,
"num_workers": 1,
"get_datasets_fn": datasets_fn,
},
logdir="./logs/distributed_ae",
num_epochs=8,
verbose=True,
)
utils.distributed_cmd_run(train)
ML - multiclass classification (TPU version)
import torch
from torch.utils.data import DataLoader, TensorDataset
from catalyst import dl, utils
# sample data
num_samples, num_features, num_classes = int(1e4), int(1e1), 4
X = torch.rand(num_samples, num_features)
y = (torch.rand(num_samples, ) * num_classes).to(torch.int64)
# pytorch loaders
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=32, num_workers=1)
loaders = {"train": loader, "valid": loader}
# device (TPU > GPU > CPU)
device = utils.get_device() # <--------- TPU device
# model, criterion, optimizer, scheduler
model = torch.nn.Linear(num_features, num_classes).to(device)
criterion = torch.nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(model.parameters())
# model training
runner = dl.SupervisedRunner(device=device)
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
loaders=loaders,
logdir="./logdir",
num_epochs=3,
callbacks=[dl.AccuracyCallback(num_classes=num_classes)]
)
AutoML - hyperparameters optimization with Optuna
import os
import optuna
import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.data.cv import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn import Flatten
def objective(trial):
lr = trial.suggest_loguniform("lr", 1e-3, 1e-1)
num_hidden = int(trial.suggest_loguniform("num_hidden", 32, 128))
loaders = {
"train": DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()), batch_size=32),
"valid": DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=ToTensor()), batch_size=32),
}
model = nn.Sequential(
Flatten(), nn.Linear(784, num_hidden), nn.ReLU(), nn.Linear(num_hidden, 10)
)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()
runner = dl.SupervisedRunner()
runner.train(
model=model,
loaders=loaders,
criterion=criterion,
optimizer=optimizer,
callbacks=[
dl.OptunaCallback(trial),
dl.AccuracyCallback(num_classes=10),
],
num_epochs=10,
main_metric="accuracy01",
minimize_metric=False,
)
return runner.best_valid_metrics[runner.main_metric]
study = optuna.create_study(
direction="maximize",
pruner=optuna.pruners.MedianPruner(
n_startup_trials=1, n_warmup_steps=0, interval_steps=1
),
)
study.optimize(objective, n_trials=10, timeout=300)
print(study.best_value, study.best_params)
Features
- Universal train/inference loop.
- Configuration files for model/data hyperparameters.
- Reproducibility – all source code and environment variables will be saved (see the seeding sketch after this list).
- Callbacks – reusable train/inference pipeline parts with easy customization.
- Training stages support.
- Deep Learning best practices - SWA, AdamW, Ranger optimizer, OneCycle, and more.
- Development best practices - fp16 support, distributed training, Slurm support.
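A minimal seeding sketch for the Reproducibility feature above (the utility names follow the 20.x API and may differ between versions):
from catalyst import utils
# fix the python, numpy and torch random seeds
utils.set_global_seed(42)
# make cuDNN deterministic (slower, but reproducible)
utils.prepare_cudnn(deterministic=True, benchmark=False)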
Structure
- callbacks - a variety of callbacks for your train-loop customization.
- contrib - additional modules contributed by Catalyst users.
- core - framework core with the main abstractions - Experiment, Runner and Callback (a minimal custom callback is sketched after this list).
- data - useful tools and scripts for data processing.
- dl - entrypoint for your deep learning experiments.
- experiments - a number of useful experiment extensions for the Notebook and Config API.
- metrics – classic ML and CV/NLP/RecSys metrics.
- registry - Catalyst global registry for Config API.
- runners - runners extensions for different deep learning tasks.
- tools - extra tools for Deep Learning research, class-based helpers.
- utils - typical utils for Deep Learning research, function-based helpers.
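As a minimal illustration of the core abstractions, here is a sketch of a custom Callback that counts processed batches (hook and attribute names follow the 20.x API and may differ between versions):
from catalyst.core import Callback, CallbackOrder

class BatchCounterCallback(Callback):
    """Illustrative callback that counts processed batches per loader."""
    def __init__(self):
        super().__init__(order=CallbackOrder.Metric)
        self.counter = 0
    def on_loader_start(self, runner):
        self.counter = 0
    def on_batch_end(self, runner):
        self.counter += 1
    def on_loader_end(self, runner):
        # loader_name is the 20.x attribute; newer versions use loader_key
        print(f"{runner.loader_name}: {self.counter} batches")
Such a callback can be passed to runner.train through the callbacks argument, exactly like the built-in ones.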
Tests
All Catalyst code, features and pipelines are fully tested with our own catalyst-codestyle.
In fact, we train a number of different models for a variety of tasks - image classification, image segmentation, text classification, GAN training and much more. During the tests, we compare their convergence metrics in order to verify the correctness of the training procedure and its reproducibility.
As a result, Catalyst provides fully tested and reproducible best practices for your deep learning research.
Catalyst
Tutorials
- Customizing what happens in train()
- Demo with minimal examples for ML, CV, NLP, GANs and RecSys
- Detailed classification tutorial
- Advanced segmentation tutorial
- Metric Learning tutorial
- Catalyst with Google TPU
Blogposts
- Catalyst 101 — Accelerated PyTorch
- BERT Distillation with Catalyst
- Metric Learning with Catalyst
- Pruning with Catalyst
- Distributed training best practices
- Addressing the Cocktail Party Problem using PyTorch
- Beyond fashion: Deep Learning with Catalyst (Config API)
- Tutorial from Notebook API to Config API (RU)
Docs
- master
- 20.12
- 20.11
- 20.10
- 20.09
- 20.08.2
- 20.07 - dev blog: 20.07 release
- 20.06
- 20.05, 20.05.1
- 20.04, 20.04.1, 20.04.2
Projects
Examples, notebooks and starter kits
- CamVid Segmentation Example - Example of semantic segmentation for CamVid dataset
- Notebook API tutorial for segmentation in Understanding Clouds from Satellite Images Competition
- Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around – starter kit
- Catalyst.RL - NeurIPS 2019: Animal-AI Olympics - starter kit
- Inria Segmentation Example - An example of training a segmentation model for the Inria Satellite Segmentation Challenge
- iglovikov_segmentation - Semantic segmentation pipeline using Catalyst
Competitions
- Kaggle Quick, Draw! Doodle Recognition Challenge - 11th place solution
- Catalyst.RL - NeurIPS 2018: AI for Prosthetics Challenge – 3rd place solution
- Kaggle Google Landmark 2019 - 30th place solution
- iMet Collection 2019 - FGVC6 - 24th place solution
- ID R&D Anti-spoofing Challenge - 14th place solution
- NeurIPS 2019: Recursion Cellular Image Classification - 4th place solution
- MICCAI 2019: Automatic Structure Segmentation for Radiotherapy Planning Challenge 2019 - 3rd place solution for Task 3: Organ-at-risk segmentation from chest CT scans, and 4th place solution for Task 4: Gross Target Volume segmentation of lung cancer
- Kaggle Severstal Steel Defect Detection - 5th place solution
- RSNA Intracranial Hemorrhage Detection - 5th place solution
- APTOS 2019 Blindness Detection – 7th place solution
- Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around – 2nd place solution
- xView2 Damage Assessment Challenge - 3rd place solution
Paper implementations
- Hierarchical attention for sentiment classification with visualization
- Pediatric bone age assessment
- Implementation of paper "Tell Me Where to Look: Guided Attention Inference Network"
- Implementation of paper "Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks"
- Implementation of paper "Utterance-level Aggregation For Speaker Recognition In The Wild"
- Implementation of paper "Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation"
- Implementation of paper "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks"
Tools and pipelines
- Catalyst.RL – A Distributed Framework for Reproducible RL Research by Scitator
- Catalyst.Classification - Comprehensive classification pipeline with Pseudo-Labeling by Bagxi and Pdanilov
- Catalyst.Segmentation - Segmentation pipelines - binary, semantic and instance, by Bagxi
- Catalyst.Detection - Anchor-free detection pipeline by Avi2011class and TezRomacH
- Catalyst.GAN - Reproducible GANs pipelines by Asmekal
- Catalyst.Neuro - Brain image analysis project, in collaboration with TReNDS Center
- MLComp – distributed DAG framework for machine learning with UI by Lightforever
- Pytorch toolbelt - PyTorch extensions for fast R&D prototyping and Kaggle farming by BloodAxe
- Helper functions - An unstructured set of helper functions by Ternaus
- BERT Distillation with Catalyst by elephantmipt
Talks
- Catalyst-team YouTube channel
- Catalyst.RL – reproducible RL research framework at Stachka
- Catalyst.DL – reproducible DL research framework (rus) and slides (eng) at RIF
- Catalyst.DL – reproducible DL research framework (rus) and slides (eng) at AI-Journey
- Catalyst.DL – fast & reproducible DL at Datastart
- Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around and slides (eng) at RL reading group Meetup
- Catalyst – accelerated DL & RL (rus) and slides (eng) at Facebook Developer Circle: Moscow | ML & AI Meetup
- Catalyst.RL - Learn to Move - Walk Around 2nd place solution at NeurIPS competition track
- Open Source ML 2019 edition at Datafest.elka
Community
Contribution guide
We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.
- Please see the contribution guide for more information.
- By participating in this project, you agree to abide by its Code of Conduct.
User feedback
We have created [email protected] for "user feedback".
- If you like the project and want to say thanks, this is the right place.
- If you would like to start a collaboration between your team and the Catalyst team to do better Deep Learning R&D, you are always welcome.
- If you just don't like GitHub issues and this way suits you better, feel free to email us.
- Finally, if you do not like something, please share it with us and we can see how to improve it.
We appreciate any type of feedback. Thank you!
Acknowledgments
Since the beginning of Catalyst's development, many people have influenced it in many different ways.
Catalyst.Team
- Eugene Kachan (bagxi) - Config API improvements and CV pipelines
- Dmytro Doroshenko (ditwoo) - best ever test cases
- Artem Zolkin (arquestro) - documentation grandmaster
- David Kuryakin (dkuryakin) - Reaction design
Catalyst - Metric Learning team
Catalyst.Contributors
- Evgeny Semyonov (lightforever) - MLComp creator
- Andrey Zharkov (asmekal) - Catalyst.GAN initiative
- Aleksey Grinchuk (alexgrinch) and Valentin Khrulkov (khrulkovv) - many RL collaborations
- Alex Gaziev (gazay) - a bunch of Config API improvements and our Config API wizard support
- Eugene Khvedchenya (bloodaxe) - Pytorch-toolbelt library maintainer
- Yury Kashnitsky (yorko) - Catalyst.NLP initiative
Catalyst.Friends
- Vladimir Iglovikov (ternaus) - Kaggle grandmaster advice
- Nguyen Xuan Bac (ngxbac) - Kaggle competitions support
- Ivan Stepanenko - awesome Catalyst.Ecosystem design
Trusted by
- Awecom
- Researchers@Center for Translational Research in Neuroimaging and Data Science (TReNDS)
- Deep Learning School
- Researchers@Emory University
- Evil Martians
- Researchers@Georgia Institute of Technology
- Researchers@Georgia State University
- Helios
- HPCD Lab
- iFarm
- Kinoplan
- Researchers@Moscow Institute of Physics and Technology
- Neuromation
- Poteha Labs
- Provectus
- Researchers@Skolkovo Institute of Science and Technology
- SoftConstruct
- Researchers@Tinkoff
- Researchers@Yandex.Research
Supported by
Citation
Please use this bibtex if you want to cite this repository in your publications:
@misc{catalyst,
author = {Kolesnikov, Sergey},
title = {Accelerated deep learning R&D},
year = {2018},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/catalyst-team/catalyst}},
}