Benchmark VAE - Library for Variational Autoencoder benchmarking

Overview

This library implements some of the most common (Variational) Autoencoder models. In particular, it provides the possibility to perform benchmark experiments and comparisons by training the models with the same autoencoding neural network architecture. The make your own autoencoder feature allows you to train any of these models with your own data and your own Encoder and Decoder neural networks.

Installation

To install the latest version of this library, run the following pip command:

$ pip install git+https://github.com/clementchadebec/benchmark_VAE.git

or alternatively you can clone the github repo to access the tests, tutorials and scripts.

$ git clone https://github.com/clementchadebec/benchmark_VAE.git

and install the library

$ cd benchmark_VAE
$ pip install -e .
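
A quick way to check that the install succeeded is to try importing the package (a silent exit means the import worked):

$ python -c "import pythae"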

Available Models

Below is the list of the models currently implemented in the library.

| Models | Training example | Paper | Official Implementation |
|---|---|---|---|
| Autoencoder (AE) | Open In Colab | | |
| Variational Autoencoder (VAE) | Open In Colab | link | |
| Beta Variational Autoencoder (Beta_VAE) | Open In Colab | link | |
| Importance Weighted Autoencoder (IWAE) | Open In Colab | link | link |
| Wasserstein Autoencoder (WAE) | Open In Colab | link | link |
| Info Variational Autoencoder (INFOVAE_MMD) | Open In Colab | link | |
| VAMP Autoencoder (VAMP) | Open In Colab | link | link |
| Hamiltonian VAE (HVAE) | Open In Colab | link | link |
| Regularized AE with L2 decoder param (RAE_L2) | Open In Colab | link | link |
| Regularized AE with gradient penalty (RAE_GP) | Open In Colab | link | link |
| Riemannian Hamiltonian VAE (RHVAE) | Open In Colab | link | |

See the Results section below for all aforementioned models.

Available Samplers

Below is the list of the samplers currently implemented in the library.

| Samplers | Models | Paper | Official Implementation |
|---|---|---|---|
| Normal prior (NormalSampler) | all models | link | |
| Gaussian mixture (GaussianMixtureSampler) | all models | link | link |
| VAMP prior sampler (VAMPSampler) | VAMP | link | link |
| Manifold sampler (RHVAESampler) | RHVAE | link | |
| Two stage VAE sampler (TwoStageVAESampler) | all VAE based models | link | link |

Launching a model training

To launch a model training, you only need to call a TrainingPipeline instance.

>>> from pythae.pipelines import TrainingPipeline
>>> from pythae.models import VAE, VAEConfig
>>> from pythae.trainers import BaseTrainingConfig

>>> # Set up the training configuration
>>> my_training_config = BaseTrainingConfig(
...	output_dir='my_model',
...	num_epochs=50,
...	learning_rate=1e-3,
...	batch_size=200,
...	steps_saving=None
... )
>>> # Set up the model configuration 
>>> my_vae_config = VAEConfig(
...	input_dim=(1, 28, 28),
...	latent_dim=10
... )
>>> # Build the model
>>> my_vae_model = VAE(
...	model_config=my_vae_config
... )
>>> # Build the Pipeline
>>> pipeline = TrainingPipeline(
... 	training_config=my_training_config,
... 	model=my_vae_model
...	)
>>> # Launch the Pipeline
>>> pipeline(
...	train_data=your_train_data, # must be torch.Tensor or np.array 
...	eval_data=your_eval_data # must be torch.Tensor or np.array
...	)
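
As a concrete example, your_train_data and your_eval_data could be built from MNIST to match the input_dim=(1, 28, 28) used above. A minimal sketch, assuming torchvision is installed (the 50000/10000 split is arbitrary):

>>> from torchvision import datasets
>>> mnist = datasets.MNIST(root='./data', train=True, download=True)
>>> data = mnist.data.numpy().reshape(-1, 1, 28, 28) / 255. # scale pixels to [0, 1]
>>> your_train_data = data[:50000]
>>> your_eval_data = data[50000:]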

At the end of training, the best model weights, model configuration and training configuration are stored in a final_model folder available in my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss (with my_model being the output_dir argument of the BaseTrainingConfig). If you further set the steps_saving argument to an integer value, folders named checkpoint_epoch_k, containing the model weights, optimizer, scheduler and both configurations at epoch k, will also appear in my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss.
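
A trained model can then be reloaded from this folder. A minimal sketch (the timestamped folder name is a placeholder for the one created by your run):

>>> from pythae.models import VAE
>>> my_trained_vae = VAE.load_from_folder(
...	'my_model/VAE_training_YYYY-MM-DD_hh-mm-ss/final_model'
... )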

Launching a training on benchmark datasets

We also provide a training script example here that can be used to train the models on benchmark datasets (mnist, cifar10, celeba, ...). The script can be launched with the following command line:

python training.py --dataset mnist --model_name ae --model_config 'configs/ae_config.json' --training_config 'configs/base_training_config.json'

See README.md for further details on this script.

Launching data generation

To launch the data generation process from a trained model, you only need to build your sampler. For instance, to generate new data with a normal-prior sampler, run the following.

>>> from pythae.models import VAE
>>> from pythae.samplers import NormalSampler
>>> # Retrieve the trained model
>>> my_trained_vae = VAE.load_from_folder(
...	'path/to/your/trained/model'
...	)
>>> # Define your sampler
>>> my_sampler = NormalSampler(
...	model=my_trained_vae
... )
>>> # Generate samples
>>> gen_data = my_sampler.sample(
...	num_samples=50,
...	batch_size=10,
...	output_dir=None,
...	return_gen=True
... )

If you set output_dir to a specific path, the generated images will be saved as .png files named 00000000.png, 00000001.png, ... The samplers can be used with any model as long as the sampler is suited to it. For instance, a GaussianMixtureSampler instance can be used to generate from any model, but a VAMPSampler will only be usable with a VAMP model. Check here to see which ones apply to your model.
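
For instance, to write the generated images to disk instead of returning them, a sketch (my_generated_data is just an example path):

>>> my_sampler.sample(
...	num_samples=50,
...	batch_size=10,
...	output_dir='my_generated_data', # 00000000.png, 00000001.png, ... are written here
...	return_gen=False
... )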

Define your own Autoencoder architecture

Pythae provides you the possibility to define your own neural networks within the VAE models. For instance, say you want to train a Wasserstein AE with a specific encoder and decoder; you can do the following:

>>> import torch
>>> from pythae.models.nn import BaseEncoder, BaseDecoder
>>> from pythae.models.base.base_utils import ModelOutput
>>> class My_Encoder(BaseEncoder):
...	def __init__(self, args=None): # args is a ModelConfig instance
...		BaseEncoder.__init__(self)
...		self.layers = my_nn_layers()
...
...	def forward(self, x: torch.Tensor) -> ModelOutput:
...		out = self.layers(x)
...		output = ModelOutput(
...			embedding=out # Set the output from the encoder in a ModelOutput instance
...		)
...		return output
...
>>> class My_Decoder(BaseDecoder):
...	def __init__(self, args=None): # args is a ModelConfig instance
...		BaseDecoder.__init__(self)
...		self.layers = my_nn_layers()
...
...	def forward(self, x: torch.Tensor) -> ModelOutput:
...		out = self.layers(x)
...		output = ModelOutput(
...			reconstruction=out # Set the output from the decoder in a ModelOutput instance
...		)
...		return output
...
>>> my_encoder = My_Encoder()
>>> my_decoder = My_Decoder()

And now build the model

>>> from pythae.models import WAE_MMD, WAE_MMD_Config
>>> # Set up the model configuration 
>>> my_wae_config = WAE_MMD_Config(
...	input_dim=(1, 28, 28),
...	latent_dim=10
... )
...
>>> # Build the model
>>> my_wae_model = WAE_MMD(
...	model_config=my_wae_config,
...	encoder=my_encoder, # pass your encoder as argument when building the model
...	decoder=my_decoder # pass your decoder as argument when building the model
... )

important note 1: For all AE-based models (AE, WAE, RAE_L2, RAE_GP), both the encoder and decoder must return a ModelOutput instance. For the encoder, the ModelOutput instance must contain the embeddings under the key embedding. For the decoder, the ModelOutput instance must contain the reconstructions under the key reconstruction.

important note 2: For all VAE-based models (VAE, Beta_VAE, IWAE, HVAE, VAMP, RHVAE), both the encoder and decoder must return a ModelOutput instance. For the encoder, the ModelOutput instance must contain the embeddings and log-covariance matrices (of shape batch_size x latent_space_dim) under the keys embedding and log_covariance respectively. For the decoder, the ModelOutput instance must contain the reconstructions under the key reconstruction.
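
To make important note 2 concrete, below is a minimal sketch of a VAE-style encoder for (1, 28, 28) inputs. The layer sizes are illustrative, and args is assumed to carry latent_dim as in the model configs above:

>>> import torch
>>> import torch.nn as nn
>>> from pythae.models.nn import BaseEncoder
>>> from pythae.models.base.base_utils import ModelOutput
>>> class My_VAE_Encoder(BaseEncoder):
...	def __init__(self, args): # args is a ModelConfig instance with a latent_dim attribute
...		BaseEncoder.__init__(self)
...		self.fc = nn.Linear(28 * 28, 256)
...		self.embedding_layer = nn.Linear(256, args.latent_dim)
...		self.log_var_layer = nn.Linear(256, args.latent_dim)
...
...	def forward(self, x: torch.Tensor) -> ModelOutput:
...		h = torch.relu(self.fc(x.reshape(x.shape[0], -1)))
...		return ModelOutput(
...			embedding=self.embedding_layer(h), # mean of the approximate posterior
...			log_covariance=self.log_var_layer(h) # shape: batch_size x latent_dim
...		)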

Using benchmark neural nets

You can also find predefined neural network architectures for the most common data sets (e.g., MNIST, CIFAR, CELEBA, ...) that can be loaded as follows:

>>> from pythae.models.nn.benchmark.mnist import (
...	Encoder_AE_MNIST, # For AE based models (only returns embeddings)
... 	Encoder_VAE_MNIST, # For VAE based models (returns embeddings and log_covariances)
... 	Decoder_AE_MNIST
... )

Replace mnist with cifar or celeba to access the other neural nets.
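
For instance, a sketch building a VAE with these predefined MNIST networks, following the same pattern as the custom-network example above:

>>> from pythae.models import VAE, VAEConfig
>>> from pythae.models.nn.benchmark.mnist import Encoder_VAE_MNIST, Decoder_AE_MNIST
>>> my_vae_config = VAEConfig(
...	input_dim=(1, 28, 28),
...	latent_dim=10
... )
>>> my_vae_model = VAE(
...	model_config=my_vae_config,
...	encoder=Encoder_VAE_MNIST(my_vae_config),
...	decoder=Decoder_AE_MNIST(my_vae_config)
... )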

Getting your hands on the code

To help you understand the way pythae works and how you can train your models with this library, we also provide tutorials:

  • making_your_own_autoencoder.ipynb shows you how to pass your own networks to the models implemented in pythae. Open In Colab

  • models_training folder provides notebooks showing how to train each implemented model and how to sample from it using pythae.samplers.

  • scripts folder provides in particular an example of a training script to train the models on benchmark data sets (mnist, cifar10, celeba ...)

Dealing with issues

If you are experiencing any issues while running the code, or want to request new features/models to be implemented, please open an issue on GitHub.

Contributing 🚀

You want to contribute to this library by adding a model or a sampler, or simply by fixing a bug? That's awesome! Thank you! Please see CONTRIBUTING.md to follow the main contributing guidelines.

Results

Generated samples on MNIST and CELEBA are shown in the repository for each of the following model + sampler combinations:

  • AE + GaussianMixtureSampler
  • VAE + NormalSampler
  • VAE + GaussianMixtureSampler
  • Beta-VAE + NormalSampler
  • IWAE + NormalSampler
  • WAE + NormalSampler
  • INFO VAE + NormalSampler
  • VAMP + VAMPSampler
  • HVAE + NormalSampler
  • RAE_L2 + GaussianMixtureSampler
  • RAE_GP + GaussianMixtureSampler
  • Riemannian Hamiltonian VAE (RHVAE) + RHVAESampler
Comments
  • Doubt regarding the Hamiltonian calculations for RHVAE model.

    In the paper, Hamiltonian is defined as follows:

    H(z, v) = U(z) + K(v) = -0.5*log(det(G^-1(z))) + 0.5*v^T*v
    

    But in the code I see extra terms, like the addition of a joint probability term and a G inverse multiplied into the kinetic energy term. Are these two equations equivalent?

    question 
    opened by shikhar2333 12
  • Integration with the Hugging Face Hub

    Is your feature request related to a problem? Please describe. As I train models, I would like to easily be able to share them with other people and document them well. I would also like to be able to access other trained models from the community.

    Describe the solution you'd like I would like to have an integration with the Hugging Face Hub (disclaimer: I'm a member of the OS team there). I would like to be able to do model.push_to_hub("osanseviero/my_vae") and get a model directly in the Hub. Some of the benefits of sharing models through the Hub:

    • versioning, commit history and diffs
    • repos provide useful metadata about their tasks, languages, metrics, etc. that makes them discoverable
    • multiple features from TensorBoard visualizations, leaderboards, and more
    feature request 
    opened by osanseviero 11
  • Can we use wandb sweep with the wandb callbacks provided?

    Is there a way to integrate wandb sweep with the available wandb callback available? If not, could you tell me how to exactly catch the loss values of a model to integrate onto the wandb sweep?

    question 
    opened by shrave 4
  • questions on customized autoencoder

    Hi @clementchadebec,

    Thanks for pointing me to the notebook yesterday on customized autoencoder. Just have several questions:

    1. Why do the output dimensions in the encoder get higher and higher with layer depth, while the output dimensions in the decoder get lower and lower? I am pretty new to autoencoders. Is this architecture specific to variational autoencoders?

    2. What is ModelOutput function used for? I read the help page saying "Base ModelOutput class fixing the output type from the models." Do you mean to fix the output type to torch tensor type?

    3. I am not sure if the method is suitable for 1-dimensional data. Specifically, for my customized autoencoder model, the dimension is going to be very, very high after the encoder. My original data dimension is 6241 * 1, but an MLP works fine on my 1D data.

    Thanks,

    Shan

    question 
    opened by shannjiang 4
  • Can't install due to pickle5 dependency

    Describe the bug

    pickle5 backports things from the future. But it does not exist in newer python versions:

    package pickle5-0.0.10-py37h8f50634_0 requires python >=3.7,<3.8.0a0, but none of the providers can be installed
    

    To Reproduce: use python 3.8.13 and try to install the library with micromamba.

    Expected behavior: the library installs and imports correctly.

    Desktop (please complete the following information):

    • OS: Mac OS Big Sur - Apple M1 chip
    opened by VolodyaCO 3
  • cifar10 data visualization

    Hi @clementchadebec, The following code loads the CIFAR10 data as NumPy train & eval:

    cifar10_trainset = datasets.CIFAR10(root='../../data', train=True, download=True, transform=None)
    # array
    train_dataset = cifar10_trainset.data[:-10000].reshape(-1, 3, 32, 32) #(40k,3,32,32)
    eval_dataset = cifar10_trainset.data[-10000:].reshape(-1, 3, 32, 32) # (10k,3,32,32)
    

    When I try to visualize, why is it not an image from the CIFAR10 dataset? What am I doing wrong here?

    npimg = train_dataset[0] # first image from training data
    img_ar = np.transpose(npimg, (1,2,0))
    plt.imshow(img_ar)
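
    A likely cause, for readers hitting the same thing: torchvision stores CIFAR-10 as channels-last arrays of shape (N, 32, 32, 3), so reshape(-1, 3, 32, 32) scrambles the pixel layout. Transposing the axes instead keeps each image intact:

    # move channels first without scrambling pixels: (N, 32, 32, 3) -> (N, 3, 32, 32)
    train_dataset = cifar10_trainset.data[:-10000].transpose(0, 3, 1, 2)
    eval_dataset = cifar10_trainset.data[-10000:].transpose(0, 3, 1, 2)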
    


    *also I can't assign the label (it gets hidden) when I create new issues.

    Thanks, Prachi

    question 
    opened by jprachir 3
  • Model Request: Poincare VAE

    It would be great to see the Poincare VAE (or a similar hyperbolic geometry VAE) implemented in pythae!

    Paper: https://arxiv.org/abs/1901.06033 Code: https://github.com/emilemathieu/pvae

    help wanted new model 
    opened by tomhosking 3
  • Is this library compatible with custom datasets?

    Hello,

    Thank you for your excellent work! As my question states, I wonder how to use this library with a custom dataset. I am new to machine learning and wanted to train a VAE on a relatively large dataset. So, I looked at the provided examples for training different models. However, it seemed to me that I had to load the whole dataset from a .npz file similar to the MNIST or the CelebA datasets. Is there a way to write my own data loader for a custom dataset and then use it with this library?

    Thank you again for your work!

    feature request 
    opened by NamelessGaki 3
  • RHVAE error: mat1 and mat2 shapes cannot be multiplied

    Hi,

    I am trying to use RHVAE to perform data augmentation but got an error: ModelError: Error when calling forward method from model. Potential issues:

    • Wrong model architecture -> check encoder, decoder and metric architecture if you provide yours
    • The data input dimension provided is wrong -> when no encoder, decoder or metric provided, a network is built automatically but requires the shape of the flatten input data. Exception raised: <class 'RuntimeError'> with message: mat1 and mat2 shapes cannot be multiplied (2x16384 and 1024x10)

    The input dimension for my data is [1,79,79] instead of [1,28,28] as in the tutorial.

    Every parameter in my model is the same as in the tutorial except the input_dim parameter:

    config = BaseTrainingConfig(
        output_dir='my_model',
        learning_rate=1e-4,
        batch_size=100,
        num_epochs=100,
    )

    model_config = RHVAEConfig(
        input_dim=(1, 79, 79),
        latent_dim=10,
        n_lf=1,
        eps_lf=0.001,
        beta_zero=0.3,
        temperature=1.5,
        regularization=0.001
    )

    model = RHVAE(
        model_config=model_config,
        encoder=Encoder_VAE_MNIST(model_config),
        decoder=Decoder_AE_MNIST(model_config)
    )

    Any idea how to modify the code to run it?

    good first issue 
    opened by shannjiang 3
  • how Sampler works?

    Hi Clément: Great work on introducing the VAE-oriented library! You have made it modular, with predefined models, pipelines, and so forth. Can you share brief details on how the sampler works under the hood for generation?

    Prachi

    question 
    opened by jprachir 2
  • UnboundLocalError: local variable 'best_model' referenced before assignment

    Hello there, first of all thank you for this repo. I'm quite new to ML and to Pytorch, and this helps me a lot! Concerning this issue: I'm running pythae on Google Colab, after installing it simply using pip install pythae. I experienced an "UnboundLocalError: local variable 'best_model' referenced before assignment" while trying to train a VAE on my custom data (torch tensors). I show you the snippet together with the resulting error.

    (screenshots of the code snippet and the resulting error)

    Where X_train is a torch.Tensor of shape (28000, 1, 131, 2), each element being a double in [0, 300]. I can't figure out whether I'm doing something wrong, so I kindly ask for your help.

    opened by Mirco-Ramo 2
  • Returning callback results when calling pipelines' train method

    Closes #62.

    This PR is a proof of concept for returning values from callbacks which might be useful for immediate manipulation after the pipeline has been run.

    opened by VolodyaCO 0
  • Allow distributed training

    As of now, the library only supports training on a single GPU, which can be a limiting factor when training models on large databases. It would be nice to be able to perform distributed training on multiple GPUs.

    Envisioned solution :bulb:: I am thinking of integrating FSDP into the library.

    enhancement feature request 
    opened by clementchadebec 0
  • Implementation of 3D MSSSIM

    As discussed in issue #68 , here is a PR for the 3D MSSSIM.

    I just re-adapted to Pythae the 3D MSSSIM code from the repository that you already used.

    Maybe this requires further tests; let me know what you think of it.

    Ravi

    opened by ravih18 0
  • MSSSIM VAE is not working with 3D inputs

    Hello @clementchadebec

    MSSSIM VAE model returns an error when using 3D images for training.

    Indeed, the MSSSIM implementation in benchmark_VAE/src/pythae/models/msssim_vae/msssim_vae_utils.py only works for 2D images.

    I found the following implementation that seems to work with 3D images: https://github.com/VainF/pytorch-msssim.

    I can make a PR to add it if you think it is a good idea.

    Otherwise I can see if I can generalize the current implementation to 3D images!

    Let me know what you think of it.

    Ravi

    feature request 
    opened by ravih18 1
  • Multimodality Data Training

    Hi, thanks for the integrated framework for VAE learning; it works well with a single-modality dataset. Now I want to perform multi-modality training with benchmark_VAE, but I cannot find any introduction on customizing the reconstruction loss, for instance computing the loss per modality and combining them into the final loss. Can you provide some ideas about how to build a custom reconstruction loss function?

    feature request 
    opened by JunweiLiu0208 3
Releases(v0.0.9)
  • v0.0.9(Oct 19, 2022)

    New features

    • Integration of comet_ml through the CometCallback training callback, further to #55

    Bugs fixed :bug:

    • Fix pickle5 compatibility with python>=3.8
    • update conda-forge feedstock with correct requirements (https://github.com/conda-forge/pythae-feedstock/pull/11)
    Source code(tar.gz)
    Source code(zip)
  • v.0.0.8(Sep 7, 2022)

    New Features:

    • Added MLFlowCallback in TrainingCallbacks further to #44
    • Allow custom Dataset inheriting from torch.utils.data.Dataset to be passed as inputs in the training_pipeline further to #35
    def __call__(
            self,
            train_data: Union[np.ndarray, torch.Tensor, torch.utils.data.Dataset],
            eval_data: Union[np.ndarray, torch.Tensor, torch.utils.data.Dataset] = None,
            callbacks: List[TrainingCallback] = None,
        ):
    
    • Added implementations of MIWAE, PIWAE and CIWAE (Multiply, Partially and Combination Importance Weighted Autoencoders) (https://arxiv.org/abs/1802.04537)

    Minor changes

    • Unify data handling in FactorVAE with other models (half of the batch is used for reconstruction and the other half for factorial representation)
    • Change model sanity check method in trainers (use loaders in check instead of datasets)
    • Add encoder/decoder losses needed in CoupledOptimizerTrainer and update tests
    Source code(tar.gz)
    Source code(zip)
  • v.0.0.7(Sep 3, 2022)

    New features

    • Added a PoincareVAE model and PoincareDiskSampler implementation following https://arxiv.org/abs/1901.06033

    Minor changes

    • Added VAE LSTM example
    • Added reproducibility reports
    Source code(tar.gz)
    Source code(zip)
  • v.0.0.6(Jul 22, 2022)

    New features

    • Added an interpolate method allowing linear interpolation between given inputs in the latent space of any pythae.models (further to #34)
    • Added a reconstruct method allowing given input data to be easily reconstructed with any pythae.models.
    Source code(tar.gz)
    Source code(zip)
  • v0.0.5(Jul 7, 2022)

  • v.0.0.3(Jul 5, 2022)

  • v.0.0.2(Jul 4, 2022)

    New features

    • Add a push_to_hf_hub method allowing pythae.models instances to be pushed to the HuggingFace Hub
    • Add a load_from_hf_hub method allowing pre-trained models to be downloaded from the Hub
    • Add tutorials (HF Hub saving and reloading and wandb callbacks)
    Source code(tar.gz)
    Source code(zip)
  • v.0.0.1(Jun 14, 2022)
