SMPLpix: Neural Avatars from 3D Human Models

subject0_validation_poses.mp4

Left: SMPL-X human mesh registered with SMPLify-X, middle: SMPLpix render, right: ground truth video.

The SMPLpix neural rendering framework combines deformable 3D models such as SMPL-X with the power of image-to-image translation frameworks (a.k.a. pix2pix models).

Please check our WACV 2021 paper or a 5-minute explanatory video for more details on the framework.

Important note: this repository is a re-implementation of the original framework, made by the same author after the end of the internship. It does not contain the original Amazon multi-subject, multi-view training data or code, and it uses full mesh rasterizations as inputs rather than point projections (as described here).
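
For illustration, a full mesh rasterization of this kind can be produced with an off-screen renderer such as pyrender. The snippet below is a minimal sketch under that assumption (trimesh and pyrender installed, 'mesh.obj' standing in for an exported SMPL-X mesh); it is not the preprocessing code used in this repository.

# Minimal sketch: rasterize an exported (SMPL-X) mesh to an RGB image of the
# kind used as network input. Illustration only, not this repository's code;
# assumes trimesh and pyrender are installed, 'mesh.obj' is a placeholder.
import numpy as np
import trimesh
import pyrender

tm = trimesh.load('mesh.obj')
scene = pyrender.Scene(bg_color=[0.0, 0.0, 0.0])
scene.add(pyrender.Mesh.from_trimesh(tm))

# a simple camera and light, 2.5 m in front of the mesh along +z
pose = np.eye(4)
pose[2, 3] = 2.5
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=pose)
scene.add(pyrender.DirectionalLight(intensity=3.0), pose=pose)

renderer = pyrender.OffscreenRenderer(viewport_width=512, viewport_height=512)
color, depth = renderer.render(scene)  # color: 512x512x3 uint8 rasterization
renderer.delete()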

Demo

Description                               Link
Process a video into a SMPLpix dataset    Open In Colab
Train SMPLpix                             Open In Colab

Prepare the data

demo_openpose_simplifyx

We provide a Colab notebook for preparing a SMPLpix training dataset. This will allow you to create your own neural avatar given a monocular video of a human moving in front of the camera.
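
The notebook covers the full pipeline (presumably frame extraction followed by OpenPose keypoint detection and SMPLify-X fitting, as suggested above). As a rough sketch of just the first step, frames can be extracted from a monocular video with OpenCV; the paths below are placeholders.

# Minimal sketch: dump the frames of a monocular video to disk with OpenCV.
# Paths are placeholders; the Colab notebook performs the actual preprocessing.
import os
import cv2

video_path = 'input_video.mp4'
frames_dir = 'frames'
os.makedirs(frames_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(frames_dir, '%06d.png' % frame_idx), frame)
    frame_idx += 1
cap.release()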

Run demo training

We provide some preprocessed data which allows you to run and test the training pipeline right away:

git clone https://github.com/sergeyprokudin/smplpix
cd smplpix
python setup.py install
python smplpix/train.py --workdir='/content/smplpix_logs/' \
                        --data_url='https://www.dropbox.com/s/coapl05ahqalh09/smplpix_data_test_final.zip?dl=0'

Train on your own data

You can train SMPLpix on your own data by specifying the path to the root data directory:

python smplpix/train.py --workdir='/content/smplpix_logs/' \
                        --data_dir='/path/to/data'

The directory should contain train, validation and test folders, each of which should contain input and output folders. Check the structure of the demo dataset for reference.
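
As a small sanity check, this layout can be verified with a few lines of Python; the helper below is hypothetical (not part of the repository), and '/path/to/data' is the same placeholder as above.

# Hypothetical helper (not part of the repository): check that a data root
# follows the <data_dir>/{train,validation,test}/{input,output} layout.
import os

def check_smplpix_layout(data_dir):
    for split in ('train', 'validation', 'test'):
        for subdir in ('input', 'output'):
            path = os.path.join(data_dir, split, subdir)
            if not os.path.isdir(path):
                raise FileNotFoundError('missing folder: %s' % path)
    print('layout looks fine: %s' % data_dir)

check_smplpix_layout('/path/to/data')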

You can also specify various training parameters via the command line. For example, to reproduce the results of the demo video:

python smplpix/train.py --workdir='/content/smplpix_logs/' \
                        --data_url='https://www.dropbox.com/s/coapl05ahqalh09/smplpix_data_test_final.zip?dl=0' \
                        --downsample_factor=2 \
                        --n_epochs=500 \
                        --sched_patience=2 \
                        --batch_size=4 \
                        --n_unet_blocks=5 \
                        --n_input_channels=3 \
                        --n_output_channels=3 \
                        --eval_every_nth_epoch=10

Check args.py for the full list of parameters.
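
To give a rough sense of what parameters such as --n_unet_blocks, --n_input_channels and --n_output_channels control, here is a heavily simplified, hypothetical UNet-style generator in PyTorch. It only sketches the general pix2pix-style encoder-decoder idea and is not the network implemented in this repository.

# Hypothetical illustration only: a heavily simplified UNet-style generator,
# loosely mirroring what --n_unet_blocks, --n_input_channels and
# --n_output_channels might control. Not the network defined in this repo.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, n_blocks=5, in_ch=3, out_ch=3, base_ch=32):
        super().__init__()
        enc_chs = [base_ch * 2 ** i for i in range(n_blocks)]
        self.downs = nn.ModuleList()
        prev = in_ch
        for c in enc_chs:
            # each encoder block halves the spatial resolution
            self.downs.append(nn.Sequential(
                nn.Conv2d(prev, c, 4, stride=2, padding=1), nn.ReLU(inplace=True)))
            prev = c
        self.ups = nn.ModuleList()
        for i in reversed(range(n_blocks)):
            out = enc_chs[i - 1] if i > 0 else base_ch
            skip = enc_chs[i - 1] if i > 0 else 0
            # each decoder block doubles the resolution; skip features are
            # concatenated in forward(), hence the `out + skip` input channels
            self.ups.append(nn.Sequential(
                nn.ConvTranspose2d(prev, out, 4, stride=2, padding=1), nn.ReLU(inplace=True)))
            prev = out + skip
        self.final = nn.Conv2d(prev, out_ch, 3, padding=1)

    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)
        skips = skips[:-1]  # the deepest feature map is not reused as a skip
        for up in self.ups:
            x = up(x)
            if skips:
                x = torch.cat([x, skips.pop()], dim=1)
        return torch.sigmoid(self.final(x))

# input resolution must be divisible by 2 ** n_blocks (e.g. 256 for 5 blocks)
net = TinyUNet(n_blocks=5, in_ch=3, out_ch=3)
print(net(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 3, 256, 256])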

More examples

Animating with novel poses

subject0_test_poses.mp4

Left: poses from the test video sequence, right: SMPLpix renders.

Rendering faces

deca_smplpix_test_renders.mp4

Left: FLAME face model inferred with DECA, middle: ground truth test video, right: SMPLpix render.

Thanks to Maria Paola Forte for providing the sequence.

Few-shot artistic neural style transfer

kabarov_animations.mp4

Left: rendered AMASS motion sequence, right: generated SMPLpix animations. See the explanatory video for details.

Credits to Alexander Kabarov for providing the training sketches.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{prokudin2021smplpix,
  title={SMPLpix: Neural Avatars from 3D Human Models},
  author={Prokudin, Sergey and Black, Michael J and Romero, Javier},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1810--1819},
  year={2021}
}

License

See the LICENSE file.
