UNICORN đŸĻ„

Webpage | Paper | BibTex

(teaser animations: car.gif, bird.gif, moto.gif)

PyTorch implementation of the "Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency" paper. Check out our webpage for details!

If you find this code useful, don't forget to star the repo ⭐ and cite the paper:

@article{monnier2022unicorn,
  title={{Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency}},
  author={Monnier, Tom and Fisher, Matthew and Efros, Alexei A and Aubry, Mathieu},
  journal={arXiv:2204.10310 [cs]},
  year={2022},
}

Installation 👷

1. Create conda environment 🔧

conda env create -f environment.yml
conda activate unicorn
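
To quickly sanity-check the environment, you can verify that PyTorch is installed and CUDA is visible (this check is purely illustrative and not part of the official setup):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"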

Optional: some monitoring routines are implemented; you can enable them by specifying your visdom port in the config file. You will need to install visdom from source beforehand:

git clone https://github.com/facebookresearch/visdom
cd visdom && pip install -e .
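
Once visdom is installed, a typical way to use it is to start a visdom server and set the matching port in your config; 8097 below is visdom's default port and only an example:

python -m visdom.server -port 8097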

2. Download datasets âŦ‡ī¸

bash scripts/download_data.sh

This command lets you download one of the supported datasets, i.e. those used by the configs listed below: ShapeNet (per-category renderings), CompCars, CUB-200, LSUN Horse, LSUN Motorbike, and Pascal3D+ Car.

3. Download pretrained models âŦ‡ī¸

bash scripts/download_model.sh

This command lets you download one of the pretrained models, e.g. the car.pkl model used in the demo below.

NB: gdown may occasionally hang; if so, you can download the files manually from the gdrive links and move them to the models folder.
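
If you prefer the command line, here is a minimal fallback sketch; <GDRIVE_FILE_ID> is a placeholder for the id found in the gdrive link (e.g. inside scripts/download_model.sh):

mkdir -p models
gdown "https://drive.google.com/uc?id=<GDRIVE_FILE_ID>" -O models/car.pkl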

How to use 🚀

1. 3D reconstruction of car images 🚘

(demo: ex_car.png input and ex_rec.gif reconstruction)

You first need to download the car model (see above), then launch:

cuda=gpu_id model=car.pkl input=demo ./scripts/reconstruct.sh

where:

  • gpu_id is a target cuda device id,
  • car.pkl corresponds to a pretrained model,
  • demo is a folder containing the target images.

It will create a folder demo_rec containing the reconstructed meshes (.obj format + gif visualizations).
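
For example, on a machine whose first GPU is device 0, the demo call and a quick check of the outputs could look like this (the file listing is only indicative):

# reconstruct every image in the demo folder with the pretrained car model
cuda=0 model=car.pkl input=demo ./scripts/reconstruct.sh
ls demo_rec/    # .obj meshes and .gif visualizations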

2. Reproduce our results 📊

(results animation: shapenet.gif)

To launch a training from scratch, run:

cuda=gpu_id config=filename.yml tag=run_tag ./scripts/pipeline.sh

where:

  • gpu_id is a target cuda device id,
  • filename.yml is a YAML config located in configs folder,
  • run_tag is a tag for the experiment.

Results are saved at runs/${DATASET}/${DATE}_${run_tag}, where DATASET is the dataset name specified in filename.yml and DATE is the current date in mmdd format. Visual training results, such as reconstruction examples, are also saved there. Available configs are listed below (a concrete launch example follows the list):

  • sn/*.yml for each ShapeNet category
  • car.yml for CompCars dataset
  • cub.yml for CUB-200 dataset
  • horse.yml for LSUN Horse dataset
  • moto.yml for LSUN Motorbike dataset
  • p3d_car.yml for Pascal3D+ Car dataset

3. Train on a custom dataset 🔮

If you want to learn a model for a custom object category, here are the key things you need to do (a worked sketch follows the list):

  1. put your images in a custom_name folder inside the datasets folder
  2. write a config custom.yml with custom_name as dataset.name and move it to the configs folder; as a rule of thumb for the progressive conditioning milestones, use the number of epochs corresponding to 500k iterations for each stage
  3. launch training with:
cuda=gpu_id config=custom.yml tag=custom_run_tag ./scripts/pipeline.sh
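
A hypothetical end-to-end sketch for a made-up mugs category (all names are placeholders; copying an existing config is just one convenient starting point, and you still need to edit it to set dataset.name and the milestones):

# 1. put the images in a dedicated folder inside datasets
mkdir -p datasets/mugs && cp /path/to/mug_images/*.jpg datasets/mugs/
# 2. create configs/mugs.yml from an existing config, then set dataset.name to mugs
cp configs/car.yml configs/mugs.yml
# 3. launch training
cuda=0 config=mugs.yml tag=mugs_run ./scripts/pipeline.sh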

Further information 📚

If you like this project, check out related works from our group:
