template-pose

Overview

PyTorch implementation of "Templates for 3D Object Pose Estimation Revisited: Generalization to New objects and Robustness to Occlusions" (accepted to CVPR 2022).

Van Nguyen Nguyen, Yinlin Hu, Yang Xiao, Mathieu Salzmann and Vincent Lepetit

Check out our paper and webpage for details!

Figure: overview of our method (figures/method.png)

If our project is helpful for your research, please consider citing:

@inproceedings{nguyen2022template,
    title={Templates for 3D Object Pose Estimation Revisited: Generalization to New objects and Robustness to Occlusions},
    author={Nguyen, Van Nguyen and Hu, Yinlin and Xiao, Yang and Salzmann, Mathieu and Lepetit, Vincent},
    booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}}

Table of Contents

Methodology 🧑‍🎓

We introduce template-pose, which estimates the 3D pose of new objects (which can be very different from the training objects, e.g., those of the LINEMOD dataset) given only their 3D models. Our method requires neither a training phase on these objects nor images depicting them.

Two settings are considered in this work:

Dataset               Predicts object ID   Predicts in-plane rotation
(Occlusion-)LINEMOD   Yes                  No
T-LESS                No                   Yes
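
To make the idea concrete, here is a minimal, illustrative sketch of template retrieval by nearest-neighbor search in feature space: the 3D model is rendered from many viewpoints, each rendering and the query crop are embedded with the same network, and the best-matching template gives the predicted viewpoint. This is not the repository's actual pipeline; the backbone, normalization and similarity measure below are assumptions made only for illustration.

import torch
import torch.nn.functional as F
import torchvision

# Illustrative backbone only: a generic ImageNet ResNet-50
# (the actual repository uses its own trained networks).
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()   # keep 2048-d global features
backbone.eval()

@torch.no_grad()
def embed(images):
    # images: (N, 3, H, W) tensor, already resized and normalized
    return F.normalize(backbone(images), dim=1)

def retrieve_pose(query, templates, template_poses):
    # query: (1, 3, H, W) crop of the test image
    # templates: (T, 3, H, W) renderings of the 3D model from known viewpoints
    # template_poses: list of the T corresponding poses
    sim = embed(query) @ embed(templates).T   # (1, T) cosine similarities
    best = int(sim.argmax(dim=1))             # index of the nearest template
    return template_poses[best], float(sim[0, best])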

Installation 👨‍🔧

We recommend creating a new Anaconda environment to use template-pose. Use the following commands to set up a new environment:

conda env create -f environment.yml
conda activate template

Optional: Installing BlenderProc is required to render synthetic images. It can be skipped if you use our provided templates. More details can be found in Datasets.

Datasets 😺 🔌

Before downloading the datasets, you may want to change this line to define the $ROOT folder (where data and results are stored).

There are two options:

1. Download our pre-processed datasets (15GB) plus the SUN397 dataset (37GB):

./data/download_preprocessed_data.sh

Optional: You can also download the data manually via the Google Drive links and unzip the archives yourself. We recommend keeping the $DATA folder structure detailed in ./data/README to keep the pipeline simple.

2. Download the original datasets and process them from scratch (process GT poses, render templates, compute nearest neighbors). All the main steps are detailed in ./data/README:

./data/download_and_process_from_scratch.sh

For any training with a ResNet-50 backbone, we initialize with pretrained MoCo v2 features, which can be downloaded with the following command:

python -m lib.download_weight --model_name MoCov2
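
For reference, official MoCo v2 checkpoints store the backbone weights under the module.encoder_q. prefix; a minimal sketch of loading such a checkpoint into a torchvision ResNet-50 is shown below. The checkpoint filename is an assumption, and the repository's own loading code may differ.

import torch
import torchvision

# Assumed filename of the downloaded MoCo v2 checkpoint; adjust it to
# wherever lib.download_weight actually stores the file.
ckpt = torch.load("moco_v2_800ep_pretrain.pth.tar", map_location="cpu")

state_dict = ckpt["state_dict"]
backbone_state = {}
for k, v in state_dict.items():
    # Keep only the query encoder and drop the MLP projection head.
    if k.startswith("module.encoder_q.") and not k.startswith("module.encoder_q.fc"):
        backbone_state[k[len("module.encoder_q."):]] = v

model = torchvision.models.resnet50()
missing, unexpected = model.load_state_dict(backbone_state, strict=False)
# Only the final fc layer should be reported as missing.
print(missing, unexpected)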

T-LESS 🔌

1. To launch training on T-LESS:

python train_tless.py --config_path ./config_run/TLESS.json

2. To reproduce the results on T-LESS:

To download the pretrained weights (by default, they are saved to $ROOT/pretrained/TLESS.pth):

python -m lib.download_weight --model_name TLESS

Optional: You can download manually with this link

To evaluate the model with the pretrained weights:

python test_tless.py --config_path ./config_run/TLESS.json --checkpoint $ROOT/pretrained/TLESS.pth
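
For context, retrieval-based pose estimates of this kind are usually scored by the angular (geodesic) distance between the predicted and ground-truth rotations. The snippet below is an illustrative version of that error computation, not necessarily the exact metric implemented in test_tless.py.

import numpy as np

def rotation_error_deg(R_pred, R_gt):
    # Geodesic distance (in degrees) between two 3x3 rotation matrices.
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    cos = np.clip(cos, -1.0, 1.0)  # guard against numerical drift
    return np.degrees(np.arccos(cos))

# Example: a 30-degree rotation about the z-axis vs. the identity
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
print(rotation_error_deg(Rz, np.eye(3)))  # ~30.0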

LINEMOD and Occlusion-LINEMOD 😺

1. To launch training on LINEMOD:

python train_linemod.py --config_path config_run/LM_$backbone_$split_name.json

For example, with the "base" backbone and split #1:

python train_linemod.py --config_path config_run/LM_baseNetwork_split1.json

2. To reproduce the results on LINEMOD:

To download the pretrained weights (by default, they are saved to $ROOT/pretrained):

python -m lib.download_weight --model_name LM_$backbone_$split_name

Optional: You can download manually with this link

To evaluate a model with a given checkpoint_path:

python test_linemod.py --config_path config_run/LM_$backbone_$split_name.json --checkpoint checkpoint_path

For example, with the "base" backbone and split #1:

python -m lib.download_weight --model_name LM_baseNetwork_split1
python test_linemod.py --config_path config_run/LM_baseNetwork_split1.json --checkpoint $ROOT/pretrained/LM_baseNetwork_split1.pth

Acknowledgement

The code is adapted from PoseContrast, DTI-Clustering, CosyPose and BOP Toolkit. Many thanks to them!

The authors thank Martin Sundermeyer, Paul Wohlhart and Shreyas Hampali for their quick replies and feedback!

Contact

If you have any questions, feel free to create an issue or contact the first author at [email protected]
