PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking

Overview

PoseRBPF tracks the 6D pose of objects in RGB-D or RGB video within a Rao-Blackwellized particle filtering framework: particles sample the 3D translation of the object, while for each particle the full distribution over the 3D rotation is computed analytically by comparing an auto-encoder embedding of the object crop against a precomputed codebook of discretized rotations.
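The per-frame update can be sketched as follows. This is an illustrative Python sketch of the Rao-Blackwellized update, not the repository's API; roi_embed_fn and codebook are hypothetical stand-ins for the auto-encoder and its precomputed rotation codebook.

    import numpy as np

    def poserbpf_step(particles, weights, roi_embed_fn, codebook, motion_std=0.01):
        # particles: (N, 3) translation hypotheses; weights: (N,) importance weights.
        # codebook: (K, D) embeddings of K discretized rotations (hypothetical inputs).
        n = len(particles)
        # Motion model: diffuse the translation hypotheses.
        particles = particles + np.random.normal(scale=motion_std, size=particles.shape)

        rot_dists = np.empty((n, len(codebook)))
        for i, t in enumerate(particles):
            z = roi_embed_fn(t)  # embed the image crop centered at translation t
            # Cosine similarity between the crop embedding and each rotation code.
            sims = codebook @ z / (np.linalg.norm(codebook, axis=1) * np.linalg.norm(z))
            likelihood = np.exp(sims / 0.05)   # temperature-scaled observation likelihood
            weights[i] *= likelihood.sum()     # particle weight: marginal over rotations
            rot_dists[i] = likelihood / likelihood.sum()  # closed-form rotation posterior

        weights = weights / weights.sum()
        # Resample translations when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n / 2.0:
            idx = np.random.choice(n, size=n, p=weights)
            particles, rot_dists = particles[idx], rot_dists[idx]
            weights = np.full(n, 1.0 / n)
        return particles, weights, rot_dists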

Citing PoseRBPF

If you find the PoseRBPF code useful, please consider citing:

@inproceedings{deng2019pose,
author    = {Xinke Deng and Arsalan Mousavian and Yu Xiang and Fei Xia and Timothy Bretl and Dieter Fox},
title     = {PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking},
booktitle = {Robotics: Science and Systems (RSS)},
year      = {2019}
}
@inproceedings{deng2020self,
author    = {Xinke Deng and Yu Xiang and Arsalan Mousavian and Clemens Eppner and Timothy Bretl and Dieter Fox},
title     = {Self-supervised 6D Object Pose Estimation for Robot Manipulation},
booktitle = {International Conference on Robotics and Automation (ICRA)},
year      = {2020}
}

Installation

git clone https://github.com/NVlabs/PoseRBPF.git --recursive

Install dependencies:

  • Install Anaconda according to the official website.
  • Create the virtual environment from pose_rbpf_env.yml:
conda env create -f pose_rbpf_env.yml
conda activate pose_rbpf_env
  • Compile the YCB Renderer according to its instructions.
  • Compile the utility functions with:
sh build.sh
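
After these steps, a quick sanity check that the environment is usable (assumes a CUDA-capable GPU; PyTorch comes with the conda environment):

    # Verify that PyTorch loads and sees the GPU.
    import torch
    print(torch.__version__, torch.cuda.is_available())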

Download

Download files as needed. Extract CAD models into the cad_models directory and model weights into the checkpoints directory.
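
For example, with tarfile from the Python standard library (the archive names below are placeholders; substitute the files you actually downloaded):

    # Hypothetical archive names -- adjust to your downloads.
    import tarfile

    for archive, dest in [("ycb_models.tar.gz", "cad_models"),
                          ("ycb_ckpts_roi_rgbd.tar.gz", "checkpoints")]:
        with tarfile.open(archive) as tf:
            tf.extractall(dest)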

A quick demo on the YCB Video Dataset


  • The demo shows tracking of 003_cracker_box on the YCB Video Dataset.
  • Run the script download_demo.sh to download the checkpoint (434 MB), CAD models (743 MB), 2D detections (13 MB), and the data needed for the demo (3 GB):
./scripts/download_demo.sh
  • Then your files should be organized as follows:
├── ...
├── PoseRBPF
|   |── cad_models
|   |   |── ycb_models
|   |   └── ...
|   |── checkpoints
|   |   |── ycb_ckpts_roi_rgbd
|   |   |── ycb_codebooks_roi_rgbd
|   |   |── ycb_configs_roi_rgbd
|   |   └── ... 
|   |── detections
|   |   |── posecnn_detections
|   |   |── tless_retina_detections 
|   |── config                      # configuration files for training and DPF
|   |── networks                    # auto-encoder networks
|   |── pose_rbpf                   # particle filters
|   └── ...
|── YCB_Video_Dataset               # to store ycb data
|   |── cameras
|   |── data 
|   |── image_sets 
|   |── keyframes 
|   |── poses
|   └── ...
└── ...
  • Run the demo with 003_cracker_box. The results will be stored in ./results/:
./scripts/run_demo.sh

Online Real-world Pose Estimation using ROS


  • Because ROS Kinetic is incompatible with Python 3, the ROS node runs only with Python 2.7. First create the virtual environment from pose_rbpf_env_py2.yml:
conda env create -f pose_rbpf_env_py2.yml
conda activate pose_rbpf_env_py2
  • Compile the YCB Renderer according to its instructions.
  • Compile the utility functions with:
sh build.sh
  • Make sure you can run the demo above first.
  • Install ROS Kinetic if it is not already installed:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
  • Update Python packages:
conda install -c auto catkin_pkg
pip install -U rosdep rosinstall_generator wstool rosinstall six vcstools
pip install msgpack
pip install empy
  • Source ROS (every time before launching the node):
source /opt/ros/kinetic/setup.bash
  • Initialize rosdep:
sudo rosdep init
rosdep update
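
To confirm the Python 2 environment can see ROS (run after sourcing setup.bash; this is just a sanity check, not part of the pipeline):

    # Should print a path under /opt/ros/kinetic if sourcing worked.
    import rospy
    print(rospy.__file__)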

Single-object tracking demo:

  • Download demo rosbag:
./scripts/download_ros_demo.sh
  • Run the PoseCNN node (with roscore running in another terminal; download the PoseCNN weights first):
./scripts/run_ros_demo_posecnn.sh
  • Run the PoseRBPF node for RGB-D tracking (with roscore running in another terminal):
./scripts/run_ros_demo.sh
  • (Optional) For RGB tracking, run this command instead:
./scripts/run_ros_demo_rgb.sh
  • Run RVIZ in the PoseRBPF directory:
rosrun rviz rviz -d ./ros/tracking.rviz
  • Once you see *** PoseRBPF Ready ... in the PoseRBPF terminal, play the rosbag in another terminal, and you should see the tracking demo:
rosbag play ./ros_data/demo_single.bag
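
Optionally, you can inspect the bag's topics before playback with the rosbag Python API (a small sketch; run it in the ROS-sourced environment):

    # List the topics, message types, and message counts in the demo bag.
    import rosbag

    with rosbag.Bag("./ros_data/demo_single.bag") as bag:
        for topic, meta in bag.get_type_and_topic_info().topics.items():
            print(topic, meta.msg_type, meta.message_count)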

Multiple-object tracking demo:

  • Download demo rosbag:
./scripts/download_ros_demo_multiple.sh
  • Run the PoseCNN node (with roscore running in another terminal; download the PoseCNN weights first):
./scripts/run_ros_demo_posecnn.sh
  • Run the PoseRBPF node with RGB auto-encoder weights from self-supervised training:
./scripts/run_ros_demo_rgb_multiple_ssv.sh
  • (Optional) Run the PoseRBPF node with RGB-D auto-encoder weights instead:
./scripts/run_ros_demo_multiple.sh
  • (Optional) Run the PoseRBPF node with RGB auto-encoder weights instead:
./scripts/run_ros_demo_rgb_multiple.sh
  • Run RVIZ in the PoseRBPF directory:
rosrun rviz rviz -d ./ros/tracking.rviz
  • Once you see *** PoseRBPF Ready ... in the PoseRBPF terminal, play the rosbag in another terminal, and you should see the tracking demo:
rosbag play ./ros_data/demo_multiple.bag

Note that PoseRBPF takes some time to initialize each object before tracking. You can pause the ROS bag by pressing the space bar during initialization and press it again to resume playback (or start playback already paused with rosbag play --pause).

Testing on the YCB Video Dataset

  • Download checkpoints from the Google Drive folder (ycb_rgbd_full.tar.gz or ycb_rgb_full.tar.gz) and extract them into the checkpoints directory.
  • Download all the data in the YCB Video Dataset so the ../YCB_Video_Dataset/data folder contains all the sequences.
  • Run RGB-D tracking (use 002_master_chef_can as an example here):
sh scripts/test_ycb_rgbd/val_ycb_002_rgbd.sh 0 1
  • Run RGB tracking (use 002_master_chef_can as an example here):
sh scripts/test_ycb_rgb/val_ycb_002_rgb.sh 0 1
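
To evaluate all objects rather than a single one, a loop over the per-object scripts works. This is a sketch assuming the 0 1 arguments from the examples above; the same pattern applies to the RGB and T-LESS scripts:

    # Run every per-object RGB-D test script in sequence.
    import glob, subprocess

    for script in sorted(glob.glob("scripts/test_ycb_rgbd/val_ycb_*_rgbd.sh")):
        subprocess.run(["sh", script, "0", "1"], check=True)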

Testing on the T-LESS Dataset

  • Download checkpoints from the Google Drive folder (tless_rgbd_full.tar.gz or tless_rgb_full.tar.gz) and extract them into the checkpoints directory.
  • Download all the data in the T-LESS Dataset so the ../TLess/ folder contains all the sequences.
  • Download all the CAD models for the T-LESS objects from the Google Drive folder.
  • Then your files should be organized as follows:
├── ...
├── PoseRBPF
|   |── cad_models
|   |   |── ycb_models
|   |   |── tless_models
|   |   └── ...
|   |── checkpoints
|   |   |── tless_ckpts_roi_rgbd
|   |   |── tless_codebooks_roi_rgbd
|   |   |── tless_configs_roi_rgbd
|   |   └── ... 
|   |── detections
|   |   |── posecnn_detections
|   |   |── tless_retina_detections 
|   |── config                      # configuration files for training and DPF
|   |── networks                    # auto-encoder networks
|   |── pose_rbpf                   # particle filters
|   └── ...
|── YCB_Video_Dataset               # to store ycb data
|   |── cameras  
|   |── data 
|   |── image_sets 
|   |── keyframes 
|   |── poses               
|   └── ...   
|── TLess               # to store tless data
|   |── t-less_v2 
|   |   |── test_primesense
|   |   └── ... 
|   └── ...        
└── ...
  • Run RGB-D tracking (use obj_01 as an example here):
sh scripts/test_tless_rgbd/val_tless_01_rgbd.sh 0 1
  • Run RGB tracking (use obj_01 as an example here):
sh scripts/test_tless_rgb/val_tless_01_rgb.sh 0 1

Testing on the DexYCB Dataset

  • Download checkpoints from the Google Drive folder (ycb_rgbd_full.tar.gz or ycb_rgb_full.tar.gz) and extract them into the checkpoints directory.

  • Download the DexYCB dataset from here.

  • Download PoseCNN results on the DexYCB dataset from here.

  • Create symlinks for the DexYCB dataset and the PoseCNN results (a sanity check follows this list):

    cd $ROOT/data/DEX_YCB
    ln -s $dex_ycb_data data
    ln -s $results_posecnn_data results_posecnn
  • Install PyTorch PoseCNN layers according to the instructions here.

  • Run RGB-D tracking:

    ./scripts/test_dex_rgbd/dex_ycb_test_rgbd_s0.sh $GPU_ID
    
  • Run RGB tracking:

    ./scripts/test_dex_rgb/dex_ycb_test_rgb_s0.sh $GPU_ID
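
A quick check, run from $ROOT, that the symlinks resolve (paths follow the commands above):

    # Print where each DexYCB symlink points and whether the target exists.
    import os

    for link in ["data/DEX_YCB/data", "data/DEX_YCB/results_posecnn"]:
        print(link, "->", os.path.realpath(link), os.path.exists(link))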
    

Training

  • Download the Microsoft COCO 2017 val images from here for data augmentation.
  • Place the val2017 folder under ../coco/.
  • Run the training example for the YCB object 002_master_chef_can. Training should run on a single NVIDIA TITAN Xp GPU:
sh scripts/train_ycb_rgbd/train_script_ycb_002.sh
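
To confirm the augmentation images are in place (COCO val2017 contains 5,000 images):

    # Expect 5000 entries under ../coco/val2017.
    import os
    print(len(os.listdir("../coco/val2017")))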

Acknowledgements

Part of the RoI align code is adapted from maskrcnn-benchmark.

License

PoseRBPF is licensed under the NVIDIA Source Code License - Non-commercial.
