GPU Accelerated Non-rigid ICP for surface registration

Overview

A PyTorch implementation of GPU-accelerated non-rigid ICP (NICP) for registering a template mesh to a target mesh or point cloud.

Introduction

Previous non-rigid ICP algorithms are usually implemented on the CPU and need to solve a sparse least-squares problem, which is time consuming. In this repo, we implement a PyTorch version of the NICP algorithm based on the paper by Amberg et al. Specifically, we use AMSGrad to optimize the linear regression and find nearest points iteratively. Additionally, we smooth the resulting mesh with a Laplacian smoothness term, which also keeps the wireframe neat.
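
The core loop can be pictured as follows. This is a minimal sketch, not the repo's exact code: it assumes source_verts (N, 3) and target_verts (M, 3) CUDA tensors plus a faces (F, 3) index tensor, optimizes per-vertex 3x4 affine transforms with AMSGrad, re-queries nearest neighbours each step, and adds a Laplacian smoothness term from pytorch3d; the stiffness term here is a simplified stand-in for the graph-based stiffness of Amberg et al.

import torch
from pytorch3d.ops import knn_points
from pytorch3d.structures import Meshes
from pytorch3d.loss import mesh_laplacian_smoothing

def nicp_sketch(source_verts, faces, target_verts, stiffness=1.0, steps=50, lr=1e-3):
    # Per-vertex 3x4 affine transforms, initialized to identity.
    N = source_verts.shape[0]
    X = torch.eye(3, 4, device=source_verts.device).repeat(N, 1, 1).requires_grad_(True)
    optim = torch.optim.Adam([X], lr=lr, amsgrad=True)  # AMSGrad variant of Adam
    homo = torch.cat([source_verts, torch.ones(N, 1, device=source_verts.device)], dim=1)
    for _ in range(steps):
        warped = torch.bmm(X, homo.unsqueeze(2)).squeeze(2)  # (N, 3) deformed vertices
        # Data term: distance to the nearest target point for every warped vertex.
        knn = knn_points(warped.unsqueeze(0), target_verts.unsqueeze(0), K=1)
        data_term = knn.dists.mean()
        # Simplified stiffness proxy: keep per-vertex transforms close to each other.
        stiffness_term = ((X - X.mean(dim=0, keepdim=True)) ** 2).mean()
        # Laplacian smoothness on the deformed mesh keeps the wireframe neat.
        smooth_term = mesh_laplacian_smoothing(Meshes(verts=[warped], faces=[faces]), method="uniform")
        loss = data_term + stiffness * stiffness_term + 0.01 * smooth_term
        optim.zero_grad()
        loss.backward()
        optim.step()
    return torch.bmm(X.detach(), homo.unsqueeze(2)).squeeze(2)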


Quick Start

Install

We use Python 3.8 and CUDA 10.2. The code is tested on Ubuntu 20.04.

  • pytorch3d cannot be installed directly with pip install pytorch3d; for installation instructions, see pytorch3d.
  • For other packages, run
pip install -r requirements.txt
  • For the template face model, we currently use a processed version of the BFM face model from 3DMMfitting-pytorch: download BFM09_model_info.mat from 3DMMfitting-pytorch and put it in the ./BFM folder.
  • For the demo, run
python demo_nicp.py

We show demos for NICP mesh2mesh and NICP mesh2pointcloud registration. We provide two parameter sets (a usage sketch follows the second set):

milestones = set([50, 80, 100, 110, 120, 130, 140])
stiffness_weights = np.array([50, 20, 5, 2, 0.8, 0.5, 0.35, 0.2])
landmark_weights = np.array([5, 2, 0.5, 0, 0, 0, 0, 0])

This parameter set is used for registration on fine-grained meshes.

milestones = set([50, 100])
stiffness_weights = np.array([50, 20, 5])
landmark_weights = np.array([50, 20, 5])

This parameter set is used for registration on noisy point clouds.
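
The registration entry point lives in demo_nicp.py; the call below uses a hypothetical run_registration name purely to illustrate how a preset is selected. Note that each set has one more weight than milestones: the weights appear to step to the next value every time the iteration count crosses a milestone.

import numpy as np

# Preset for fine-grained meshes.
FINE_MESH_PRESET = dict(
    milestones=set([50, 80, 100, 110, 120, 130, 140]),
    stiffness_weights=np.array([50, 20, 5, 2, 0.8, 0.5, 0.35, 0.2]),
    landmark_weights=np.array([5, 2, 0.5, 0, 0, 0, 0, 0]),
)

# Preset for noisy point clouds.
NOISY_POINTCLOUD_PRESET = dict(
    milestones=set([50, 100]),
    stiffness_weights=np.array([50, 20, 5]),
    landmark_weights=np.array([50, 20, 5]),
)

# Hypothetical call; see demo_nicp.py for the actual function and arguments.
# registered_mesh = run_registration(template_mesh, target, **FINE_MESH_PRESET)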

Template Model

You can also use your own template face model with manually specified landmarks, as sketched below.
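
A minimal sketch of what that might look like, assuming an OBJ template and a 68-point landmark scheme; the file name, variable names, and the way they are passed into the registration are placeholders, and the shipped BFM loader in bfm_model.py shows the format the repo actually uses.

import torch
from pytorch3d.io import load_objs_as_meshes

device = torch.device("cuda:0")

# Load your own template mesh (hypothetical file name).
template_mesh = load_objs_as_meshes(["my_template_face.obj"], device=device)

# Manually picked template vertex indices for your landmark set
# (placeholder values; replace with the full set chosen in a mesh viewer).
landmark_ids = [1203, 884, 567]
template_lm_index = torch.tensor(landmark_ids, device=device).unsqueeze(0)  # (1, L)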

Todo

We have written some batch-wise functions, but batch-wise NICP is not supported yet. We will support batch NICP in future releases.

Comments
  • Lack of file “BFM09_model_info.mat”

    Traceback (most recent call last):
      File "demo_nicp.py", line 28, in <module>
        bfm_meshes, bfm_lm_index = load_bfm_model(torch.device('cuda:0'))
      File "/data/pytorch-nicp/bfm_model.py", line 15, in load_bfm_model
        bfm_meta_data = loadmat('BFM/BFM09_model_info.mat')
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 224, in loadmat
        with _open_file_context(file_name, appendmat) as f:
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/contextlib.py", line 113, in __enter__
        return next(self.gen)
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 17, in _open_file_context
        f, opened = _open_file(file_like, appendmat, mode)
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 45, in _open_file
        return open(file_like, mode), True
    FileNotFoundError: [Errno 2] No such file or directory: 'BFM/BFM09_model_info.mat'

    In 3DMMfitting-pytorch, there are only these files: BFM_exp_idx.mat, BFM_front_idx.mat, facemodel_info.mat, README.md, select_vertex_id.mat, similarity_Lm3D_all.mat, std_exp.txt

    opened by 675492062 2
  • What is the expected time needed for running demo_nicp.py?

    Hello,

    On my computer it seems quite slow to run demo_nicp.py. It took more than 1 minute to get final.obj. Is that correct?

    I ran AMM_NRR for non-rigid ICP registration with two 7000-vertex meshes. It needs about 1 second on CPU on my computer. With a GPU, might it be possible to do the same work in less than 100 ms?

    Thank you!

    opened by 1939938853 0
  • Hi, with landmarks: `landmarks = torch.from_numpy(np.array(landmarks)).to(device).long()`, maybe you can  reshape landmarks from torch.Size([1, 1, 68, 2]) to  torch.Size([1, 68, 2])

    Hi, with landmarks: landmarks = torch.from_numpy(np.array(landmarks)).to(device).long(), maybe you can reshape landmarks from torch.Size([1, 1, 68, 2]) to torch.Size([1, 68, 2])

    Originally posted by @wuhaozhe in https://github.com/wuhaozhe/pytorch-nicp/issues/3#issuecomment-971453681

    Hi! I got outputs of torch.Size([1, 68, 512, 3]), torch.Size([1, 68, 2]), and torch.Size([1, 512, 512, 3]). I think the shapes of these tensors are right, but I meet the same problem:

    lm_vertex = torch.gather(lm_vertex, 2, column_index)
    RuntimeError: CUDA error: device-side assert triggered

    landmarks = torch.from_numpy(np.array(landmarks)).to(device).long()
    
    row_index = landmarks[:, :, 1].view(landmarks.shape[0], -1)
    column_index = landmarks[:, :, 0].view(landmarks.shape[0], -1)
    row_index = row_index.unsqueeze(2).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], shape_img.shape[2], shape_img.shape[3])
    column_index = column_index.unsqueeze(1).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], landmarks.shape[1], shape_img.shape[3])
    print(row_index.shape, landmarks.shape, shape_img.shape)
    
    opened by alicedingyueming 1
  • RuntimeError

    Traceback (most recent call last):
      File "demo_nicp.py", line 27, in <module>
        target_lm_index, lm_mask = get_mesh_landmark(norm_meshes, dummy_render)
      File "/data/pytorch-nicp/landmark.py", line 37, in get_mesh_landmark
        row_index = row_index.unsqueeze(2).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], shape_img.shape[2], shape_img.shape[3])
    RuntimeError: The expanded size of the tensor (1) must match the existing size (2) at non-singleton dimension 1. Target sizes: [1, 1, 512, 3]. Tensor sizes: [1, 2, 1, 1]

    I have already configured the environment, but there seem to be some problems in the code. What can I do to solve this problem?

    opened by 675492062 8
Releases (v0.1)
Owner
Haozhe Wu
Research interests in Computer Vision and Machine Learning.