A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021)

Overview


This repository contains the official implementation for A-SDF introduced in the following paper: A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021). The code is developed with the PyTorch framework (1.6.0) and Python 3.7.6. This repo includes the training code and generated data from shape2motion.

A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021)
Jiteng Mu, Weichao Qiu, Adam Kortylewski, Alan Yuille, Nuno Vasconcelos, Xiaolong Wang
ICCV 2021

The project page with more details is at https://jitengmu.github.io/A-SDF/.

Citation

If you find our code or method helpful, please use the following BibTeX entry.

@article{mu2021asdf,
  author    = {Jiteng Mu and
               Weichao Qiu and
               Adam Kortylewski and
               Alan L. Yuille and
               Nuno Vasconcelos and
               Xiaolong Wang},
  title     = {{A-SDF:} Learning Disentangled Signed Distance Functions for Articulated
               Shape Representation},
  journal   = {arXiv preprint arXiv:2104.07645},
  year      = {2021},
}

Data preparation and layout

Please (1) download the dataset and put the data in the data/ directory, and (2) download the checkpoints and put each checkpoint in the corresponding examples/ directory, e.g., it should look like examples/laptop/laptop-asdf/Model_Parameters/1000.pth.

The dataset directory is structured as follows, where <dataset_name> can be, e.g., shape2motion, shape2motion-1-view, shape2motion-2-view, or rbo:

data/
    SdfSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.npz
    SurfaceSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.ply
    NormalizationParameters/
        <dataset_name>/
            <class_name>/
                <instance_name>.ply
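
If you want to sanity-check the layout before training, a minimal sketch is below. The shape2motion and laptop paths are just the examples used in this README; adjust them for your own setup.

import os

# Paths follow the examples in this README; adjust for your dataset/experiment.
expected = [
    "data/SdfSamples/shape2motion",
    "data/SurfaceSamples/shape2motion",
    "data/NormalizationParameters/shape2motion",
    "examples/laptop/laptop-asdf/Model_Parameters/1000.pth",
]
for path in expected:
    status = "ok     " if os.path.exists(path) else "MISSING"
    print(f"{status} {path}")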

Splits of train/test files are stored in a simple JSON format. For examples, see examples/splits/.
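
For illustration, a minimal loader sketch is shown below. It assumes the DeepSDF-style split schema {dataset_name: {class_name: [instance_name, ...]}}; the split file name in the comment is hypothetical.

import json
import os

def sdf_sample_paths(split_file, data_root="data"):
    # Map each instance listed in the split to its SdfSamples .npz file
    # (assumes the DeepSDF-style schema described above).
    with open(split_file) as f:
        split = json.load(f)
    paths = []
    for dataset, classes in split.items():
        for class_name, instances in classes.items():
            for instance in instances:
                paths.append(os.path.join(
                    data_root, "SdfSamples", dataset, class_name, instance + ".npz"))
    return paths

# Hypothetical split file name; see examples/splits/ for the real ones.
# print(len(sdf_sample_paths("examples/splits/laptop_train.json")))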

How to Use A-SDF

We use the laptop class as an illustration. Feel free to change it to stapler/washing_machine/door/oven/eyeglasses/refrigerator to explore other categories.

(a) Train a model

To train a model, run

python train.py -e examples/laptop/laptop-asdf/
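
To train several categories in one go, you can simply loop over the documented command. The sketch below assumes each category has an experiment directory named like the laptop example (examples/<class>/<class>-asdf/).

import subprocess

# Assumed experiment directory naming, following the laptop example.
categories = ["laptop", "stapler", "washing_machine", "door",
              "oven", "eyeglasses", "refrigerator"]
for cat in categories:
    subprocess.run(["python", "train.py", "-e", f"examples/{cat}/{cat}-asdf/"],
                   check=True)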

(b) Reconstruction

To use a trained model to reconstruct explicit mesh representations of shapes from the test set, run the following script, using -m recon_testset_ttt for inference with test-time adaptation and -m recon_testset otherwise.

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m recon_testset_ttt

To compute the chamfer distance, run:

python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m recon_testset_ttt
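
For reference, the sketch below shows a generic symmetric Chamfer distance between two point sets. It only illustrates the metric; it is not the repository's exact eval.py implementation.

import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    # Mean squared nearest-neighbor distance, accumulated in both directions.
    d_ab, _ = cKDTree(points_b).query(points_a)  # a -> b
    d_ba, _ = cKDTree(points_a).query(points_b)  # b -> a
    return np.mean(d_ab ** 2) + np.mean(d_ba ** 2)

# Example with random point clouds:
# a, b = np.random.rand(1000, 3), np.random.rand(1000, 3)
# print(chamfer_distance(a, b))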

(c) Generation

To use a trained model to generate explicit meshes of unseen articulations (specified in asdf/asdf_reconstruct.py) of shapes from the test set, run the following scripts. Note that the -m mode should be consistent with the one used for reconstruction: -m generation_ttt for inference with test-time adaptation and -m generation otherwise.

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m generation_ttt
python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m generation_ttt

(d) Interpolation

To use a trained model to interpolate explicit meshes across unseen articulations (specified in asdf/asdf_reconstruct.py) of shapes from the test set, run:

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m inter_testset
python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m inter_testset
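
Conceptually, because A-SDF disentangles shape and articulation codes, interpolation amounts to sweeping the articulation input while keeping the shape code fixed. The sketch below only illustrates that idea; decode_mesh and shape_code are hypothetical names, not the repository's API.

import numpy as np

def interpolate_articulations(angle_start, angle_end, steps=10):
    # Linearly interpolate an articulation parameter (e.g., a joint angle in degrees).
    return np.linspace(angle_start, angle_end, steps)

# e.g., sweep a laptop lid from 30 to 150 degrees while the shape code stays fixed:
# for angle in interpolate_articulations(30.0, 150.0):
#     mesh = decode_mesh(shape_code, np.array([angle]))  # hypothetical decoder call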

(e) Partial Point Cloud

To use a trained model to reconstruct and generate explicit meshes from partial point clouds: (1) download the partial point cloud dataset (laptop-1-view-0.025.zip or laptop-2-view-0.025.zip) from the dataset first, (2) put the laptop checkpoint trained on shape2motion in examples/laptop/laptop-asdf-1-view/ or examples/laptop/laptop-asdf-2-view/, and (3) run the following scripts, where --dataset shape2motion-1-view is for partial point clouds generated from a single depth image and --dataset shape2motion-2-view for those generated from two depth images of different viewpoints; -m can be one of recon_testset/recon_testset_ttt/generation/generation_ttt, as in the previous experiments.

python test.py -e examples/laptop/laptop-asdf-1-view/ -c 1000 -m recon_testset_ttt/generation_ttt --dataset shape2motion-1-view
python eval.py -e examples/laptop/laptop-asdf-1-view/ -c 1000 -m recon_testset_ttt/generation_ttt
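
As background, a partial point cloud from a single depth image can be obtained by back-projecting pixels through a pinhole camera model. The sketch below is generic geometry, not the repository's data pipeline; fx, fy, cx, cy are assumed camera intrinsics.

import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    # Back-project an (H, W) depth map (in meters) to an (N, 3) point cloud
    # in the camera frame; zero-depth pixels are treated as invalid.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]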

(f) RBO dataset

To test a model on the RBO dataset: (1) first download the generated partial point clouds of the laptop class (rbo_laptop_release_test.zip) from the dataset, (2) put the laptop checkpoint trained on shape2motion in examples/laptop/laptop-asdf-rbo/, and (3) run the following:

python test.py -e examples/laptop/laptop-asdf-rbo/ -m recon_testset_ttt/generation_ttt -c 1000 --dataset rbo
python eval_rbo.py -e examples/laptop/laptop-asdf-rbo/ -m recon_testset_ttt/generation_ttt -c 1000

Dataset generation details are included in dataset_generation/rbo.

Data Generation

Stay tuned. We follow (1) ANSCH to create URDFs for the shape2motion dataset, (2) Manifold to create watertight meshes, and (3) a modified mesh_to_sdf to generate sampled points and SDF values.
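
For a rough idea of step (3), the sketch below uses the off-the-shelf mesh_to_sdf package on a watertight mesh and stores samples in the DeepSDF-style pos/neg layout. The authors use a modified version, so details may differ, and the file paths are hypothetical.

import numpy as np
import trimesh
from mesh_to_sdf import sample_sdf_near_surface

mesh = trimesh.load("path/to/watertight_mesh.obj")  # hypothetical input path
points, sdf = sample_sdf_near_surface(mesh, number_of_points=250000)

# Split into positive (outside) and negative (inside) samples, stored as (x, y, z, sdf).
pos = np.hstack([points[sdf > 0], sdf[sdf > 0][:, None]])
neg = np.hstack([points[sdf < 0], sdf[sdf < 0][:, None]])
np.savez("instance_name.npz", pos=pos, neg=neg)  # hypothetical output name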

Acknowledgement

The code is heavily based on Jeong Joon Park's DeepSDF from Facebook.
