Neural Surface Maps

Overview

Official implementation of Neural Surface Maps - Luca Morreale, Noam Aigerman, Vladimir Kim, Niloy J. Mitra

[Paper] [Project Page]

How-To

The results can be replicated by following these steps:

  1. Parametrize the surface
  2. Prepare surface sample
  3. Overfit the surface
  4. Neural parametrization of the surface
  5. Optimize surface-to-surface map
  6. Optimize a map between a collection

1. Surface Parametrization

This is a preprocessing step. You can use SLIM [1] from this repo to perform it.
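
If you prefer to script this step, a minimal sketch using the SLIM implementation exposed by libigl's Python bindings is shown below. The class and enum names (igl.SLIM, igl.SLIM_ENERGY_TYPE_SYMMETRIC_DIRICHLET) and the harmonic initialization are assumptions and may differ between igl versions; if your build lacks them, fall back to the SLIM repo referenced above. The conversion script in the next step expects the parametrized surface as an obj file, so the computed UVs still need to be written back into the mesh.

# Minimal sketch (assumption: libigl Python bindings with SLIM support).
import igl
import numpy as np

v, f = igl.read_triangle_mesh("surface.obj")

# Initial disc parametrization: boundary mapped to a circle, harmonic interior.
bnd = igl.boundary_loop(f)
bnd_uv = igl.map_vertices_to_circle(v, bnd)
uv_init = igl.harmonic(v, f, bnd, bnd_uv, 1)  # harmonic_weights in older bindings

# Refine with SLIM under the symmetric Dirichlet energy (no soft constraints).
slim = igl.SLIM(v, f, uv_init, bnd, bnd_uv,
                igl.SLIM_ENERGY_TYPE_SYMMETRIC_DIRICHLET, 0.0)
slim.solve(100)
uv = slim.vertices()

np.save("surface_slim_uv.npy", uv)  # UVs to merge back into the obj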

2. Sample preparation

Given a parametrized surface (previous step), we need to convert it into a sample. First, over-sample the surface with Meshlab; you can use the midpoint subdivision filter.
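
If you prefer to script the over-sampling instead of using the Meshlab GUI, a sketch with pymeshlab is shown below; the filter name meshing_surface_subdivision_midpoint and its iterations parameter are assumptions and may differ between pymeshlab versions.

# Minimal sketch: midpoint-subdivide a parametrized surface with pymeshlab.
# Older pymeshlab versions name this filter subdivision_surfaces_midpoint.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("surface_slim.obj")
ms.apply_filter("meshing_surface_subdivision_midpoint", iterations=2)
ms.save_current_mesh("surface_slim_oversampled.obj")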

Once the super-sampled surface is ready, you can convert it into a sample:

python -m preprocessing.convert_sample surface_slim.obj surface_slim_oversampled.obj output_sample.pth

The file output_sample.pth is the sample ready to be overfitted.
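
To sanity-check the generated sample before training, you can load it with PyTorch and inspect its contents; the exact structure of the object depends on the conversion script, so the dictionary handling below is only a guess:

# Quick sanity check of the generated sample file (structure depends on the repo).
import torch

sample = torch.load("output_sample.pth", map_location="cpu")
print(type(sample))
if isinstance(sample, dict):
    for key, value in sample.items():
        shape = getattr(value, "shape", None)
        print(key, shape if shape is not None else type(value))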

3. Overfit surface

A surface representation is generated with:

python -m training_surface_map dataset.sample_path=output_sample.pth

This will save a surface map inside the outputs/neural_maps folder. The folder name follows the pattern overfit_[timestamp]. Inside that folder, the map is saved as a pth file under the sample folder.

The overfitted surface can be displayed with:

python -m show_surface_map

Please set the path to the pth file just created inside the script.
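
Because every run is saved under a timestamped folder, a small helper like the sketch below can find the most recent overfit run and load its pth file; the sample sub-folder name follows the description above, everything else about the layout is an assumption.

# Minimal sketch: locate the latest overfit_[timestamp] run and load its map.
import glob
import os
import torch

runs = sorted(glob.glob("outputs/neural_maps/overfit_*"), key=os.path.getmtime)
ckpt_path = glob.glob(os.path.join(runs[-1], "sample", "*.pth"))[0]
surface_map = torch.load(ckpt_path, map_location="cpu")
print("loaded surface map from", ckpt_path)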

4. Neural parametrization

To generate a neural parametrization, run:

python -m training_parametrization_map dataset.sample_path=your_surface_map.pth

As with overfitting, this saves the map inside the outputs/neural_maps folder. The folder name follows the pattern parametrization_[timestamp].

To display the obtained parametrization, run:

python -m show_parametrization_map

Please set the path to the pth file just created inside the script.
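
The parametrization map is optimized to keep 2D distortion low. Purely as an illustration of that kind of objective (this is not the repo's training code, and the tiny MLP is a stand-in), the sketch below evaluates a symmetric Dirichlet term from the 2x2 Jacobian of a 2D-to-2D network:

# Illustrative sketch: distortion of a differentiable 2D -> 2D map via its Jacobian.
# The network and the symmetric Dirichlet term are stand-ins, not the repo's code.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 2),
)

uv = torch.rand(128, 2, requires_grad=True)   # random points in the 2D domain
out = net(uv)

# Per-point 2x2 Jacobians assembled from two backward passes.
rows = []
for i in range(2):
    grad = torch.autograd.grad(out[:, i].sum(), uv, create_graph=True)[0]
    rows.append(grad)
J = torch.stack(rows, dim=1)                  # (N, 2, 2), J[n, i, j] = d out_i / d uv_j

# Symmetric Dirichlet energy: ||J||_F^2 + ||J^-1||_F^2, averaged over samples.
JtJ = J.transpose(1, 2) @ J
energy = (JtJ.diagonal(dim1=1, dim2=2).sum(-1)
          + torch.linalg.inv(JtJ).diagonal(dim1=1, dim2=2).sum(-1)).mean()
print(energy.item())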

5. Optimize surface-to-surface map

To generate an inter-surface map, run:

python -m training_intersurface_map dataset.sample_path_g=your_surface_map_a.pth dataset.sample_path_f=your_surface_map_b.pth

Note: this step requires two surface maps, a source (sample_path_g) and a target (sample_path_f).

As with overfitting, the map is saved inside outputs/neural_maps. The inter-surface map folder follows the pattern intersurface_[timestamp]; the pth file is inside the models folder.

To display the inter-surface map run:

python -m show_intersurface_map

Remember to set the paths of the maps inside the script.
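
Conceptually, the inter-surface map is a 2D-to-2D network composed with the two overfitted surface maps: a point in the source 2D domain is lifted to surface A by its surface map, while the mapped point is lifted to surface B by the other surface map. The sketch below only illustrates that composition with stand-in MLPs; it is not the repo's model code.

# Illustrative sketch of the composition behind an inter-surface map.
# f_A, f_B stand in for overfitted surface maps (2D -> 3D); m is the 2D -> 2D map.
import torch

def mlp(din, dout, width=64):
    return torch.nn.Sequential(
        torch.nn.Linear(din, width), torch.nn.Softplus(),
        torch.nn.Linear(width, dout),
    )

f_A = mlp(2, 3)   # neural surface map of the source surface
f_B = mlp(2, 3)   # neural surface map of the target surface
m = mlp(2, 2)     # inter-surface map, defined between the 2D domains

u = torch.rand(16, 2)            # points in the source 2D domain
points_on_A = f_A(u)             # where u lands on surface A
points_on_B = f_B(m(u))          # where the mapped points land on surface B
print(points_on_A.shape, points_on_B.shape)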

6. Optimize collection map

A collection map between a set of surfaces can be optimized with:

python -m training_intersurface_map dataset.sample_path_g=your_surface_map_g.pth dataset.sample_path_f=your_surface_map_f.pth dataset.sample_path_q=your_surface_map_q.pth

Note: this step requires three surface maps, a source (sample_path_g) and two targets (sample_path_f and sample_path_q).

This will save two maps inside the outputs/neural_maps folder. The folder name follows the pattern collection_[timestamp]; under the models folder you can find two *.pth files.

To display the collection map run:

python -m show_collection_map

Remember to set the paths of the maps inside the script.


Dependencies

Dependencies are listed in environment.yml. Using conda, all the packages can be installed with conda env create -f environment.yml.

In addition to the packages above, please also install the pytorch svd on gpu package.
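
As a quick sanity check that batched SVD runs on your GPU (using plain PyTorch here; the extra package is only needed where the repo calls it):

# Quick sanity check: batched SVD on the GPU with plain PyTorch.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
J = torch.randn(1024, 2, 2, device=device)
U, S, Vh = torch.linalg.svd(J)
print(device, S.shape)   # expect (1024, 2)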


Data

Any mesh can be used for this process. A data example can be downloaded here.


Citation

@misc{morreale2021neural,
      title={Neural Surface Maps},
      author={Luca Morreale and Noam Aigerman and Vladimir Kim and Niloy J. Mitra},
      year={2021},
      eprint={2103.16942},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

References

[1] Scalable Locally Injective Mappings - Michael Rabinovich et al. - ACM Transactions on Graphics (TOG) 2017
