Official implementation of "Implicit Neural Representations with Periodic Activation Functions"


Implicit Neural Representations with Periodic Activation Functions

Project Page | Paper | Data

Explore Siren in Colab

Vincent Sitzmann*, Julien N. P. Martel*, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein
Stanford University, *denotes equal contribution

This is the official implementation of the paper "Implicit Neural Representations with Periodic Activation Functions".


Google Colab

If you want to experiment with Siren, we have written a Colab. It's quite comprehensive and comes with a no-frills, drop-in implementation of SIREN. It doesn't require installing anything, and goes through the following experiments / SIREN properties:

  • Fitting an image
  • Fitting an audio signal
  • Solving Poisson's equation
  • Initialization scheme & distribution of activations
  • Distribution of activations is shift-invariant
  • Periodicity & behavior outside of the training range.
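For reference, a no-frills SIREN like the drop-in implementation in the Colab can be sketched in a few lines of PyTorch. This is only a sketch that follows the paper's sine activation and initialization (omega_0 = 30), not a replacement for modules.py in this repository:

import numpy as np
import torch
from torch import nn

class SineLayer(nn.Module):
    # Linear layer followed by sin(omega_0 * x), initialized as described in the paper.
    def __init__(self, in_features, out_features, is_first=False, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            bound = 1.0 / in_features if is_first else np.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    # Maps coordinates to signal values, e.g. (x, y) -> RGB for an image.
    def __init__(self, in_features, hidden_features, hidden_layers, out_features):
        super().__init__()
        layers = [SineLayer(in_features, hidden_features, is_first=True)]
        for _ in range(hidden_layers):
            layers.append(SineLayer(hidden_features, hidden_features))
        layers.append(nn.Linear(hidden_features, out_features))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)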

Tensorflow Playground

You can also play around with a tiny SIREN interactively, directly in the browser, via the Tensorflow Playground here. Thanks to David Cato for implementing this!

Get started

If you want to reproduce all the results (including the baselines) shown in the paper, the videos, point clouds, and audio files can be found here.

You can then set up a conda environment with all dependencies like so:

conda env create -f environment.yml
conda activate siren

High-level structure

The code is organized as follows:

  • dataio.py loads training and testing data.
  • training.py contains a generic training routine.
  • modules.py contains layers and full neural network modules.
  • meta_modules.py contains hypernetwork code.
  • utils.py contains utility functions, most prominently related to the writing of Tensorboard summaries.
  • diff_operators.py contains implementations of differential operators (a minimal sketch of the idea follows this list).
  • loss_functions.py contains loss functions for the different experiments.
  • make_figures.py contains helper functions to create the convergence videos shown in the video.
  • ./experiment_scripts/ contains scripts to reproduce experiments in the paper.
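The differential operators in diff_operators.py are all built on PyTorch's autograd. A minimal sketch of the idea (illustrative only, the repository's implementations differ in detail):

import torch

def gradient(y, x):
    # dy/dx for a scalar output y evaluated at coordinates x (x needs requires_grad=True).
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]

def laplace(y, x):
    # Divergence of the gradient, taken one coordinate dimension at a time.
    grad = gradient(y, x)
    div = 0.0
    for i in range(x.shape[-1]):
        div = div + torch.autograd.grad(grad[..., i], x, grad_outputs=torch.ones_like(grad[..., i]),
                                        create_graph=True)[0][..., i:i+1]
    return div

Setting create_graph=True is what makes it possible to backpropagate through these derivatives, which the Poisson, Helmholtz, and wave equation experiments rely on.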

Reproducing experiments

The directory experiment_scripts contains one script per experiment in the paper.

To monitor progress, the training code writes Tensorboard summaries into a "summaries" subdirectory in the logging_root.

Image experiments

The image experiment can be reproduced with

python experiment_scripts/train_img.py --model_type=sine

The figures in the paper were made by extracting images from the Tensorboard summaries. Example code showing how to do this can be found in the make_figures.py script.
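A hedged sketch of how images can be pulled out of the event files with TensorBoard's EventAccumulator (the summary tags and paths below are placeholders; check make_figures.py and utils.py for the tags that are actually written):

import os
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def dump_images(logdir, out_dir):
    # Load all image summaries from a run directory and write them out as PNG files.
    acc = EventAccumulator(logdir, size_guidance={'images': 0})  # 0 = keep every image event
    acc.Reload()
    os.makedirs(out_dir, exist_ok=True)
    for tag in acc.Tags()['images']:
        for event in acc.Images(tag):
            fname = f"{tag.replace('/', '_')}_{event.step}.png"
            with open(os.path.join(out_dir, fname), 'wb') as f:
                f.write(event.encoded_image_string)  # already PNG-encoded bytes

dump_images('<logging_root>/experiment_name/summaries', 'figures')  # paths are illustrative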

Audio experiments

This GitHub repository comes with both the "counting" and "bach" audio clips under ./data.

A SIREN can be fit to either clip with

python experiment_scripts/train_audio.py --model_type=sine --wav_path=<path_to_audio_file>
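For example, to fit the bundled Bach clip (the exact filenames may differ, so check ./data first):

python experiment_scripts/train_audio.py --model_type=sine --wav_path=./data/gt_bach.wav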

Video experiments

The "bikes" video sequence comes with scikit-video and need not be downloaded. The cat video can be downloaded with the link above.

To fit a model to a video, run

python experiment_scripts/train_video.py --model_type=sine --experiment_name bikes_video
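To check the bundled clip before training, scikit-video exposes it directly (a minimal sketch, independent of dataio.py; vread requires FFmpeg to be installed):

import skvideo.datasets
import skvideo.io

path = skvideo.datasets.bikes()   # path to the sample video that ships with scikit-video
video = skvideo.io.vread(path)    # numpy array of shape (frames, height, width, channels)
print(video.shape, video.dtype)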

Poisson experiments

For the Poisson experiments, there are three separate scripts: one for reconstructing an image from its gradients (train_poisson_grad_img.py), one for reconstructing it from its Laplacian (train_poisson_lapl_image.py), and one for combining two images by compositing their gradients (train_poisson_gradcomp_img.py).

Some of the experiments were run using the BSD500 dataset, which you can download here.
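The common idea behind all three scripts is to supervise derivatives of the SIREN output rather than its values. A minimal sketch of a gradient-supervised loss (the actual losses live in loss_functions.py; the helper mirrors the autograd-based operator sketched above):

import torch
import torch.nn.functional as F

def gradient(y, x):
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]

def poisson_gradient_loss(model, coords, gt_gradients):
    # coords: (N, 2) pixel coordinates with requires_grad=True
    # gt_gradients: (N, 2) ground-truth image gradients (e.g. from a Sobel filter)
    pred = model(coords)                      # (N, 1) predicted intensities
    pred_gradients = gradient(pred, coords)   # (N, 2) spatial gradient of the prediction
    return F.mse_loss(pred_gradients, gt_gradients)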

SDF experiments

To fit a Signed Distance Function (SDF) with SIREN, you first need a point cloud in .xyz format that includes surface normals. If you only have a mesh / .ply file, this conversion can be done with the open-source tool Meshlab.
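If you prefer a scripted route over Meshlab, a point cloud with normals can also be exported with, e.g., Open3D. A sketch under the assumption that Open3D is installed; the input path and sample count are placeholders, and the estimated normals are not guaranteed to be consistently oriented:

import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh('mesh.ply')                # placeholder input mesh
pcd = mesh.sample_points_uniformly(number_of_points=500000)
pcd.estimate_normals()                                      # normals from local neighborhoods
xyz = np.hstack([np.asarray(pcd.points), np.asarray(pcd.normals)])
np.savetxt('model.xyz', xyz)                                # one "x y z nx ny nz" row per point; verify against dataio.py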

To reproduce our results, we provide both models, the Thai Statue from the 3D Stanford model repository and the living room used in our paper, for download here.

To start training a SIREN, run:

python experiment_scripts/train_single_sdf.py --model_type=sine --point_cloud_path=<path_to_the_model_in_xyz_format> --batch_size=250000 --experiment_name=experiment_1

This will regularly save checkpoints in the directory specified by the rootpath in the script, in a subdirectory "experiment_1". The batch_size is typically adjusted to fill the entire memory of your GPU. Our experiments show that with a SIREN of 3 hidden layers of 256 units each, one can set the batch size between 230,000 and 250,000 on an NVIDIA GPU with 12 GB of memory.

To inspect an SDF fitted to a 3D point cloud, we now need to create a mesh from the zero-level set of the SDF. This is performed with another script that uses a marching cubes algorithm (adapted from the DeepSDF GitHub repo) and saves the resulting mesh in the .ply file format. It can be called with:

python experiment_scripts/test_single_sdf.py --checkpoint_path=<path_to_the_checkpoint_of_the_trained_model> --experiment_name=experiment_1_rec

This will save the mesh as "reconstruction.ply" in "experiment_1_rec" (be patient, the marching cubes meshing step takes some time). If the machine you use for the reconstruction does not have enough RAM, running the test_sdf script will likely freeze. In that case, add the option --resolution=512 to the command line above (it is set to 1600 by default), which will reconstruct the mesh at a lower spatial resolution.
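Conceptually, the meshing step evaluates the trained SDF on a dense 3D grid and runs marching cubes on the zero level set. A hedged sketch with scikit-image and trimesh, assuming a model that maps (N, 3) coordinates to (N, 1) signed distances (the repository's test script handles the details, including the output format of its modules):

import torch
import trimesh
from skimage import measure

def sdf_to_mesh(model, resolution=512, device='cuda'):
    # Evaluate the SDF on a regular grid in [-1, 1]^3, in chunks to bound memory use.
    lin = torch.linspace(-1, 1, resolution)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing='ij'), dim=-1).reshape(-1, 3)
    values = []
    with torch.no_grad():
        for chunk in torch.split(grid, 262144):
            values.append(model(chunk.to(device)).cpu())
    volume = torch.cat(values).reshape(resolution, resolution, resolution).numpy()
    # Extract the zero level set and save it as a .ply mesh.
    verts, faces, normals, _ = measure.marching_cubes(volume, level=0.0)
    trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals).export('reconstruction.ply')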

The .ply file can be visualized with software such as Meshlab (a cross-platform visualizer and editor for 3D models).

Helmholtz and wave equation experiments

The Helmholtz and wave equation experiments can be reproduced with the train_wave_equation.py and train_helmholtz.py scripts.

Torchmeta

We're using the excellent torchmeta to implement hypernetworks. We realized that there is a technical report for it, which we forgot to cite; it will make it into the camera-ready version!

Citation

If you find our work useful in your research, please cite:

@inproceedings{sitzmann2019siren,
    author = {Sitzmann, Vincent
              and Martel, Julien N.P.
              and Bergman, Alexander W.
              and Lindell, David B.
              and Wetzstein, Gordon},
    title = {Implicit Neural Representations
              with Periodic Activation Functions},
    booktitle = {arXiv},
    year={2020}
}

Contact

If you have any questions, please feel free to email the authors.
