DIVeR: Deterministic Integration for Volume Rendering

This repo contains the training and evaluation code for DIVeR.

Setup

  • python 3.8
  • pytorch 1.9.0
  • pytorch-lightning 1.2.10
  • torchvision 0.2.2
  • torch-scatter 2.0.8
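
The listed versions can be installed with pip; the sketch below is one way to set up the environment (the conda environment name and the torch-scatter wheel index are assumptions, so adjust the CUDA tag to your local build):

conda create -n diver python=3.8 -y
conda activate diver
# core dependencies (pick the torchvision build that matches torch 1.9.0)
pip install torch==1.9.0 torchvision pytorch-lightning==1.2.10
# torch-scatter usually needs the wheel index matching the installed torch/CUDA build
pip install torch-scatter==2.0.8 -f https://data.pyg.org/whl/torch-1.9.0+cu111.html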

Dataset

Pre-trained models

Both our real-time and offline models can be found here.

Usage

Edit configs/config.py to configure a training run and to set the dataset path.

To reproduce the results in the paper, replace config.py with one of the other configuration files in the same folder.
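
For example, a paper configuration can simply be copied over the active config (the file name below is a placeholder; check configs/ for the actual per-scene file names):

cp configs/SCENE_CONFIG.py configs/config.py  # SCENE_CONFIG is a placeholder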

The 'implicit' training stage takes around 40GB of GPU memory and the 'implicit-explicit' stage takes around 20GB. Decreasing the voxel grid size by a factor of 2 yields models that need around 10GB of GPU memory, with an acceptable reduction in rendering quality.

Training

To train an implicit or explicit model:

python train.py --experiment_name=EXPERIMENT_NAME \
				--device=GPU_DEVICE\
				--resume=True # set to True to resume a previous training run

After training an implicit model, the explicit model can be trained:

python train.py --experiment_name=EXPERIMENT_NAME \
				--ft=CHECKPOINT_PATH_TO_IMPLICIT_MODEL_CHECKPOINT\
				--device=GPU_DEVICE\
				--resume=True
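
As a concrete illustration, a full two-stage run might look like the following; the experiment names, checkpoint path, and GPU id are placeholders, and the actual checkpoint location depends on your configs/config.py:

# names and paths below are illustrative only
python train.py --experiment_name=lego_implicit --device=0
python train.py --experiment_name=lego_explicit\
				--ft=checkpoints/lego_implicit/last.ckpt\
				--device=0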

Post processing

After training the coarse model and the fine 'implicit-explicit' model, we perform voxel culling:

python prune.py --checkpoint_path=PATH_TO_MODEL_CHECKPOINT_FOLDER\
				--coarse_size=COARSE_IMAGE_SIZE\
				--fine_size=FINE_IMAGE_SIZE\
				--batch=BATCH_SIZE\
				--device=GPU_DEVICE\
				--fine_ray=1 # 1 to collect the rays that pass through non-empty space, 0 otherwise

which stores the max-scattered 3D alpha map under the model checkpoint folder as alpha_map.pt. The rays that pass through non-empty space are also stored under the model checkpoint folder. For the NeRF-synthetic dataset, we store the rays directly in fine_rays.npz; for Tanks&Temples and BlendedMVS, we store a mask for each pixel under the masks folder, which indicates the pixels (rays) to be sampled.
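
A concrete invocation for a NeRF-synthetic scene might look like this; the checkpoint folder, image sizes, batch size, and GPU id are assumptions, not values prescribed by the paper:

# values below are illustrative only
python prune.py --checkpoint_path=checkpoints/lego_explicit\
				--coarse_size=400\
				--fine_size=800\
				--batch=4096\
				--device=0\
				--fine_ray=1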

To convert a training checkpoint file into a PyTorch model weight file, or into a serialized weight file for real-time rendering:

python convert.py --checkpoint_path=PATH_TO_MODEL_CHECKPOINT_FILE\
				  --serialize=1 # 1 to build the serialized weight file, 0 otherwise

The converted files are stored in the same folder as the checkpoint file; the PyTorch model weight file is named weight.pth and the serialized weight file is named serialized.pth.
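
For example (the checkpoint path and file name are placeholders):

python convert.py --checkpoint_path=checkpoints/lego_explicit/last.ckpt --serialize=1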

Evaluation

To extract the offline rendered images:

python eval.py --checkpoint_path=PATH_TO_MODEL_CHECKPOINT_FILE\
			   --output_path=PATH_TO_OUTPUT_IMAGES_FOLDER\
			   --batch=BATCH_SIZE\
			   --device=GPU_DEVICE

To extract the real-time rendered images and test the mean FPS on the test sequence:

python eval_rt.py --checkpoint_path=PATH_TO_SERIALIZED_WEIGHT_FILE\
				  --output_path=PATH_TO_OUTPUT_IMAGES_FOLDER\
				  --device=GPU_DEVICE\
				  --decoder={32,64} # 32 for diver32, 64 for diver64
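
Putting the two together, an evaluation pass might look like this; the checkpoint paths, output folders, batch size, decoder width, and GPU id are placeholders, and serialized.pth is the file produced by convert.py with --serialize=1:

# paths and values below are illustrative only
python eval.py --checkpoint_path=checkpoints/lego_explicit/last.ckpt\
			   --output_path=outputs/lego_offline\
			   --batch=4096\
			   --device=0
python eval_rt.py --checkpoint_path=checkpoints/lego_explicit/serialized.pth\
				  --output_path=outputs/lego_rt\
				  --decoder=32\
				  --device=0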

Resources

Citation

@misc{wu2021diver,
      title={DIVeR: Real-time and Accurate Neural Radiance Fields with Deterministic Integration for Volume Rendering}, 
      author={Liwen Wu and Jae Yong Lee and Anand Bhattad and Yuxiong Wang and David Forsyth},
      year={2021},
      eprint={2111.10427},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}