pixelNeRF: Neural Radiance Fields from One or Few Images


Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa
UC Berkeley

Teaser

arXiv: http://arxiv.org/abs/2012.02190

This is the official repository for our paper, pixelNeRF, pending final release. The two-object experiment is still missing, and several features may still be added.

Environment setup

To start, we recommend creating the environment with conda:

conda env create -f environment.yml
conda activate pixelnerf

Please make sure you have up-to-date NVIDIA drivers supporting at least CUDA 10.2.

Alternatively, use pip install -r requirements.txt.

Getting the data

While we could have used a common data format, we chose to keep DTU and ShapeNet (NMR) datasets in DVR's format and SRN data in the original SRN format. Our own two-object data is in NeRF's format. Data adapters are built into the code.

Running the model (video generation)

The main implementation is in the src/ directory, while evaluation scripts are in eval/.

First, download all pretrained weight files from https://drive.google.com/file/d/1UO_rL201guN6euoWkCOn-XpqR2e8o6ju/view?usp=sharing. Extract this to <project dir>/checkpoints/, so that <project dir>/checkpoints/dtu/pixel_nerf_latest exists.
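If you prefer the command line, a download sketch like the following may work (a sketch only: it assumes the gdown package is available and that the shared file is a zip archive that unpacks into per-experiment checkpoint directories):

pip install gdown
# download the pretrained-weights archive by its Google Drive file id
gdown 1UO_rL201guN6euoWkCOn-XpqR2e8o6ju -O pixel_nerf_weights.zip
# unpack so that <project dir>/checkpoints/dtu/pixel_nerf_latest ends up existing
unzip pixel_nerf_weights.zip -d <project dir>/checkpoints/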

ShapeNet Multiple Categories (NMR)

  1. Download NMR ShapeNet renderings (see Datasets section, 1st link)
  2. Run using
    • python eval/gen_video.py -n sn64 --gpu_id <GPU(s)> --split test -P '2' -D <data dir>/NMR_Dataset -S 0
    • For unseen category generalization: python eval/gen_video.py -n sn64_unseen --gpu_id=<GPU(s)> --split test -P '2' -D <data dir>/NMR_Dataset -S 0

Replace <GPU(s)> with the desired GPU id(s), space-separated for multiple. Replace -S 0 with -S <object id> to run on a different ShapeNet object id. Replace -P '2' with -P '<view id>' to use a different input view. Replace --split test with --split train or --split val to use a different data split. Append -R=20000 if running out of memory.
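For example, to render object index 42 using input view 6 on GPUs 0 and 1 with a reduced ray batch (the specific values here are arbitrary placeholders):

python eval/gen_video.py -n sn64 --gpu_id='0 1' --split test -P '6' -D <data dir>/NMR_Dataset -S 42 -R=20000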

The result will be at visuals/sn64/videot<object id>.mp4 or visuals/sn64_unseen/videot<object id>.mp4. The script will also print the path.

Pre-generated results for all ShapeNet objects with comparison may be found at https://www.ocf.berkeley.edu/~sxyu/ZG9yaWF0aA/pixelnerf/cross_v2/

ShapeNet Single-Category (SRN)

  1. Download the SRN car (or chair) dataset from the Google drive folder in the Datasets section. Extract to <srn data dir>/cars_<train | test | val>
  2. python eval/gen_video.py -n srn_car --gpu_id=<GPU(s)> --split test -P '64 104' -D <srn data dir>/cars -S 1

Use -P 64 for 1-view (view numbers are from SRN). The chair case is analogous (replace car with chair). Our models are trained with a random number of 1 or 2 views per batch, which seems to degrade performance, especially for 1-view. It may be preferable to train with a fixed number of views instead.
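For example, a 1-view rendering of a different car (object index 6, chosen arbitrarily) on GPU 0:

python eval/gen_video.py -n srn_car --gpu_id=0 --split test -P '64' -D <srn data dir>/cars -S 6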

DTU

Make sure you have downloaded the pretrained weights above.

  1. Download the DTU dataset from the Google drive folder in the Datasets section. Extract to some directory, to get <data root>/rs_dtu_4
  2. Run using python eval/gen_video.py -n dtu --gpu_id=<GPU(s)> --split val -P '22 25 28' -D <data root>/rs_dtu_4 -S 3 --scale 0.25

Replace <GPU(s)> with the desired GPU id(s). Replace -S 3 with -S <scene id> to run on a different scene; this is not the DTU scene number but an index 0-14 into the val set. Remove --scale 0.25 to render at full resolution (quite slow).
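For example, to render val-set scene 7 (chosen arbitrarily) on GPUs 0 and 1, keeping quarter resolution and a smaller ray batch:

python eval/gen_video.py -n dtu --gpu_id='0 1' --split val -P '22 25 28' -D <data root>/rs_dtu_4 -S 7 --scale 0.25 -R=20000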

The result will be at visuals/dtu/videov<scene id>.mp4. The script will also print the path.

Note that for DTU, we only use the train/val sets, with val used for testing. This is due to the very small size of the dataset; the model overfits significantly to the train set during training.

Real Car Images

Note: requires PointRend from detectron2. Install detectron2 by following https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md.

Make sure you have downloaded the pretrained weights above.

  1. Download any car image and place it in <project dir>/input. Some example images are shipped with the repo. The car should be fully visible.
  2. Run the preprocessor script: python scripts/preproc.py. This saves input/*_normalize.png. If the result is not reasonable, PointRend didn't work; please try another image.
  3. Run python eval/eval_real.py. Outputs will be in <project dir>/output

The Stanford Cars dataset contains many example car images: https://ai.stanford.edu/~jkrause/cars/car_dataset.html. Note that the normalization heuristic has been slightly modified compared to the paper, so there may be minor differences. You can pass -e -20 to eval_real.py to raise the camera elevation in the generated video.
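Putting these steps together (example.jpg is a placeholder image name, and -e -20 is optional):

# copy an image of a fully visible car into the input directory
cp example.jpg <project dir>/input/
# run PointRend-based preprocessing; writes input/*_normalize.png
python scripts/preproc.py
# render novel views; outputs are written to <project dir>/output
python eval/eval_real.py -e -20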

Overview of flags

Generally, all scripts in the project take the following flags

  • -n : experiment name, matching checkpoint directory name
  • -D : dataset directory. To save typing, you can set a default data directory for each expname in expconf.conf under datadir. For SRN/multi_obj datasets with separate directories e.g. path/cars_train, path/cars_val, put -D path/cars.
  • --split : data set split
  • -S : scene or object id to render
  • --gpu_id : GPU id(s) to use, space delimited. All scripts except calc_metrics.py are parallelized. If not specified, uses GPU 0. Examples: --gpu_id=0 or --gpu_id='0 1 3'.
  • -R : Batch size of rendered rays per object. Default is 50000 (eval) and 128 (train); make it smaller if you run out of memory. On large-memory GPUs, you can set it to 100000 for eval.
  • -c : config file. Automatically inferred for the provided experiments from the expname; thus the flag is only required when working with your own expnames. You can associate a config file with any additional expname in the config section of <project dir>/expconf.conf.

Please refer to the following table for a list of provided experiments with associated config and data files:

Name | expname (-n) | config (-c, automatic from expconf.conf) | Data file | data dir (-D)
ShapeNet category-agnostic | sn64 | conf/exp/sn64.conf | NMR_Dataset.zip (from AWS) | path/NMR_Dataset
ShapeNet unseen category | sn64_unseen | conf/exp/sn64_unseen.conf | NMR_Dataset.zip (from AWS) + genlist.py | path/NMR_Dataset
SRN chairs | srn_chair | conf/exp/srn.conf | srn_chairs.zip | path/chairs
SRN cars | srn_car | conf/exp/srn.conf | srn_cars.zip | path/cars
DTU | dtu | conf/exp/dtu.conf | dtu_dataset.zip | path/rs_dtu_4
Two chairs | mult_obj | conf/exp/mult_obj.conf | multi_chair_{train/val/test}.zip | path

Quantitative evaluation instructions

All evaluation code is in the eval/ directory. The full, parallelized evaluation code is in eval/eval.py.

Approximate Evaluation

The full evaluation can be extremely slow (taking many days), especially for the SRN dataset. Therefore, we also provide eval/eval_approx.py for approximate evaluation.

  • Example: python eval/eval_approx.py -D <srn data dir>/cars -n srn_car

Add --seed <seed> to try a different random seed.

Full Evaluation

Here we provide commands for full evaluation with eval/eval.py. After running this you should also use eval/calc_metrics.py, described in the section below, to obtain final metrics.

Append --gpu_id=<GPU(s)> to specify GPUs, for example --gpu_id=0 or --gpu_id='0 1 3'. It is highly recommended to use multiple GPUs if possible, to finish in reasonable time. We use 4-10 GPUs for evaluation, as available. Resume capability is built in: simply run the command again to resume if the process is terminated.

In all cases, a source-view specification is required. This can be either -P or -L. -P 'view1 view2..' specifies a set of fixed input views. In contrast, -L should point to a viewlist file (viewlist/src_*.txt) which specifies views to use for each object.

Renderings and progress will be saved to the output directory, specified by -O <output dir>.

ShapeNet Multiple Categories (NMR)

  • Category-agnostic eval: python eval/eval.py -D <data dir>/NMR_Dataset -n sn64 -L viewlist/src_dvr.txt --multicat -O eval_out/sn64
  • Unseen-category eval: python eval/eval.py -D <data dir>/NMR_Dataset -n sn64_unseen -L viewlist/src_gen.txt --multicat -O eval_out/sn64_unseen

ShapeNet Single-Category (SRN)

  • SRN car 1-view eval: python eval/eval.py -D <srn data dir>/cars -n srn_car -P '64' -O eval_out/srn_car_1v
  • SRN car 2-view eval: python eval/eval.py -D <srn data dir>/cars -n srn_car -P '64 104' -O eval_out/srn_car_2v

The command for chair is analogous (replace car with chair). The input views 64, 104 are taken from SRN. Our method is by no means restricted to using such views.

DTU

  • 1-view: python eval/eval.py -D <data root>/rs_dtu_4 --split val -n dtu -P '25' -O eval_out/dtu_1v
  • 3-view: python eval/eval.py -D <data root>/rs_dtu_4 --split val -n dtu -P '22 25 28' -O eval_out/dtu_3v
  • 6-view: python eval/eval.py -D <data root>/rs_dtu_4 --split val -n dtu -P '22 25 28 40 44 48' -O eval_out/dtu_6v
  • 9-view: python eval/eval.py -D <data root>/rs_dtu_4 --split val -n dtu -P '22 25 28 40 44 48 0 8 13' -O eval_out/dtu_9v

During training we always provide 3 views, so the improvement from using more views is limited.

Final Metric Computation

The above computes PSNR and SSIM without quantization. The final metrics we report in the paper use the rendered images saved to disk and also include LPIPS and a per-category breakdown. To compute these, run eval/calc_metrics.py, as in the following examples:

  • NMR ShapeNet experiment: python eval/calc_metrics.py -D <data dir>/NMR_Dataset -O eval_out/sn64 -F dvr --list_name 'softras_test' --multicat --gpu_id=<GPU>
  • SRN car 2-view: python eval/calc_metrics.py -D <srn data dir>/cars -O eval_out/srn_car_2v -F srn --gpu_id=<GPU> (warning: untested after changes)
  • DTU: python eval/calc_metrics.py -D <data root>/rs_dtu_4/DTU -O eval_out/dtu_3v -F dvr --list_name 'new_val' --exclude_dtu_bad --dtu_sort

Adjust -O to match the -O flag passed to eval.py. (Note: this script currently has its own standalone argument parser.) It should print a metric summary like the following:

psnr 26.799268696042386
ssim 0.9102204550379002
lpips 0.10784384977842876
WROTE eval_sn64/all_metrics.txt
airplane     psnr: 29.756697 ssim: 0.946906 lpips: 0.084329 n_inst: 809
bench        psnr: 26.351427 ssim: 0.911226 lpips: 0.116299 n_inst: 364
cabinet      psnr: 27.720198 ssim: 0.910426 lpips: 0.104584 n_inst: 315
car          psnr: 27.579590 ssim: 0.942079 lpips: 0.094841 n_inst: 1500
chair        psnr: 23.835303 ssim: 0.857738 lpips: 0.145518 n_inst: 1356
display      psnr: 24.217023 ssim: 0.867284 lpips: 0.129138 n_inst: 219
lamp         psnr: 28.579184 ssim: 0.912794 lpips: 0.113561 n_inst: 464
loudspeaker  psnr: 24.435302 ssim: 0.855195 lpips: 0.140653 n_inst: 324
rifle        psnr: 30.597488 ssim: 0.968040 lpips: 0.065629 n_inst: 475
sofa         psnr: 26.944224 ssim: 0.907861 lpips: 0.116114 n_inst: 635
table        psnr: 25.591960 ssim: 0.898314 lpips: 0.098103 n_inst: 1702
telephone    psnr: 27.128039 ssim: 0.921897 lpips: 0.097074 n_inst: 211
vessel       psnr: 29.180307 ssim: 0.938936 lpips: 0.110670 n_inst: 388
---
total        psnr: 26.799269 ssim: 0.910220 lpips: 0.107844

Training instructions

Training code is in the train/ directory, specifically train/train.py.

  • Example for training on DTU: python train/train.py -n dtu_exp -c conf/exp/dtu.conf -D <data root>/rs_dtu_4 -V 3 --gpu_id=<GPU(s)> --resume
  • Example for training on SRN cars, 1 view: python train/train.py -n srn_car_exp -c conf/exp/srn.conf -D <srn data dir>/cars --gpu_id=<GPU(s)> --resume
  • Example for training on ShapeNet multi-object, 2 views: python train/train.py -n multi_obj -c conf/exp/multi_obj.conf -D <multi_obj data dir> --gpu_id=<GPU(s)> --resume

Additional flags

  • --resume to resume from checkpoint, if available. Usually just pass this to be safe.
  • -V <number of views> to specify the number of input views to train with. Default is 1.
    • -V 'numbers separated by space' to use a random number of views per batch. This does not work so well in our experience, but we use it for the SRN experiment.
  • -B <batch size> : batch size of objects, default 4
  • --lr <learning rate>, --epochs <number of epochs>
  • --no_bbox_step <step> to specify the iteration after which to stop using bounding-box sampling. Set to 0 to disable bounding-box sampling.
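Combining these, a concrete 2-view SRN cars run on two GPUs might look like the following (all values are illustrative placeholders, not recommended settings):

python train/train.py -n srn_car_exp -c conf/exp/srn.conf -D <srn data dir>/cars -V 2 -B 4 --lr 1e-4 --epochs 1000 --gpu_id='0 1' --resume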

If the checkpoint becomes corrupted for some reason (e.g. the process crashes while saving), a backup is saved to checkpoints/<expname>/pixel_nerf_backup. To avoid having to specify -c and -D each time, edit <project dir>/expconf.conf and add rows for your expname in the config and datadir sections.
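For example, registering a new expname my_dtu_exp might look like the excerpt below (a hypothetical sketch; it assumes expconf.conf uses simple key = value entries inside the config and datadir sections mentioned above):

# <project dir>/expconf.conf (excerpt, hypothetical expname my_dtu_exp)
config {
    # -c is no longer needed once the expname is associated with a config file
    my_dtu_exp = conf/exp/dtu.conf
}
datadir {
    # -D is no longer needed once a default data directory is set
    my_dtu_exp = <data root>/rs_dtu_4
}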

Log files and visualizations

View log files with tensorboard --logdir <project dir>/logs. Visualizations are written to <project dir>/visuals/<expname>/<epoch>_<batch>_vis.png. They are of the form:

  • Top coarse, bottom fine (1 row if fine sample disabled)
  • Left-to-right: input-views, depth, output, alpha.

BibTeX

@misc{yu2020pixelnerf,
      title={pixelNeRF: Neural Radiance Fields from One or Few Images},
      author={Alex Yu and Vickie Ye and Matthew Tancik and Angjoo Kanazawa},
      year={2020},
      eprint={2012.02190},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

Parts of the code were based on kwea123's NeRF implementation: https://github.com/kwea123/nerf_pl. Some functions are borrowed from DVR (https://github.com/autonomousvision/differentiable_volumetric_rendering) and PIFu (https://github.com/shunsukesaito/PIFu).
