(Arxiv 2021) NeRF--: Neural Radiance Fields Without Known Camera Parameters

Project Page | Arxiv | Colab Notebook | Data

Zirui Wang¹, Shangzhe Wu², Weidi Xie², Min Chen³, Victor Adrian Prisacariu¹.

¹Active Vision Lab + ²Visual Geometry Group + ³e-Research Centre, University of Oxford.

Overview

We provide 3 training targets in this repository, under the tasks directory:

  1. tasks/nerfmm/train.py: our main training script for the NeRF-LLFF dataset. It jointly estimates camera poses, focal lengths and a NeRF, and monitors the absolute trajectory error (ATE) between our camera parameter estimates and the COLMAP estimates during training. This target can also start training from a COLMAP initialisation and refine the COLMAP camera parameters.
  2. tasks/refine_nerfmm/train.py: the training script that refines a pretrained nerfmm system.
  3. tasks/any_folder/train.py: a training script that takes a folder of forward-facing images and trains our nerfmm system on them without any comparison with COLMAP. It is similar to what we offer in our Colab notebook, and we treat this any_folder target as a playground: users can try novel view synthesis by simply providing an image folder, without worrying about how the camera parameter estimation compares with COLMAP.

For each target, we provide relevant utilities to evaluate our system. Specifically,

  • for the nerfmm target, we provide three utility files:
    • eval.py to evaluate image rendering quality on validation splits with PSNR, SSIM and LPIPS, i.e., results in Table 1.
    • spiral.py to render novel views using a spiral camera trajectory, i.e. results in Figure 1.
    • vis_learned_poses.py to visualise our camera parameter estimates against the COLMAP estimates in 3D. It also computes the ATE between them, i.e. E1 in Table 2 (a sketch of the ATE computation follows this list).
  • for the refine_nerfmm target, all utilities of the nerfmm target above are compatible with it, since it simply refines a pretrained nerfmm system.
  • for the any_folder target, it has its own spiral.py and vis_learned_poses.py utilities, as it does not compare with COLMAP. It does not have an eval.py file, since this target is treated as a playground and does not split images into train/validation sets. It only provides novel view synthesis results via its spiral.py file.
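
For reference, the ATE compares estimated camera centres against COLMAP camera centres after aligning the two trajectories with a similarity transform and taking the RMSE of the residuals. Below is a minimal NumPy sketch of that idea; it is only illustrative, and the function names are not the repository's (the actual computation lives in the nerfmm utilities):

import numpy as np

def align_umeyama(src, dst):
    # Find the similarity transform (s, R, t) mapping src (N,3) onto dst (N,3).
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def ate_rmse(est_centres, colmap_centres):
    # RMSE of estimated camera centres after similarity alignment to the reference.
    s, R, t = align_umeyama(est_centres, colmap_centres)
    aligned = (s * (R @ est_centres.T)).T + t
    return np.sqrt(((aligned - colmap_centres) ** 2).sum(1).mean())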

Table of Contents

Environment

We provide an environment.yml file to set up a conda environment:

git clone https://github.com/ActiveVisionLab/nerfmm.git
cd nerfmm
conda env create -f environment.yml

Generally, our code should run with any PyTorch >= 1.1.

(Optional) Install open3d for visualisation. You might need a physical monitor to install this library.

pip install open3d

Get Data

We use the NeRF-LLFF dataset with two small structural changes:

  1. We remove their image_4 and image_8 folders and downsample images to any desired resolution during data loading (dataloader/with_colmap.py) by calling PyTorch's interpolate function (see the sketch after this list).
  2. We explicitly generate two txt files for train/val image ids, i.e. we take every 8th image as the validation set, as in the official NeRF train/val split. The only difference is that we store the split as txt files, while NeRF splits the images during data loading. The file that produces these two txt files is utils/split_dataset.py.
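
The downsampling in our dataloader boils down to one call to torch.nn.functional.interpolate. A minimal sketch of such a resize (the helper name and tensor layout are illustrative, not the exact code in dataloader/with_colmap.py):

import torch
import torch.nn.functional as F

def downsample_image(img_hwc, new_h, new_w):
    # Resize an (H, W, 3) image tensor to (new_h, new_w, 3) with bilinear interpolation.
    img = img_hwc.permute(2, 0, 1).unsqueeze(0).float()  # -> (1, 3, H, W)
    img = F.interpolate(img, size=(new_h, new_w), mode='bilinear', align_corners=False)
    return img.squeeze(0).permute(1, 2, 0)               # -> (new_h, new_w, 3)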

In addition to the NeRF-LLFF dataset, we provide two demo scenes to demonstrate how to use the any_folder target.

We pack the restructured LLFF data and our data into a tarball (~1.8 GB). To get it, run:

wget https://www.robots.ox.ac.uk/~ryan/nerfmm2021/nerfmm_release_data.tar.gz

Untar the data:

tar -xzvf path/to/the/tar.gz

Training

We show how to:

  1. train a nerfmm from scratch, i.e. initialise camera poses with identity matrices and focal lengths with the image resolution (a conceptual sketch of this initialisation follows this list):
    python tasks/nerfmm/train.py \
    --base_dir='path/to/nerfmm_release/data' \
    --scene_name='LLFF/fern'
  2. train a nerfmm from COLMAP initialisation:
    python tasks/nerfmm/train.py \
    --base_dir='path/to/nerfmm_release/data' \
    --scene_name='LLFF/fern' \
    --start_refine_pose_epoch=1000 \
    --start_refine_focal_epoch=1000
    This command initialises a nerfmm target with COLMAP parameters, trains with them for the first 1000 epochs, and starts refining those parameters afterwards.
  3. train a nerfmm from a pretrained nerfmm:
    python tasks/refine_nerfmm/train.py \
    --base_dir='path/to/nerfmm_release/data' \
    --scene_name='LLFF/fern' --start_refine_epoch=1000 \
    --ckpt_dir='path/to/a/dir/contains/nerfmm/ckpts'
    This command initialises a refine_nerfmm target with a set of pretrained nerfmm parameters, trains with them for the first 1000 epochs, and starts refining those parameters afterwards.
  4. train an any_folder from scratch given an image folder:
    python tasks/any_folder/train.py \
    --base_dir='path/to/nerfmm_release/data' \
    --scene_name='any_folder_demo/desk'
    This command trains an any_folder target using a provided demo scene desk.
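
To make item 1 above concrete: training "from scratch" means the camera parameters are themselves learnable tensors, starting from trivial values and optimised jointly with the NeRF weights. Below is a minimal sketch of that parameterisation under our stated initialisation (identity poses, focal lengths set from the image resolution); the class and attribute names are illustrative, not the repository's module definitions:

import torch
import torch.nn as nn

class LearnablePoses(nn.Module):
    # One learnable 6-DoF pose per image, initialised to the identity transform.
    def __init__(self, num_images):
        super().__init__()
        self.rot = nn.Parameter(torch.zeros(num_images, 3))    # axis-angle, zeros = identity rotation
        self.trans = nn.Parameter(torch.zeros(num_images, 3))  # zeros = no translation

class LearnableFocal(nn.Module):
    # Learnable focal lengths, initialised with the image resolution.
    def __init__(self, width, height):
        super().__init__()
        self.fx = nn.Parameter(torch.tensor(float(width)))
        self.fy = nn.Parameter(torch.tensor(float(height)))

Both modules simply contribute extra parameters to the optimiser alongside the NeRF, so gradients from the photometric loss update poses, focal lengths and the radiance field together.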

(Optional) set a symlink to the downloaded data:

mkdir data_dir  # do it in this nerfmm repo
cd data_dir
ln -s /path/to/downloaded/data ./nerfmm_release_data
cd ..

This can simplify the training commands above, for example:

python tasks/nerfmm/train.py

Evaluation

Compute image quality metrics

Call eval.py in the nerfmm target:

python tasks/nerfmm/eval.py \
--base_dir='path/to/nerfmm_release/data' \
--scene_name='LLFF/fern' \
--ckpt_dir='path/to/a/dir/contains/nerfmm/ckpts'

This file can also be used to evaluate a checkpoint trained with the refine_nerfmm target. For some scenes, you might need to tweak the --opt_eval_lr option to get the best results; common values are 0.01 / 0.005 / 0.001 / 0.0005 / 0.0001, and the default is 0.001. Overall, the script optimises the validation poses to produce the highest PSNR on the validation set while freezing the NeRF and the focal lengths. We do this because the learned camera pose space differs from the COLMAP-estimated camera pose space.
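
In other words, evaluation involves a small test-time optimisation over each validation pose, with the trained NeRF and focal lengths held fixed. A rough sketch of the idea, assuming a rendering callable that closes over the frozen model (render_fn and the pose parameterisation are placeholders, not the repository's exact API):

import torch

def optimise_val_pose(render_fn, pose_params, gt_image, lr=0.001, steps=200):
    # Refine one validation pose by photometric loss; lr plays the role of --opt_eval_lr.
    optimiser = torch.optim.Adam([pose_params], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        pred = render_fn(pose_params)           # render with the frozen NeRF and focal lengths
        loss = ((pred - gt_image) ** 2).mean()  # MSE, i.e. the quantity behind PSNR
        loss.backward()
        optimiser.step()
    return pose_params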

Render novel views

Call spiral.py in each target. The spiral.py in the nerfmm target is also compatible with the refine_nerfmm target:

python spiral.py \
--base_dir='path/to/nerfmm_release/data' \
--scene_name='LLFF/fern' \
--ckpt_dir='path/to/a/dir/contains/nerfmm/ckpts'
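
A spiral rendering path is just a sequence of camera-to-world poses whose centres trace a spiral in front of the scene while looking at a fixed point. The sketch below shows the general idea with a simple look-at construction (OpenGL/NeRF-style camera looking down its -z axis); the actual spiral.py derives its path from the learned poses, so this is only illustrative:

import numpy as np

def look_at(centre, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    # Build a 3x4 camera-to-world matrix with the camera at `centre` looking at `target`.
    forward = target - centre
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    new_up = np.cross(right, forward)
    return np.stack([right, new_up, -forward, centre], axis=1)  # camera looks down its -z axis

def spiral_centres(radius=0.3, depth=1.0, n_frames=120, n_turns=2):
    # Camera centres on a spiral in front of the scene origin.
    thetas = np.linspace(0.0, 2.0 * np.pi * n_turns, n_frames)
    return np.stack([radius * np.cos(thetas),
                     radius * np.sin(thetas),
                     depth + 0.1 * np.sin(thetas / n_turns)], axis=1)

poses = [look_at(c) for c in spiral_centres()]  # render one frame per pose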

Visualise estimated poses in 3D

Call vis_learned_poses.py in each target. The vis_learned_poses.py in the nerfmm target is also compatible with the refine_nerfmm target:

python vis_learned_poses.py \
--base_dir='path/to/nerfmm_release/data' \
--scene_name='LLFF/fern' \
--ckpt_dir='path/to/a/dir/contains/nerfmm/ckpts'
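
If open3d is installed, a bare-bones way to compare the two pose sets is to plot the estimated and COLMAP camera centres as differently coloured point clouds. The sketch below is only a minimal stand-in for the visualisation the script actually produces:

import numpy as np
import open3d as o3d

def show_centres(est_centres, colmap_centres):
    # Estimated camera centres in red, COLMAP camera centres in green.
    est = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.asarray(est_centres)))
    est.paint_uniform_color([1.0, 0.0, 0.0])
    ref = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.asarray(colmap_centres)))
    ref.paint_uniform_color([0.0, 0.7, 0.0])
    o3d.visualization.draw_geometries([est, ref])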

Acknowledgement

Shangzhe Wu is supported by Facebook Research. Weidi Xie is supported by Visual AI (EP/T028572/1).

The authors would like to thank Tim Yuqing Tang for insightful discussions and proofreading.

During our NeRF implementation, we referenced several open-source NeRF implementations, and we thank their authors for their contributions. Specifically, we referenced functions from nerf and nerf-pytorch, and borrowed/modified code from nerfplusplus and nerf_pl. We especially appreciate the detailed code comments and git issue answers in nerf_pl.

Citation

@article{wang2021nerfmm,
  title={Ne{RF}$--$: Neural Radiance Fields Without Known Camera Parameters},
  author={Zirui Wang and Shangzhe Wu and Weidi Xie and Min Chen and Victor Adrian Prisacariu},
  journal={arXiv preprint arXiv:2102.07064},
  year={2021}
}