BARF: Bundle-Adjusting Neural Radiance Fields 🤮 (ICCV 2021 oral)

Overview

BARF 🤮 : Bundle-Adjusting Neural Radiance Fields

Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, and Simon Lucey
IEEE International Conference on Computer Vision (ICCV), 2021 (oral presentation)

Project page: https://chenhsuanlin.bitbucket.io/bundle-adjusting-NeRF
arXiv preprint: https://arxiv.org/abs/2104.06405

We provide PyTorch code for the NeRF experiments on both synthetic (Blender) and real-world (LLFF) datasets.


Prerequisites

This code is developed with Python3 (python3). PyTorch 1.9+ is required.
It is recommended to use Anaconda to set up the environment. Install the dependencies and activate the environment barf-env with

conda env create --file requirements.yaml python=3
conda activate barf-env

Initialize the external submodule dependencies with

git submodule update --init --recursive

Dataset

  • Synthetic data (Blender) and real-world data (LLFF)

    Both the Blender synthetic data and LLFF real-world data can be found in the NeRF Google Drive. For convenience, you can download them with the following script (run from the root of this repo):
    # Blender
    gdown --id 18JxhpWD-4ZmuFKLzKlAw-w5PpzZxXOcG # download nerf_synthetic.zip
    unzip nerf_synthetic.zip
    rm -f nerf_synthetic.zip
    mv nerf_synthetic data/blender
    # LLFF
    gdown --id 16VnMcF1KJYxN9QId6TClMsZRahHNMW5g # download nerf_llff_data.zip
    unzip nerf_llff_data.zip
    rm -f nerf_llff_data.zip
    mv nerf_llff_data data/llff
    The data directory should contain the subdirectories blender and llff. If you already have the datasets downloaded, you can alternatively soft-link them within the data directory.
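    For example, existing local copies can be soft-linked like this (the source paths are placeholders for wherever your copies live):
    mkdir -p data
    ln -s /path/to/nerf_synthetic data/blender
    ln -s /path/to/nerf_llff_data data/llff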
  • iPhone (TODO)


Running the code

  • BARF models

    To train and evaluate BARF:

    # <GROUP> and <NAME> can be set to your liking, while <SCENE> is specific to each dataset
    
    # Blender (<SCENE>={chair,drums,ficus,hotdog,lego,materials,mic,ship})
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5]
    python3 evaluate.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --data.val_sub= --resume
    
    # LLFF (<SCENE>={fern,flower,fortress,horns,leaves,orchids,room,trex})
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_llff --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5]
    python3 evaluate.py --group=<GROUP> --model=barf --yaml=barf_llff --name=<NAME> --data.scene=<SCENE> --resume

    All the results will be stored in the directory output/<GROUP>/<NAME>. You may want to organize your experiments by grouping different runs in the same group.

    To train baseline models:

    • Full positional encoding: omit the --barf_c2f argument.
    • No positional encoding: add --arch.posenc!.
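
    For example, the corresponding training commands on Blender (using the same flags as the BARF command above) would be:
    # full positional encoding: omit --barf_c2f
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE>
    # no positional encoding: also add --arch.posenc!
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --arch.posenc!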

    If you want to evaluate a checkpoint at a specific iteration number, use --resume=<ITER_NUMBER> instead of just --resume.

  • Training the original NeRF

    If you want to train the reference NeRF models (assuming known camera poses):

    # Blender
    python3 train.py --group=<GROUP> --model=nerf --yaml=nerf_blender --name=<NAME> --data.scene=<SCENE>
    python3 evaluate.py --group=<GROUP> --model=nerf --yaml=nerf_blender --name=<NAME> --data.scene=<SCENE> --data.val_sub= --resume
    
    # LLFF
    python3 train.py --group=<GROUP> --model=nerf --yaml=nerf_llff --name=<NAME> --data.scene=<SCENE>
    python3 evaluate.py --group=<GROUP> --model=nerf --yaml=nerf_llff --name=<NAME> --data.scene=<SCENE> --resume

    If you wish to replicate the results from the original NeRF paper, use --yaml=nerf_blender_repr or --yaml=nerf_llff_repr instead for Blender or LLFF respectively. There are some differences, e.g. NDC will be used for the LLFF forward-facing dataset. (The reference NeRF models considered in the paper do not use NDC to parametrize the 3D points.)
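
    For instance, to train and evaluate a reference NeRF on LLFF with the original paper's settings (NDC enabled), the commands mirror the ones above with the _repr config:
    python3 train.py --group=<GROUP> --model=nerf --yaml=nerf_llff_repr --name=<NAME> --data.scene=<SCENE>
    python3 evaluate.py --group=<GROUP> --model=nerf --yaml=nerf_llff_repr --name=<NAME> --data.scene=<SCENE> --resume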

  • Visualizing the results

    We have included code to visualize the training process with TensorBoard and Visdom. The TensorBoard events include the following:

    • SCALARS: the rendering losses and PSNR over the course of optimization. For BARF, the rotational/translational errors with respect to the given poses are also computed.
    • IMAGES: visualization of the RGB images and the RGB/depth rendering.

    We also provide visualization of 3D camera poses in Visdom. Run visdom -port 9000 to start the Visdom server.
    The Visdom host defaults to localhost; this can be overridden with --visdom.server (see options/base.yaml for details). If you want to disable Visdom visualization, add --visdom!.
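
    A minimal Visdom workflow, based on the options above (the host value is only an illustration), looks like:
    # in a separate terminal: start the Visdom server
    visdom -port 9000
    # point a run at a non-default Visdom host, or disable Visdom entirely
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5] --visdom.server=<HOST>
    python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5] --visdom!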


Codebase structure

The main engine and network architecture in model/barf.py inherit those from model/nerf.py. This codebase is structured so that it is easy to see exactly which parts BARF extends on top of NeRF. It is also straightforward to build your own applications on top of either BARF or NeRF -- just inherit them again! The same applies to the dataset files (e.g. data/blender.py).

To understand the config and command lines, take the below command as an example:

python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5]

This will run model/barf.py as the main engine with options/barf_blender.yaml as the main config file. Note that barf hierarchically inherits nerf (which inherits base), making the codebase customizable.
The complete configuration will be printed upon execution. To override specific options, add --<key>=value or --<key1>.<key2>=value (and so on) to the command line. The configuration will be loaded as the variable opt throughout the codebase.
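
For example (with illustrative values), a nested key and a top-level key can be overridden in the same way:

# override the nested key data.scene and the top-level key gpu on top of options/barf_blender.yaml
python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=lego --barf_c2f=[0.1,0.5] --gpu=0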

Some tips on using and understanding the codebase:

  • The computation graph for forward/backprop is stored in var throughout the codebase.
  • The losses are stored in loss. To add a new loss function, just implement it in compute_loss() and add its weight to opt.loss_weight.<name>. It will automatically be added to the overall loss and logged to TensorBoard.
  • If you are using a multi-GPU machine, you can add --gpu=<gpu_number> to specify which GPU to use. Multi-GPU training/evaluation is currently not supported.
  • To resume from a previous checkpoint, add --resume=<ITER_NUMBER>, or just --resume to resume from the latest checkpoint.
  • (to be continued....)
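
For example, combining the tips above, you can resume a run from a specific checkpoint on a chosen GPU:

python3 train.py --group=<GROUP> --model=barf --yaml=barf_blender --name=<NAME> --data.scene=<SCENE> --barf_c2f=[0.1,0.5] --gpu=1 --resume=<ITER_NUMBER>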

If you find our code useful for your research, please cite

@inproceedings{lin2021barf,
  title={BARF: Bundle-Adjusting Neural Radiance Fields},
  author={Lin, Chen-Hsuan and Ma, Wei-Chiu and Torralba, Antonio and Lucey, Simon},
  booktitle={IEEE International Conference on Computer Vision ({ICCV})},
  year={2021}
}

Please contact me ([email protected]) if you have any questions!
