[ICCV21] Code for RetrievalFuse: Neural 3D Scene Reconstruction with a Database

Overview

RetrievalFuse

Paper | Project Page | Video

RetrievalFuse: Neural 3D Scene Reconstruction with a Database
Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
ICCV2021

This repository contains the code for the ICCV 2021 paper RetrievalFuse, a novel approach for 3D reconstruction from low-resolution distance field grids and from point clouds.

In contrast to traditional learned generative models, which encode the full generative process into a neural network and can struggle to maintain local details at the scene level, we introduce a new method that directly leverages scene geometry from the training database.

Files and Folders


Broad code structure is as follows:

| File / Folder | Description |
| --- | --- |
| `config/super_resolution` | Super-resolution experiment configs |
| `config/surface_reconstruction` | Surface reconstruction experiment configs |
| `config/base` | Defaults for configurations |
| `config/config_handler.py` | Config file parser |
| `data/splits` | Training and validation splits for different datasets |
| `dataset/scene.py` | SceneHandler class for managing access to scene data samples |
| `dataset/patched_scene_dataset.py` | PyTorch dataset class for scene data |
| `external/ChamferDistancePytorch` | For calculating a rough chamfer distance between prediction and target while training |
| `model/attention.py` | Attention, folding and unfolding modules |
| `model/loss.py` | Loss functions |
| `model/refinement.py` | Refinement network |
| `model/retrieval.py` | Retrieval network |
| `model/unet.py` | U-Net model used as a backbone in the refinement network |
| `runs/` | Checkpoints and visualizations for experiments are dumped here |
| `trainer/train_retrieval.py` | Lightning module for training the retrieval network |
| `trainer/train_refinement.py` | Lightning module for training the refinement network |
| `util/arguments.py` | Argument parsing (additional arguments apart from those in config) |
| `util/filesystem_logger.py` | For copying source code for each run into the experiment log directory |
| `util/metrics.py` | Rough metrics for logging during training |
| `util/mesh_metrics.py` | Final metrics on meshes |
| `util/retrieval.py` | Script to dump retrievals once retrieval networks have been trained; needed for training refinement |
| `util/visualizations.py` | Utility scripts for visualizations |

Further, the `data/` directory has the following layout:

data                    # root data directory
├── sdf_008             # low-res (8^3) distance fields
├── sdf_016             # low-res (16^3) distance fields
├── sdf_064             # high-res (64^3) distance fields
├── pc_20K              # point cloud inputs
├── splits              # train/val splits
├── size                # data needed by SceneHandler class (autocreated on first run)
└── occupancy           # data needed by SceneHandler class (autocreated on first run)

Dependencies


Install the dependencies using pip:

```bash
pip install -r requirements.txt
```

Be sure that you pull the `ChamferDistancePytorch` submodule in `external`.
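
If the repository was cloned without `--recursive`, one way to fetch the submodule (assuming it is registered in `.gitmodules`, as is typical) is:

```bash
# Pull the ChamferDistancePytorch submodule into external/
git submodule update --init --recursive
```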

Data Preparation


For ShapeNetV2 and Matterport, get the appropriate meshes from the datasets. For 3DFRONT, get the 3DFUTURE meshes and the 3DFRONT scripts, and use our fork of 3D-FRONT-ToolBox to create the room meshes.

Once you have the meshes, use our fork of sdf-gen to create the low-res distance field inputs and the high-res targets. For creating point cloud inputs, simply use trimesh.sample.sample_surface (check util/misc/sample_scene_point_clouds; a rough sampling sketch follows the list below). Place the processed data in the appropriate directories:

  • data/sdf_008/ or data/sdf_016/ for low-res inputs

  • data/pc_20K/ for point cloud inputs

  • data/sdf_064/ for targets
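
As a rough illustration of the point cloud input preparation, the sketch below samples 20K surface points from a scene mesh with `trimesh.sample.sample_surface`. The paths and the `.npy` output format are placeholders; the exact layout and naming expected by the dataset classes follow the repository's own `util/misc/sample_scene_point_clouds` script.

```python
import numpy as np
import trimesh

# Placeholder paths -- adapt to your processed scene meshes and data layout.
mesh_path = "scene_mesh.obj"
output_path = "data/pc_20K/sample.npy"

# Load the scene as a single mesh.
mesh = trimesh.load(mesh_path, force="mesh")

# Sample 20,000 points uniformly over the mesh surface.
points, _face_indices = trimesh.sample.sample_surface(mesh, 20000)

# Store the points; the storage format here is only an assumption.
np.save(output_path, np.asarray(points, dtype=np.float32))
```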

Training the Retrieval Network


To train retrieval networks, use the following command:

python trainer/train_retrieval.py --config config/<config> --val_check_interval 5 --experiment retrieval --wandb_main --sanity_steps 1

We provide some sample configurations for retrieval.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
  • config/super_resolution/3DFront/retrieval_008_064.yaml
  • config/super_resolution/Matterport3D/retrieval_016_064.yaml

For surface reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/retrieval_128_064.yaml
  • config/surface_reconstruction/3DFront/retrieval_128_064.yaml
  • config/surface_reconstruction/Matterport3D/retrieval_128_064.yaml

Once trained, create the retrievals for the train/validation sets using the following commands:

python util/retrieval.py --mode map --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
python util/retrieval.py --mode compose --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
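
For instance, a complete retrieval stage for the ShapeNetV2 super-resolution task might look like the following; the checkpoint path is only illustrative and depends on where your run dumps checkpoints under `runs/`:

```bash
# Train the retrieval network for the ShapeNetV2 8^3 -> 64^3 task
python trainer/train_retrieval.py --config config/super_resolution/ShapeNetV2/retrieval_008_064.yaml --val_check_interval 5 --experiment retrieval --wandb_main --sanity_steps 1

# Dump retrievals for the train/validation sets (checkpoint path is illustrative)
python util/retrieval.py --mode map --retrieval_ckpt runs/<retrieval_run>/<checkpoint>.ckpt --config config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
python util/retrieval.py --mode compose --retrieval_ckpt runs/<retrieval_run>/<checkpoint>.ckpt --config config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
```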

Training the Refinement Network


Use the following command to train the refinement network:

python trainer/train_refinement.py --config <config> --val_check_interval 5 --experiment refinement --sanity_steps 1 --wandb_main --retrieval_ckpt <retrieval_ckpt>

Again, sample configurations for refinement are provided in the config directory.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/refinement_008_064.yaml
  • config/super_resolution/3DFront/refinement_008_064.yaml
  • config/super_resolution/Matterport3D/refinement_016_064.yaml

For surface reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/refinement_128_064.yaml
  • config/surface_reconstruction/3DFront/refinement_128_064.yaml
  • config/surface_reconstruction/Matterport3D/refinement_128_064.yaml
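
As an example, refinement for the ShapeNetV2 super-resolution task could then be launched as follows, again with an illustrative path for the previously trained retrieval checkpoint:

```bash
# Train the refinement network using the dumped retrievals (checkpoint path is illustrative)
python trainer/train_refinement.py --config config/super_resolution/ShapeNetV2/refinement_008_064.yaml --val_check_interval 5 --experiment refinement --sanity_steps 1 --wandb_main --retrieval_ckpt runs/<retrieval_run>/<checkpoint>.ckpt
```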

Visualizations and Logs


Visualizations and checkpoints are dumped in the `runs/` directory. Logs are uploaded to the user's [Weights&Biases](https://wandb.ai/site) dashboard.

Citation


If you find our work useful in your research, please consider citing:
@inproceedings{siddiqui2021retrievalfuse,
  title = {RetrievalFuse: Neural 3D Scene Reconstruction with a Database},
  author = {Siddiqui, Yawar and Thies, Justus and Ma, Fangchang and Shan, Qi and Nie{\ss}ner, Matthias and Dai, Angela},
  booktitle = {Proc. International Conference on Computer Vision (ICCV)},
  month = oct,
  year = {2021}
}

License


The code from this repository is released under the MIT license.