Back to Event Basics: SSL of Image Reconstruction for Event Cameras

Overview

Minimal code for Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy, CVPR'21.

Usage

This project uses Python >= 3.7.3. After setting up your virtual environment, please install the required Python libraries with:

pip install -r requirements.txt

Code is formatted with Black (PEP8) using a pre-commit hook. To configure it, run:

pre-commit install

Data format

Like researchers from Monash University, this project stores and processes events in the HDF5 data format. Details about the structure of these files can be found in datasets/tools/.
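
As a rough illustration of what working with these files looks like, here is a minimal sketch that reads an event stream with h5py. The file name and the dataset keys (events/xs, events/ys, events/ts, events/ps) are assumptions made for illustration only; the authoritative description of the layout is in datasets/tools/.

import h5py

# Hypothetical file name; point this at one of the HDF5 sequences in datasets/data/.
with h5py.File("datasets/data/example_sequence.h5", "r") as f:
    # Assumed dataset keys; check datasets/tools/ for the layout this project actually uses.
    xs = f["events/xs"][:]  # pixel x-coordinates
    ys = f["events/ys"][:]  # pixel y-coordinates
    ts = f["events/ts"][:]  # event timestamps
    ps = f["events/ps"][:]  # event polarities
    print(f"Loaded {len(ts)} events")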

Inference

Download our pre-trained models from here.

Our HDF5 version of sequences from the Event Camera Dataset can also be downloaded from here for evaluation purposes.

To estimate optical flow from the input events:

python eval_flow.py

To perform image reconstruction from the input events:

python eval_reconstruction.py

In configs/, you can find the configuration files associated with these scripts and vary the inference settings (e.g., the number of input events or the dataset).
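
If you prefer to inspect or tweak these settings programmatically, a minimal sketch using PyYAML is shown below. The config file name and the commented-out key are assumptions for illustration; rely on the fields that actually appear in the files under configs/.

import yaml

# Hypothetical config name; use the file that matches the script you are running.
with open("configs/eval_reconstruction.yml", "r") as f:
    config = yaml.safe_load(f)

print(config)  # lists the available inference settings
# config["num_events"] = 5000  # hypothetical key: number of input events per window
# with open("configs/eval_reconstruction.yml", "w") as f:
#     yaml.safe_dump(config, f)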

Training

Our framework can be trained using any event camera dataset. However, if you are interested in using our training data, you can download it from here. The datasets are expected at datasets/data/, but this location can be modified in the configuration files.

To train an image reconstruction and optical flow model, you need to adapt the training settings in configs/train_reconstruction.yml. Here, you can choose the training dataset, the number of input events, the neural networks to be used (EV-FlowNet or FireFlowNet for optical flow; E2VID or FireNet for image reconstruction), the number of epochs, the optimizer and learning rate, etc. To start the training from scratch, run:

python train_reconstruction.py

Alternatively, if you have a model that you would like to continue training from, you can use:

python train_reconstruction.py --prev_model

This is handy if, for instance, you just want to train the image reconstruction model and use a pre-trained optical flow network. For this, you can set train_flow: False in configs/train_reconstruction.yml, and run:

python train_reconstruction.py --prev_model

If you just want to train an optical flow network, adapt configs/train_flow.yml, and run:

python train_flow.py

Note that we use MLflow to keep track of all the experiments.
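
If you want to browse the logged runs, the standard MLflow dashboard can be launched from the project root with mlflow ui (assuming the default local tracking setup) and opened in a browser at http://localhost:5000.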

Citations

If you use this library in an academic context, please cite the following:

@article{paredes2020back,
  title={Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy},
  author={Paredes-Vall{\'e}s, Federico and de Croon, Guido C. H. E.},
  journal={arXiv preprint arXiv:2009.08283},
  year={2020}
}

Acknowledgements

This code borrows from the following open source projects, whom we would like to thank:
