
RawVSR

This repo contains the official code for our paper:

Exploit Camera Raw Data for Video Super-Resolution via Hidden Markov Model Inference

Xiaohong Liu, Kangdi Shi, Zhe Wang, Jun Chen


Accepted in IEEE Transactions on Image Processing

[Paper Download] [Video]


Dependencies and Installation

  1. Clone repo

    $ git clone https://github.com/proteus1991/RawVSR.git
  2. Install dependent packages

    $ cd RawVSR
    $ pip install -r requirements.txt
  3. Setup the Deformable Convolution Network (DCN)

    Since our RawVSR uses DCN to align features extracted from different video frames, we follow the setup in EDVR, where more details can be found.

    $ python setup.py develop

    Note that deform_conv_cuda.cpp and deform_conv_cuda_kernel.cu have been modified to fix compile errors with PyTorch >= 1.7.0. If your PyTorch version is below 1.7.0, you may need to download the original setup code.
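
If the extension builds but you are unsure whether your environment matches, a quick sanity check helps. The sketch below only prints the PyTorch/CUDA versions and illustrates the idea of deformable convolution using torchvision.ops.deform_conv2d (assuming torchvision is installed); it is a stand-in for intuition, not the repo's own DCN module.

import torch
import torchvision
from torchvision.ops import deform_conv2d

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__)

# Toy deformable convolution: a 3x3 kernel samples each location at positions
# shifted by per-pixel offsets (random here; learned in the real network).
x = torch.randn(1, 64, 32, 32)              # features of a single frame
weight = torch.randn(64, 64, 3, 3)          # 3x3 deformable conv kernel
offset = torch.randn(1, 2 * 3 * 3, 32, 32)  # (dx, dy) offsets per kernel tap
out = deform_conv2d(x, offset, weight, padding=1)
print(out.shape)  # torch.Size([1, 64, 32, 32])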

Introduction

  • train.py and test.py are the entry points for training and testing RawVSR.
  • ./data/ contains the data-loading code.
  • ./dataset/ contains the corresponding video sequences.
  • ./dcn/ contains the DCN dependencies.
  • ./models/ contains the network definitions.
  • ./utils/ includes the utilities.
  • ./weight_checkpoint/ stores checkpoints and the best network weights.

Raw Video Dataset (RawVD)

Since we are not aware of any publicly available raw video dataset, we build a raw video dataset, dubbed RawVD, to train our RawVSR.

In this dataset, we provide the ground-truth sRGB frames in folder 1080p_gt_rgb. Low-resolution (LR) raw frames are in folders 1080p_lr_d_raw_2 and 1080p_lr_d_raw_4 for the two scale ratios. Their corresponding sRGB frames are in folders 1080p_lr_d_rgb_2 and 1080p_lr_d_rgb_4, where d in the folder name stands for the degradations, including defocus blurring and heteroscedastic Gaussian noise. We also release the original raw videos in Magic Lantern Video (MLV) format; the corresponding software to play them can be found here. Details can be found in Section 3 of our paper.
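
For orientation, the sketch below maps the folder names above to scale ratios and data types; the RawVD root path is hypothetical and should point to wherever you unpack the dataset.

import os

# Hypothetical root; point this to wherever RawVD is unpacked.
RAWVD_ROOT = "/path/to/RawVD"

FOLDERS = {
    "gt_rgb": "1080p_gt_rgb",            # ground-truth sRGB frames
    ("lr_raw", 2): "1080p_lr_d_raw_2",   # LR raw frames, scale ratio 2
    ("lr_raw", 4): "1080p_lr_d_raw_4",   # LR raw frames, scale ratio 4
    ("lr_rgb", 2): "1080p_lr_d_rgb_2",   # LR sRGB frames, scale ratio 2
    ("lr_rgb", 4): "1080p_lr_d_rgb_4",   # LR sRGB frames, scale ratio 4
}

for key, folder in FOLDERS.items():
    print(key, "->", os.path.join(RAWVD_ROOT, folder))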

Quick Start

1. Testing

Make sure all dependencies are successfully installed.

Run test.py with the --scale_ratio and --save_image flags.

$ python test.py --scale_ratio 4 --save_image

Help for the --scale_ratio and --save_image flags is shown by running:

$ python test.py -h
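
For reference, the two flags behave like standard argparse options; the sketch below is an assumption of how such flags are typically defined, not the repo's exact parser.

import argparse

# A sketch of how --scale_ratio and --save_image are commonly exposed;
# the actual definitions live in the repo's own argument-parsing code.
parser = argparse.ArgumentParser(description="RawVSR testing")
parser.add_argument("--scale_ratio", type=int, default=4, choices=[2, 4],
                    help="super-resolution scale ratio")
parser.add_argument("--save_image", action="store_true",
                    help="save the super-resolved frames to disk")
args = parser.parse_args()
print(args.scale_ratio, args.save_image)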

If everything goes well, the following messages will appear in your terminal:

--- Hyper-parameter default settings ---
train settings:
 {'dataroot_GT': '/media/lxh/SSD_DATA/raw_test/gt/1080p/1080p_gt_rgb', 'dataroot_LQ': '/media/lxh/SSD_DATA/raw_test/w_d/1080p/1080p_lr_d_raw_4', 'lr': 0.0002, 'num_epochs': 100, 'N_frames': 7, 'n_workers': 12, 'batch_size': 24, 'GT_size': 256, 'LQ_size': 64, 'scale': 4, 'phase': 'train'}
val settings:
 {'dataroot_GT': '/media/lxh/SSD_DATA/raw_test/gt/1080p/1080p_gt_rgb', 'dataroot_LQ': '/media/lxh/SSD_DATA/raw_test/w_d/1080p/1080p_lr_d_raw_4', 'N_frames': 7, 'n_workers': 12, 'batch_size': 2, 'phase': 'val', 'save_image': True}
network settings:
 {'nf': 64, 'nframes': 7, 'groups': 8, 'back_RBs': 4}
dataset settings:
 {'dataset_name': 'RawVD'}
--- testing results ---
store: 29.04dB
painting: 29.02dB
train: 28.59dB
city: 29.08dB
tree: 28.06dB
avg_psnr: 28.76dB
--- end ---

RawVSR is tested on our carefully collected RawVD. The PSNR results here should match those reported in Table 1 of our paper.
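
As a quick sanity check, avg_psnr in the sample output above is simply the mean of the five per-sequence PSNR values:

# Per-sequence PSNR values (dB) from the sample output above.
psnr = {"store": 29.04, "painting": 29.02, "train": 28.59, "city": 29.08, "tree": 28.06}
print(f"avg_psnr: {sum(psnr.values()) / len(psnr):.2f}dB")  # avg_psnr: 28.76dB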

2. Training

Run train.py without the --save_image flag to reduce training time.

$ python train.py --scale_ratio 4

If you want to change the default hyper-parameters (e.g., the batch_size), simply edit config.py. All network and training/testing settings are stored there.
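
For example, the train settings printed in the testing section above expose the relevant keys; the snippet below mirrors those keys as a sketch of the kind of change involved, though the actual layout of config.py may differ.

# Keys mirror the "train settings" printed above; the real config.py layout
# may differ, so adapt the edit to the actual file.
train_settings = {
    "lr": 2e-4,         # learning rate
    "num_epochs": 100,
    "N_frames": 7,      # number of consecutive frames fed to the network
    "n_workers": 12,    # data-loading workers
    "batch_size": 24,
    "GT_size": 256,     # ground-truth patch size
    "LQ_size": 64,      # low-resolution patch size (GT_size / scale)
    "scale": 4,
}

# Example change: shrink the batch size to fit a smaller GPU.
train_settings["batch_size"] = 8
print(train_settings["batch_size"])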

Acknowledgement

Some codes (e.g., DCN) are borrowed from EDVR with modification.

Cite

If you use this code, please kindly cite

@article{liu2020exploit,
  title={Exploit Camera Raw Data for Video Super-Resolution via Hidden Markov Model Inference},
  author={Liu, Xiaohong and Shi, Kangdi and Wang, Zhe and Chen, Jun},
  journal={arXiv preprint arXiv:2008.10710},
  year={2020}
}

Contact

Should you have any questions about this code, please open a new issue directly. For any other questions, you may contact me by email: [email protected].
