Detail-Preserving Transformer for Light Field Image Super-Resolution


DPT

Official PyTorch implementation of the paper "Detail-Preserving Transformer for Light Field Image Super-Resolution", accepted by AAAI 2022.

Updates

  • 2022.01: Our method is available at the newly-released repository BasicLFSR, an open-source and easy-to-use toolbox for LF image SR.
  • 2022.01: The code is released.

Requirements

  • Python 3.7.7
  • PyTorch 1.5.0
  • torchvision 0.6.0
  • h5py 2.8.0
  • Matlab

Dataset

We use the EPFL, HCInew, HCIold, INRIA, and STFgantry datasets for both training and testing. You can download these datasets from Baidu Drive (key: 912V).

Download the visual results

We share the super-resolved results generated by our DPT so that researchers can compare their methods with DPT without running inference themselves. The results are available at Baidu Drive (key: 912V).

Prepare the datasets

To generate the training data, use Matlab to run `GenerateTrainingData.m`.

To generate the testing data, use Matlab to run `GenerateTestData.m`.

We also provide the processed datasets used in the paper, available at Baidu Drive (key: 912V).
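
For a quick sanity check on the generated data, the minimal sketch below shows how one of the resulting HDF5 files could be inspected with h5py. The file path and the dataset keys (`data` for the low-resolution patches, `label` for the high-resolution ground truth) are assumptions; check the output of `GenerateTrainingData.m` for the actual names.

```python
import h5py
import numpy as np
import torch

# Inspect one generated training file. The path and the dataset keys are
# assumptions and may differ from what GenerateTrainingData.m actually writes.
with h5py.File('./data_for_train/example.h5', 'r') as f:
    lr_patches = np.array(f['data'])    # assumed: low-resolution LF patches
    hr_patches = np.array(f['label'])   # assumed: high-resolution ground truth

lr_tensor = torch.from_numpy(lr_patches).float()
hr_tensor = torch.from_numpy(hr_patches).float()
print(lr_tensor.shape, hr_tensor.shape)
```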

Train

To perform DPT training, please run

python train.py

Checkpoints will be saved to ./log/.
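
If you need to inspect or reuse a saved checkpoint outside of the provided scripts, a minimal sketch such as the one below could work. The checkpoint filename and the keys stored inside it are assumptions; see `train.py` for the exact format it writes.

```python
import torch

# Load a checkpoint produced by training. The filename and the layout of the
# saved object are assumptions; train.py defines the actual format.
checkpoint = torch.load('./log/DPT_example.pth.tar', map_location='cpu')

# The weights may be stored directly as a state_dict, or nested under a key
# such as 'state_dict' alongside metadata (epoch, optimizer state, ...).
state_dict = (checkpoint['state_dict']
              if isinstance(checkpoint, dict) and 'state_dict' in checkpoint
              else checkpoint)
print(type(checkpoint), len(state_dict))
```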

Test

To evaluate DPT performance, please run

python test.py

The performance of DPT on the five datasets will be printed on the screen. The visual results of each scene will be saved in ./Results/, and the PSNR and SSIM values of each scene will also be saved in ./PSNRSSIM/.
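
For reference, the sketch below shows one common way to compute PSNR and SSIM with scikit-image (not listed in the requirements, so treat it as an extra dependency); the evaluation in `test.py` may differ in details such as color-channel handling or border cropping.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(sr, hr):
    """PSNR/SSIM for one super-resolved vs. ground-truth image pair in [0, 1]."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
    ssim = structural_similarity(hr, sr, data_range=1.0)
    return psnr, ssim


# Random data standing in for a real scene, just to show the call signature.
hr = np.random.rand(128, 128)
sr = np.clip(hr + 0.01 * np.random.randn(128, 128), 0.0, 1.0)
print(evaluate_pair(sr, hr))
```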

Generate visual results

To generate the visual super-resolved results, use Matlab to run `GenerateResultImages.m`.

The `.mat` files in ./Results/ will be converted to `.png` images and saved to ./SRimages/.
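
If Matlab is not available, a rough Python equivalent could look like the sketch below (scipy and imageio are extra dependencies not listed in the requirements). The variable name `'SR'` and the value range are assumptions; open one of the generated `.mat` files to confirm how the results are stored, and note that files saved in MATLAB v7.3 format would need h5py instead of scipy.io.

```python
import glob
import os

import imageio
import numpy as np
import scipy.io as sio

# Convert the .mat results in ./Results/ to .png images in ./SRimages/,
# as a stand-in for GenerateResultImages.m. Key name and range are assumptions.
os.makedirs('./SRimages/', exist_ok=True)
for mat_path in glob.glob('./Results/*.mat'):
    sr = np.asarray(sio.loadmat(mat_path)['SR'], dtype=np.float64)  # assumed key
    sr = np.clip(sr, 0.0, 1.0)                                      # assumed range
    name = os.path.splitext(os.path.basename(mat_path))[0] + '.png'
    imageio.imwrite(os.path.join('./SRimages/', name),
                    (sr * 255.0).round().astype(np.uint8))
```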

To generate the visual gradient results, please run

python generate_visual_gradient_map.py 

Gradient results will be saved to ./GRAimages/.
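
For intuition about what a gradient map looks like, the sketch below computes a simple Sobel gradient magnitude for a single image; it is only an illustration, not the exact operator used by `generate_visual_gradient_map.py`, and the input/output paths are placeholders.

```python
import os

import imageio
import numpy as np
from scipy import ndimage

# Illustrative Sobel gradient-magnitude map for one super-resolved image.
# The repository's script may use a different operator or normalization.
os.makedirs('./GRAimages/', exist_ok=True)
img = imageio.imread('./SRimages/example.png').astype(np.float64) / 255.0
if img.ndim == 3:                                   # RGB -> luminance
    img = img[..., :3] @ np.array([0.299, 0.587, 0.114])
gx = ndimage.sobel(img, axis=1)
gy = ndimage.sobel(img, axis=0)
grad = np.hypot(gx, gy)
grad /= grad.max() + 1e-12                          # normalize to [0, 1]
imageio.imwrite('./GRAimages/example_gradient.png', (grad * 255).astype(np.uint8))
```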

Citation

If you find this work helpful, please consider citing the following paper:

@article{wang2022detail,
  title={Detail Preserving Transformer for Light Field Image Super-Resolution},
  author={Wang, Shunzhou and Zhou, Tianfei and Lu, Yao and Di, Huijun},
  journal={arXiv preprint arXiv:2201.00346},
  year={2022}
}

Acknowledgements

This code is heavily based on LF-DFNet. We also refer to the code of VSR-Transformer, COLA-Net, and SPSR, and we thank the authors for sharing their code. We would like to thank Yingqian Wang for his help with LFSR, and Zhengyu Liang for adding DPT to the BasicLFSR repository.

Contact

If you have any questions about this work, feel free to contact me at [email protected].
