Progressive Image Deraining Networks: A Better and Simpler Baseline

[arxiv] [pdf] [supp]

Introduction

This paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, a progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of the ResNet can be adopted in PRN and PReNet to notably reduce network parameters with only graceful degradation in deraining performance (PRN_r and PReNet_r). For network input and output, we take both the stage-wise result and the original rainy image as input to each ResNet, and finally output the prediction of the residual image. As for loss functions, a single MSE or negative SSIM loss is sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering their simplicity, efficiency and effectiveness, our models are expected to serve as a suitable baseline in future deraining research.
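
To make the progressive recurrence concrete, here is a minimal PyTorch sketch of one way the stage-wise computation can be organized. The layer widths, the convolutional LSTM update, and all module names are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn


class PReNetSketch(nn.Module):
    """Toy progressive recurrent deraining network (not the official code)."""

    def __init__(self, stages=6, channels=32):
        super().__init__()
        self.stages = stages
        self.channels = channels
        # Stage input: current estimate concatenated with the rainy image.
        self.head = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True))
        # A shallow shared-weight residual body, reused at every stage.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Convolutional LSTM gates carrying deep features across stages.
        self.gates = nn.Conv2d(2 * channels, 4 * channels, 3, padding=1)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, rainy):
        b, _, height, width = rainy.shape
        h = rainy.new_zeros(b, self.channels, height, width)
        c = rainy.new_zeros(b, self.channels, height, width)
        x = rainy  # the initial estimate is the rainy image itself
        for _ in range(self.stages):
            feat = self.head(torch.cat([x, rainy], dim=1))
            i, f, g, o = self.gates(torch.cat([feat, h], dim=1)).chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            feat = h + self.body(h)   # residual connection
            x = self.tail(feat)       # stage-wise derained estimate
        return x


if __name__ == "__main__":
    net = PReNetSketch()
    out = net(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Sharing the head, body, and recurrent weights across all six stages is what keeps the parameter count of the full network close to that of a single shallow ResNet.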

Prerequisites

  • Python 3.6, PyTorch >= 0.4.0
  • Requirements: opencv-python, tensorboardX
  • Platforms: Ubuntu 16.04, CUDA 8.0 & cuDNN v5.1 (higher versions also work well)
  • MATLAB for computing evaluation metrics

Datasets

PRN and PReNet are evaluated on four datasets*: Rain100H [1], Rain100L [1], Rain12 [2] and Rain1400 [3]. Please download the testing datasets from BaiduYun or OneDrive, and place the unzipped folders into ./datasets/test/.

To train the models, please download training datasets: RainTrainH [1], RainTrainL [1] and Rain12600 [3] from BaiduYun or OneDrive, and place the unzipped folders into ./datasets/train/.

*We note that:

(i) The datasets on the website of [1] appear to have been modified, but the models and results in recent papers are all based on the previous version. We therefore upload the original training and testing datasets to BaiduYun and OneDrive.

(ii) For RainTrainH, we strictly exclude 546 rainy images whose background contents are identical to those of the testing images. All our models are trained on the remaining 1,254 training samples.

Getting Started

1) Testing

We have placed our pre-trained models into ./logs/.

Run shell scripts to test the models:

bash test_Rain100H.sh   # test models on Rain100H
bash test_Rain100L.sh   # test models on Rain100L
bash test_Rain12.sh     # test models on Rain12
bash test_Rain1400.sh   # test models on Rain1400 
bash test_Ablation.sh   # test models in Ablation Study
bash test_real.sh       # test PReNet on real rainy images
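
If you prefer to bypass the shell scripts for a quick single-image check, the following sketch shows the general idea. The checkpoint path, the test image name, and the PReNetSketch class (the toy model from the Introduction above) are placeholders; substitute the repository's actual PReNet class and weights:

```python
import cv2
import torch

# Illustrative single-image inference, not the repository's entry point.
model = PReNetSketch(stages=6)
state = torch.load("logs/PReNet6/net_latest.pth", map_location="cpu")  # placeholder path
model.load_state_dict(state)
model.eval()

img = cv2.imread("datasets/test/Rain100H/rain-001.png")  # placeholder image, BGR uint8
x = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0) / 255.0
with torch.no_grad():
    y = model(x).clamp(0.0, 1.0)
out = (y.squeeze(0).permute(1, 2, 0).numpy() * 255.0).round().astype("uint8")
cv2.imwrite("results/derained.png", out)
```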

All the results in the paper are also available at BaiduYun. You can place the downloaded results into ./results/ and directly compute all the evaluation metrics reported in the paper.

2) Evaluation metrics

We also provide the MATLAB scripts to compute the average PSNR and SSIM values reported in the paper.

 cd ./statistic
 run statistic_Rain100H.m
 run statistic_Rain100L.m
 run statistic_Rain12.m
 run statistic_Rain1400.m
 run statistic_Ablation.m  # compute the metrics in Ablation Study
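
The numbers reported in the paper come from these MATLAB scripts. For a quick sanity check without MATLAB, a rough Python equivalent using scikit-image (an extra dependency; its values may differ slightly from the MATLAB computation) could look like this, with placeholder file paths:

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Rough Python stand-in for the MATLAB scripts: compare on the luminance (Y)
# channel, as is common for deraining benchmarks. File paths are placeholders.
gt = cv2.imread("datasets/test/Rain100H/norain-001.png")
pred = cv2.imread("results/Rain100H/rain-001.png")

gt_y = cv2.cvtColor(gt, cv2.COLOR_BGR2YCrCb)[..., 0]
pred_y = cv2.cvtColor(pred, cv2.COLOR_BGR2YCrCb)[..., 0]

print("PSNR: %.2f dB" % peak_signal_noise_ratio(gt_y, pred_y, data_range=255))
print("SSIM: %.4f" % structural_similarity(gt_y, pred_y, data_range=255))
```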

Average PSNR/SSIM values on four datasets:

| Dataset  | PRN         | PReNet      | PRN_r       | PReNet_r    | JORDER [1]  | RESCAN [4]  |
|----------|-------------|-------------|-------------|-------------|-------------|-------------|
| Rain100H | 28.07/0.884 | 29.46/0.899 | 27.43/0.874 | 28.98/0.892 | 26.54/0.835 | 28.88/0.866 |
| Rain100L | 36.99/0.977 | 37.48/0.979 | 36.11/0.973 | 37.10/0.977 | 36.61/0.974 | ---         |
| Rain12   | 36.62/0.952 | 36.66/0.961 | 36.16/0.961 | 36.69/0.962 | 33.92/0.953 | ---         |
| Rain1400 | 31.69/0.941 | 32.60/0.946 | 31.31/0.937 | 32.44/0.944 | ---         | ---         |

*We note that:

(i) The metrics for JORDER [1] are computed directly from the derained images provided by its authors.

(ii) RESCAN [4] is re-trained with its default settings: (1) RESCAN for Rain100H is trained on the full set of 1,800 rainy images, while our models are all trained on the strict set of 1,254 rainy images; (2) the re-trained RESCAN model is available here.

(iii) The deraining results of JORDER and RESCAN can be downloaded here, and their metrics in the table above can be computed with the MATLAB scripts.

3) Training

Run shell scripts to train the models:

bash train_PReNet.sh      # train PReNet
bash train_PRN.sh         # train PRN
bash train_PReNet_r.sh    # train PReNet_r
bash train_PRN_r.sh       # train PRN_r

You can run tensorboard --logdir ./logs/your_model_path to monitor the training procedure.
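
Since a single negative SSIM loss suffices to train PReNet, the loss itself is worth spelling out. Below is a self-contained sketch using the standard Gaussian-window SSIM formulation; it is a generic implementation, not necessarily identical to the repository's SSIM code:

```python
import torch
import torch.nn.functional as F


def gaussian_window(size=11, sigma=1.5):
    """Normalized 2-D Gaussian window, shaped for depthwise convolution."""
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)
    return (g.t() @ g).view(1, 1, size, size)


def neg_ssim_loss(x, y, data_range=1.0):
    """Negative mean SSIM between x and y; minimizing it maximizes SSIM."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    ch = x.size(1)
    w = gaussian_window().to(x.device).repeat(ch, 1, 1, 1)
    mu_x = F.conv2d(x, w, padding=5, groups=ch)
    mu_y = F.conv2d(y, w, padding=5, groups=ch)
    var_x = F.conv2d(x * x, w, padding=5, groups=ch) - mu_x ** 2
    var_y = F.conv2d(y * y, w, padding=5, groups=ch) - mu_y ** 2
    cov_xy = F.conv2d(x * y, w, padding=5, groups=ch) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return -ssim.mean()


# Identical images give SSIM = 1, i.e. a loss of -1.
img = torch.rand(1, 3, 64, 64)
print(neg_ssim_loss(img, img).item())  # ~ -1.0
```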

Model Configuration

The following tables describe the configurable options.

Training Mode Configurations

| Option         | Default    | Description                                   |
|----------------|------------|-----------------------------------------------|
| batchSize      | 18         | Training batch size                           |
| recurrent_iter | 6          | Number of recursive stages                    |
| epochs         | 100        | Number of training epochs                     |
| milestone      | [30,50,80] | Epochs at which to decay the learning rate    |
| lr             | 1e-3       | Initial learning rate                         |
| save_freq      | 1          | Epoch interval for saving intermediate models |
| use_GPU        | True       | Whether to use the GPU                        |
| gpu_id         | 0          | GPU id                                        |
| data_path      | N/A        | Path to training images                       |
| save_path      | N/A        | Path to save models and training status       |
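
As a hedged illustration of how these defaults would typically map onto a PyTorch optimizer and learning-rate schedule (the Adam optimizer and the 0.2 decay factor are assumptions; see the repository's training script for the exact settings; PReNetSketch is the toy model from the Introduction):

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

# Illustrative mapping of the defaults above onto an optimizer and scheduler.
model = PReNetSketch(stages=6)                                 # recurrent_iter = 6
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)      # lr = 1e-3
scheduler = MultiStepLR(optimizer, milestones=[30, 50, 80], gamma=0.2)

for epoch in range(100):                                       # epochs = 100
    # ... iterate over batches of size 18, compute the loss, and step ...
    scheduler.step()                                           # decay at 30/50/80
```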

Testing Mode Configurations

| Option         | Default | Description                |
|----------------|---------|----------------------------|
| use_GPU        | True    | Whether to use the GPU     |
| gpu_id         | 0       | GPU id                     |
| recurrent_iter | 6       | Number of recursive stages |
| logdir         | N/A     | Path to the trained model  |
| data_path      | N/A     | Path to testing images     |
| save_path      | N/A     | Path to save results       |

References

[1] Yang W, Tan RT, Feng J, Liu J, Guo Z, Yan S. Deep joint rain detection and removal from a single image. In IEEE CVPR 2017.

[2] Li Y, Tan RT, Guo X, Lu J, Brown MS. Rain streak removal using layer priors. In IEEE CVPR 2016.

[3] Fu X, Huang J, Zeng D, Huang Y, Ding X, Paisley J. Removing rain from single images via a deep detail network. In IEEE CVPR 2017.

[4] Li X, Wu J, Lin Z, Liu H, Zha H. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In ECCV 2018.

Citation

 @inproceedings{ren2019progressive,
   title={Progressive Image Deraining Networks: A Better and Simpler Baseline},
   author={Ren, Dongwei and Zuo, Wangmeng and Hu, Qinghua and Zhu, Pengfei and Meng, Deyu},
   booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
   year={2019},
 }