PyTorch Implementation for "ForkGAN with Single Rainy Night Images: Leveraging the RumiGAN to See into the Rainy Night"

Overview

ForkGAN with Single Rainy Night Images: Leveraging the RumiGAN to See into the Rainy Night

By Seri Lee, Department of Engineering, Seoul National University

This repository contains the code for training and testing the SinForkGAN model. This project was conducted as a final project for the course "Topics in Artificial Intelligence: Advanced GANs" at Seoul National University. The paper was submitted to ACML 2021. For more information about the course, please refer to our instructor's GitHub page.

Dependency

We use Python 3 (3.6); Python 2 is not supported.

Table of contents

  1. Overview
  2. Dataset
  3. SinForkGAN Model
  4. Dependency
  5. Install
  6. How to use
  7. Evaluation Metric
  8. Downstream Tasks
  9. Reference
  10. Contact

Overview

Pipeline

Dataset

The SinForkGAN model is built upon four different nighttime/rainy datasets.

  1. Dark Zurich Dataset (ICCV 2019): provides 2,416 nighttime images along with the GPS coordinates of the camera for each image, which are used to construct cross-time correspondences for evaluation on the localization task.
  2. RaidaR (arXiv 2021): a richly annotated dataset of rainy street scenes. 5,000 images provide semantic segmentations and 3,658 provide object instance segmentations.
  3. BDD100K (CVPR 2020): 100,000 video clips collected in multiple cities, under diverse weather conditions, and at multiple times of day. We use 27,971 night images for training and 3,929 night images for evaluation.
  4. ExDark (CVIU 2018): 7,363 low-light images, ranging from very low-light environments to twilight, with 12 object classes annotated with local object bounding boxes.

SinForkGAN Model

The SinForkGAN model learns to translate nighttime rainy images into standard daytime images in an unsupervised way. Note that the model is designed for downstream computer vision tasks (e.g., image retrieval, localization, semantic segmentation, object detection) rather than for human vision: noise that critically degrades machine vision may be barely noticeable to the naked eye.

It also differs from single-image dehazing/denoising methods in that it is trained and tested on real-world datasets. Unsupervised single-image dehazing/denoising methods tend to fail under real-world conditions where the noise differs from that of synthetic datasets, and our problem setting (rainy nights) is considerably more challenging than plain image denoising.

figure2

Dependency

Python (3.6) is used for training and testing.

Install

For Linux System

git clone --recurse-submodules (this repo)
cd $REPO_NAME/code
# use python >= 3.6
python3 -m venv sinforkgan-env
source sinforkgan-env/bin/activate
pip3 install -r requirements.txt

Place the data folder at `$REPO_NAME/datasets`

Data Folder Structure

Please organize the data folder as shown below. We modify the structure of each dataset, keeping only nighttime/rainy images; for example, for the RaidaR dataset we use only the 0.Rainy subset for testing and discard the 1.Sunny folder. A small helper sketch for scripting this reorganization is given after the directory trees below.

  • How it looks when you download each dataset
code/
  translation/
    train.py
  ...
datasets/
  bdd100k/
   train/
    class_color/
     ...
    raw_images/
     0a1a0c5d-8098f13f.jpg
     ...
   val/
    class_color/
     ...
    raw_images/
     ...
  dark-zurich/
   train/
   val/
    ...
    GOPRO0356_000488_rgb_anon.png
  ex-dark/
    ...
    Bicycle/
    ...
     2015_06850.jpg
    Boat/
    ...
  raidar/
   Part1/
    Part1.1/
     00001593/
      00001593.jpg
   ...
   Part2/
   ...
  • How you should change it
code/
  translation/
    train.py
datasets/
  bdd100k/
    train/
      0a1a0c5d-8098f13f.jpg
      ...
    val/
    test/
  dark-zurich/
    train/
      GOPRO0356_000488_rgb_anon.png
      ...
    val/
    test/
  ex-dark/
    train/
      2015_06850.jpg
      ...
    val/
    test/
  raidar/
    train/
      00001593.jpg
      ...
    val/
    test/

(More information will be provided soon)
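
The reorganization can also be scripted. Below is a small, hypothetical helper that flattens one downloaded dataset folder into the datasets/<name>/train|val|test layout above; the paths, file extensions, and split fractions are placeholders and should be adjusted per dataset.

# Hypothetical helper; paths, extensions, and split fractions are placeholders.
import random
import shutil
from pathlib import Path

def flatten_split(src_root, dst_root, exts=(".jpg", ".png"), val_frac=0.1, test_frac=0.1, seed=0):
    files = sorted(p for p in Path(src_root).rglob("*") if p.suffix.lower() in exts)
    random.Random(seed).shuffle(files)
    n_test, n_val = int(len(files) * test_frac), int(len(files) * val_frac)
    splits = {"test": files[:n_test],
              "val": files[n_test:n_test + n_val],
              "train": files[n_test + n_val:]}
    for split, items in splits.items():
        out = Path(dst_root) / split
        out.mkdir(parents=True, exist_ok=True)
        for f in items:
            shutil.copy2(f, out / f.name)   # copy each image into the flat split folder

# e.g. flatten_split("downloads/RaidaR/Part1", "datasets/raidar")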

How to use

Training

cd code/translation 
python3 cli.py train

Evaluation

All pretrained weights are planned to be provided. If you do not have pretrained (or self-trained) weights in the ./ckpt directory, please download them here.

cd code/translation
python3 cli.py evaluate --ckpt_name=$CKPT_NAME

Demo

For simple image translation demo, run

cd code/translation
python3 cli.py infer --ckpt_name=$CKPT_NAME

You can view the translated file in the terminal using imgcat in the ./test directory.

cd test
./imgcat results/(name/of/file.png)

Evaluation Metric

  • mIoU: Intersection-over-Union (IoU) measures the overlap between the predicted segmentation map and the ground truth, divided by their union. With multiple classes, we average the per-class IoU (mIoU) to indicate the overall performance of the model; a minimal sketch of the computation follows below.
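
The following is a generic mIoU computation for integer label maps, shown only to make the metric concrete; it is not necessarily the exact evaluation script used in this repository.

import numpy as np

def mean_iou(pred, gt, num_classes):
    """Per-class IoU between two integer label maps, averaged into mIoU."""
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        gt_c = (gt == c)
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:                       # class absent in both maps: skip it
            continue
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else float("nan")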

Downstream Tasks

Image Localization/Retrieval

We use the SIFT algorithm for keypoint detection. OpenCV provides a ready-to-use SIFT module; more information about cv::SIFT can be found here. The SIFT detector uses DoG and 4 octaves starting with a two-times up-sampled version of the original image, 3 scales per octave, a peak threshold of …, an edge threshold of 10, and a maximum of 2 detected orientations per keypoint location. These values have been optimized for the purpose of SfM and are, e.g., used as defaults in COLMAP.

Pipeline

  1. Detect keypoints using the SIFT detector and compute their descriptors
  2. Match descriptor vectors with a brute-force (BF) matcher
  3. Filter matches using Lowe's ratio test (ratio_thresh = 0.7)
  4. Draw the matches (a minimal OpenCV sketch of these steps is given below)
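
The sketch below illustrates the four steps with OpenCV; the image paths are placeholders, and this is an illustration of the pipeline rather than the repository's exact evaluation code.

import cv2

# Placeholder file names: a translated (day-like) image and a reference daytime image.
img1 = cv2.imread("translated_day.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("reference_day.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                          # 1. detect keypoints, compute descriptors
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)              # 2. brute-force matcher (L2 norm for SIFT)
knn_matches = matcher.knnMatch(des1, des2, k=2)

ratio_thresh = 0.7                                # 3. Lowe's ratio test
good = [m for m, n in knn_matches if m.distance < ratio_thresh * n.distance]

vis = cv2.drawMatches(img1, kp1, img2, kp2, good, None)   # 4. draw matches
cv2.imwrite("matches.png", vis)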

figure3

Semantic Segmentation

A DeepLabV3 model pretrained on the Cityscapes dataset is used for the semantic segmentation task. Unfortunately, the source code we used for this task has been deleted; we will provide an alternative for testing soon.
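
In the meantime, a substitute sketch using torchvision's DeepLabV3 is shown below. Note that torchvision's bundled pretrained weights are trained on COCO/VOC rather than Cityscapes, so Cityscapes weights would have to be loaded separately; this is not the original evaluation code.

import torch
from torchvision import models, transforms
from PIL import Image

# Default torchvision weights (COCO/VOC); swap in Cityscapes weights for a faithful comparison.
model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("translated_day.png").convert("RGB")    # placeholder path
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]      # (1, num_classes, H, W) logits
pred = out.argmax(1).squeeze(0)                           # per-pixel class indices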

The RaidaR dataset can be downloaded here. figure4

The BDD100K dataset can be downloaded here. figure5

Object Detection

A YOLOv3-tiny model pretrained on the PASCAL VOC 2007+2012 dataset is used for the object detection task. Source code can be found here. mAP is measured at 0.5 IoU. The author of YOLOv3 notes that you can easily trade off speed and accuracy by changing the size of the model; we choose YOLOv3-tiny for our purpose and set the detection threshold to 0.5.
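
For reference, the box-level IoU used in the mAP@0.5 matching criterion can be sketched as follows; this is a generic illustration (boxes assumed to be (x1, y1, x2, y2) in pixels), not this repository's evaluation code.

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive when its IoU with a ground-truth box is >= 0.5.
print(box_iou((10, 10, 50, 50), (30, 30, 70, 70)) >= 0.5)   # False (IoU ~= 0.14)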

figure6

Reference

@article{enlighten,
  author={Jiang, Yifan and Gong, Xinyu and Liu, Ding and Cheng, Yu and Fang, Chen and Shen, Xiaohui and Yang, Jianchao and Zhou, Pan and Wang, Zhangyang},
  journal={IEEE Transactions on Image Processing}, 
  title={EnlightenGAN: Deep Light Enhancement Without Paired Supervision}, 
  year={2021}
}


@article{wei2018deep,
  title={Deep retinex decomposition for low-light enhancement},
  author={Wei, Chen and Wang, Wenjing and Yang, Wenhan and Liu, Jiaying},
  journal={arXiv preprint arXiv:1808.04560},
  year={2018}
}

@article{goodfellow2014,
  title={Generative adversarial networks},
  author={Goodfellow, Ian J and Pouget-Abadie, Jean and Mirza, Mehdi and Xu, Bing and Warde-Farley, David and Ozair, Sherjil and Courville, Aaron and Bengio, Yoshua},
  journal={arXiv preprint arXiv:1406.2661},
  year={2014}
}

@inproceedings{srgan2017,
  title={Photo-realistic single image super-resolution using a generative adversarial network},
  author={Ledig, Christian and Theis, Lucas and Husz{\'a}r, Ferenc and Caballero, Jose and Cunningham, Andrew and Acosta, Alejandro and Aitken, Andrew and Tejani, Alykhan and Totz, Johannes and Wang, Zehan and others},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={4681--4690},
  year={2017}
}

@article{wu2021contrastive,
  title={Contrastive Learning for Compact Single Image Dehazing},
  author={Wu, Haiyan and Qu, Yanyun and Lin, Shaohui and Zhou, Jian and Qiao, Ruizhi and Zhang, Zhizhong and Xie, Yuan and Ma, Lizhuang},
  journal={arXiv preprint arXiv:2104.09367},
  year={2021}
}

@inproceedings{johnson2016perceptual,
  title={Perceptual losses for real-time style transfer and super-resolution},
  author={Johnson, Justin and Alahi, Alexandre and Fei-Fei, Li},
  booktitle={European conference on computer vision},
  pages={694--711},
  year={2016},
  organization={Springer}
}

@inproceedings{mao2017least,
  title={Least squares generative adversarial networks},
  author={Mao, Xudong and Li, Qing and Xie, Haoran and Lau, Raymond YK and Wang, Zhen and Paul Smolley, Stephen},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  pages={2794--2802},
  year={2017}
}

@inproceedings{liu2019unsupervised,
  title={Unsupervised Single Image Dehazing via Disentangled Representation},
  author={Liu, Qian},
  booktitle={Proceedings of the 3rd International Conference on Video and Image Processing},
  pages={106--111},
  year={2019}
}

@article{zheng2020forkgan,
  title={ForkGAN: Seeing into the rainy night},
  author={Zheng, Ziqiang and Wu, Yang and Han, Xinran and Shi, Jianbo},
  year={2020}
}

@inproceedings{tsai2018learning,
  title={Learning to adapt structured output space for semantic segmentation},
  author={Tsai, Yi-Hsuan and Hung, Wei-Chih and Schulter, Samuel and Sohn, Kihyuk and Yang, Ming-Hsuan and Chandraker, Manmohan},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={7472--7481},
  year={2018}
}

@article{asokan2020teaching,
  title={Teaching a GAN What Not to Learn},
  author={Asokan, Siddarth and Seelamantula, Chandra Sekhar},
  journal={arXiv preprint arXiv:2010.15639},
  year={2020}
}

@inproceedings{zhu2017unpaired,
  title={Unpaired image-to-image translation using cycle-consistent adversarial networks},
  author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  pages={2223--2232},
  year={2017}
}

@inproceedings{krull2019,
  title={Noise2void-learning denoising from single noisy images},
  author={Krull, Alexander and Buchholz, Tim-Oliver and Jug, Florian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2129--2137},
  year={2019}
}

@inproceedings{noise2self,
  title={Noise2self: Blind denoising by self-supervision},
  author={Batson, Joshua and Royer, Loic},
  booktitle={International Conference on Machine Learning},
  pages={524--533},
  year={2019},
  organization={PMLR}
}

@article{neighbor2neighbor,
  title={Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images},
  author={Huang, Tao and Li, Songjiang and Jia, Xu and Lu, Huchuan and Liu, Jianzhuang},
  journal={arXiv preprint arXiv:2101.02824},
  year={2021}
}

@article{versatile,
  title={Versatile auxiliary classifier with generative adversarial network (vac+ gan), multi class scenarios},
  author={Bazrafkan, Shabab and Corcoran, Peter},
  journal={arXiv preprint arXiv:1806.07751},
  year={2018}
}

@inproceedings{conditional,
  title={Conditional image synthesis with auxiliary classifier gans},
  author={Odena, Augustus and Olah, Christopher and Shlens, Jonathon},
  booktitle={International conference on machine learning},
  pages={2642--2651},
  year={2017},
  organization={PMLR}
}

@misc{jin2018unsupervised,
      title={Unsupervised Single Image Deraining with Self-supervised Constraints}, 
      author={Xin Jin and Zhibo Chen and Jianxin Lin and Zhikai Chen and Wei Zhou},
      year={2018}
}

@misc{dark-zurich,
      title={Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation}, 
      author={Christos Sakaridis and Dengxin Dai and Luc Van Gool},
      year={2019},
      eprint={1901.05946},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{raidar,
      title={RaidaR: A Rich Annotated Image Dataset of Rainy Street Scenes}, 
      author={Jiongchao Jin and Arezou Fatemi and Wallace Lira and Fenggen Yu and Biao Leng and Rui Ma and Ali Mahdavi-Amiri and Hao Zhang},
      year={2021},
      eprint={2104.04606},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{bdd100k,
      title={BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning}, 
      author={Fisher Yu and Haofeng Chen and Xin Wang and Wenqi Xian and Yingying Chen and Fangchen Liu and Vashisht Madhavan and Trevor Darrell},
      year={2020},
      eprint={1805.04687},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{exdark,
      title={Getting to Know Low-light Images with The Exclusively Dark Dataset}, 
      author={Yuen Peng Loh and Chee Seng Chan},
      year={2018},
      eprint={1805.11227},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Contact Me

To contact me, send an email to [email protected]
