Official PyTorch implementation of 'GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network' (NeurIPS 2020)

Overview

Official implementation of GOCor

This is the official implementation of our paper:

GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network.
Authors: Prune Truong *, Martin Danelljan *, Luc Van Gool, Radu Timofte

[Paper][Website][Video]

The feature correlation layer serves as a key neural network module in numerous computer vision problems that involve dense correspondences between image pairs. It predicts a correspondence volume by evaluating dense scalar products between feature vectors extracted from pairs of locations in two images. However, this point-to-point feature comparison is insufficient when disambiguating multiple similar regions in an image, severely affecting the performance of the end task. This work proposes GOCor, a fully differentiable dense matching module, acting as a direct replacement to the feature correlation layer. The correspondence volume generated by our module is the result of an internal optimization procedure that explicitly accounts for similar regions in the scene. Moreover, our approach is capable of effectively learning spatial matching priors to resolve further matching ambiguities.
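For intuition, a minimal sketch of the plain global feature correlation layer that GOCor replaces is given below (the function name is illustrative, and the implementations used in the evaluated networks may differ in layout and normalization):

import torch

def feature_correlation(feat_ref, feat_query):
    # feat_ref, feat_query: feature maps of shape (B, C, H, W) from the two images.
    # Returns a correspondence volume of shape (B, H*W, H, W): the channel dimension
    # indexes reference locations, the spatial dimensions index query locations, and
    # each entry is the scalar product of the two feature vectors.
    b, c, h, w = feat_ref.shape
    ref = feat_ref.view(b, c, h * w)              # (B, C, H*W)
    query = feat_query.view(b, c, h * w)          # (B, C, H*W)
    corr = torch.bmm(ref.transpose(1, 2), query)  # (B, H*W_ref, H*W_query)
    return corr.view(b, h * w, h, w)

GOCor acts as a drop-in replacement for this point-to-point comparison: the correspondence volume it outputs is instead the result of an internal optimization procedure.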


Also check out our related work GLU-Net and its code here!


In this repo, we only provide code to test on image pairs, as well as the pre-trained weights of the networks evaluated in the GOCor paper. We will not release the training code. However, since the GOCor module is a plug-in replacement for the feature correlation layer, it can be integrated into any architecture and trained using that architecture's original training code. We will release general training and evaluation code in a general dense correspondence repo, coming soon here.


For any questions, issues or recommendations, please contact Prune at [email protected]

Citation

If our project is helpful for your research, please consider citing:

@inproceedings{GOCor_Truong_2020,
      title = {{GOCor}: Bringing Globally Optimized Correspondence Volumes into Your Neural Network},
      author    = {Prune Truong 
                   and Martin Danelljan 
                   and Luc Van Gool 
                   and Radu Timofte},
      year = {2020},
      booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information
                   Processing Systems 2020, {NeurIPS} 2020}
}

1. Installation

Note that the models were trained with PyTorch 1.0. PyTorch versions up to 1.7 were tested for inference but NOT for training, so I cannot guarantee that the models train smoothly with higher PyTorch versions.

  • Create and activate a conda environment with Python 3.7:
conda create -n GOCor_env python=3.7
conda activate GOCor_env
  • Install all dependencies (except for CuPy, see below) by running the following command:
pip install -r requirements.txt

Note: CUDA is required to run the code, since the correlation layer is implemented in CUDA using CuPy, which is therefore a required dependency. CuPy can be installed using pip install cupy, or alternatively using one of the provided binary packages as outlined in the CuPy repository. The code was developed using Python 3.7, PyTorch 1.0 and CUDA 9.0, which is why I installed cupy for cuda90. For another CUDA version, change the package name accordingly.

pip install cupy-cuda90==7.8.0 --no-cache-dir 

There are known issues with the latest versions of CuPy, so install CuPy version 7.8.0 regardless of your CUDA version. For example, on CUDA 10.0:

pip install cupy-cuda100==7.8.0 --no-cache-dir 
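
To check that the installed CuPy build matches your PyTorch/CUDA setup, a quick sanity check is:

python -c "import torch, cupy; print('torch CUDA:', torch.version.cuda, torch.cuda.is_available()); print('cupy devices:', cupy.cuda.runtime.getDeviceCount())"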
  • Download an archive with the pre-trained models (see the Models section below) and extract it to the project folder.

2. Models

Pre-trained weights can be downloaded from here. We provide the pre-trained weights of:

  • GLU-Net trained on the static data; given for reference, these correspond to the weights 'GLUNet_DPED_CityScape_ADE.pth' that we provided here
  • GLU-Net-GOCor trained on the static data, corresponding to the network in the GOCor paper
  • GLU-Net trained on the dynamic data
  • GLU-Net-GOCor trained on the dynamic data, corresponding to the network in the GOCor paper
  • PWC-Net fine-tuned on chairs-things (by us), given for reference
  • PWC-Net-GOCor fine-tuned on chairs-things, corresponding to the network in the GOCor paper
  • PWC-Net further fine-tuned on Sintel (by us), for reference
  • PWC-Net-GOCor further fine-tuned on Sintel, corresponding to the network in the GOCor paper

For reference, you can also use the weights from the original PWC-Net repo, where the networks are trained on chairs-things and further fine-tuned on Sintel. As explained in the paper, for training our PWC-Net-based models, we initialize the network parameters with the pre-trained weights trained on chairs-things.

All networks are created in 'model_selection.py'.
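
A hypothetical usage sketch for building a network programmatically (the function name and argument names below are illustrative assumptions, not necessarily the actual interface; check model_selection.py):

from model_selection import select_model  # assumed entry point, see model_selection.py

# illustrative arguments: model name and pre-trained weight type as listed above
network = select_model(model_name='GLUNet_GOCor', pre_trained_model_type='dynamic',
                       path_to_pre_trained_models='pre_trained_models/')
network = network.cuda().eval()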

3. Test on your own images

You can test the networks on a pair of images using test_models.py and the provided trained model weights. You must first choose the model and pre-trained weights to use. The inputs are the paths to the query and reference images. The images are passed to the network, which outputs the dense flow field relating the reference to the query image. The query is then warped according to the estimated flow, and a figure is saved.
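
For reference, the warping step amounts to bilinear sampling of the query image at locations displaced by the estimated flow. A minimal sketch, assuming a dense flow in pixels with the x-displacement in the first channel (the actual code in test_models.py may differ):

import torch
import torch.nn.functional as F

def warp(image, flow):
    # image: (B, C, H, W); flow: (B, 2, H, W) in pixels, channel 0 = x, channel 1 = y
    b, _, h, w = image.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w))      # base pixel grid
    grid = torch.stack((xx, yy), dim=0).float().to(image.device)   # (2, H, W)
    vgrid = grid.unsqueeze(0) + flow                                # displace grid by the flow
    # normalize sampling locations to [-1, 1] as expected by grid_sample
    vgrid_x = 2.0 * vgrid[:, 0] / max(w - 1, 1) - 1.0
    vgrid_y = 2.0 * vgrid[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((vgrid_x, vgrid_y), dim=3)              # (B, H, W, 2)
    # align_corners is only accepted by PyTorch >= 1.3; drop the argument for older versions
    return F.grid_sample(image, grid_norm, align_corners=True)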

For this pair of images (provided to check that the code is working properly) and using GLU-Net-GOCor trained on the dynamic dataset, the output is:

python test_models.py --model GLUNet_GOCor --pre_trained_model dynamic --path_query_image images/eth3d_query.png --path_reference_image images/eth3d_reference.png --write_dir evaluation/

Additional optional argument:
--pre_trained_models_dir (default is pre_trained_models/)

(example output figure, saved to evaluation/)

For baseline GLU-Net, the output is instead:

python test_models.py --model GLUNet --pre_trained_model dynamic --path_query_image images/eth3d_query.png --path_reference_image images/eth3d_reference.png --write_dir evaluation/

(example output figure, saved to evaluation/)

And for PWC-Net-GOCor and baseline PWC-Net:

python test_models.py --model PWCNet_GOCor --pre_trained_model chairs_things --path_query_image images/kitti2015_query.png --path_reference_image images/kitti2015_reference.png --write_dir evaluation/

(example output figure, saved to evaluation/)

python test_models.py --model PWCNet --pre_trained_model chairs_things --path_query_image images/kitti2015_query.png --path_reference_image images/kitti2015_reference.png --write_dir evaluation/

(example output figure, saved to evaluation/)


Possible model choices are: GLUNet, GLUNet_GOCor, PWCNet, PWCNet_GOCor

Possible pre-trained model choices are: static, dynamic, chairs_things, chairs_things_ft_sintel

4. Acknowledgement

We borrow code from public projects such as pytracking, GLU-Net, DGC-Net, PWC-Net, NC-Net, Flow-Net-Pytorch, RAFT, and others.

Owner
Prune Truong
PhD Student in Computer Vision Lab of ETH Zurich