Code for our method RePRI for Few-Shot Segmentation. Paper at http://arxiv.org/abs/2012.06166

Overview

Region Proportion Regularized Inference (RePRI) for Few-Shot Segmentation

In this repo, we provide the code for our paper "Few-Shot Segmentation Without Meta-Learning: A Good Transductive Inference Is All You Need?", available at https://arxiv.org/abs/2012.06166.

Getting Started

Minimum requirements

  1. Software:
  • torch==1.7.0
  • numpy==1.18.4
  • cv2==4.2.0
  • pyyaml==5.3.1
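
As a convenience, these requirements can be installed in one go with pip; this is a minimal sketch, assuming cv2 maps to the opencv-python package on PyPI (the exact pin may differ in your environment):

pip install torch==1.7.0 numpy==1.18.4 opencv-python==4.2.0.34 pyyaml==5.3.1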

For both training and testing, metrics monitoring is done through visdom_logger (https://github.com/luizgh/visdom_logger). To install this package with pip, use the following command:

pip install git+https://github.com/luizgh/visdom_logger.git
  2. Hardware: An 11 GB+ CUDA-enabled GPU

Download data

All pre-processed data from Google Drive

We provide the versions of Pascal-VOC 2012 and MS-COCO 2017 used in this work at https://drive.google.com/file/d/1Lj-oBzBNUsAqA9y65BDrSQxirV8S15Rk/view?usp=sharing. You can download the full .zip and directly extract it at the root of this repo.
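
If you prefer the command line, here is a sketch using the gdown package (an assumption on our part; any Google Drive downloader works), with the file id taken from the link above:

pip install gdown
gdown https://drive.google.com/uc?id=1Lj-oBzBNUsAqA9y65BDrSQxirV8S15Rk -O data.zip
unzip data.zip  # assumes the archive contains the data/ folder at its root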

If the previous download failed

Here is the structure of the data folder for you to reproduce:

data
├── coco
│   ├── annotations
│   ├── train
│   ├── train2014
│   ├── val
│   └── val2014
└── pascal
    ├── JPEGImages
    └── SegmentationClassAug

Pascal: The JPEG images can be found in the PascalVOC 2012 toolkit, to be downloaded at PascalVOC2012, and the pre-processed ground-truth masks in SegmentationClassAug.

Coco: Coco 2014 train and validation images and annotations can be downloaded at Coco. Once this is done, you will have to generate the subfolders coco/train and coco/val (ground-truth masks). Both folders can be generated by executing the python script data/coco/create_masks.py (note that the script uses the pycocotools package, which can be found at https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools):

cd data/coco
python create_masks.py
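
As a quick sanity check, and assuming the script writes the masks to coco/train and coco/val as in the tree above, you can list a few of the generated files from the root of the repo:

ls data/coco/train | head
ls data/coco/val | head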

About the train/val splits

The train/val splits are directly provided in lists/. The procedure used to obtain them is explained at https://github.com/Jia-Research-Lab/PFENet.

Download pre-trained models

Pre-trained backbones

First, you will need to download the ImageNet pre-trained backbones at https://drive.google.com/drive/folders/1Hrz1wOxOZm4nIIS7UMJeL79AQrdvpj6v and put them under initmodel/. These will be used if you decide to train your models from scratch.
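
For illustration only, placing the downloaded weights could look like the following; the .pth filenames here are hypothetical and depend on the contents of the Drive folder:

mkdir -p initmodel
mv ~/Downloads/resnet50_v2.pth ~/Downloads/resnet101_v2.pth initmodel/  # hypothetical filenames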

Pre-trained models

We directly provide the full pre-trained models at https://drive.google.com/file/d/1iuMAo5cJ27oBdyDkUI0JyGIEH60Ln2zm/view?usp=sharing. You can download them and directly extract them at the root of this repo. This includes models with Resnet50 and Resnet101 backbones for Pascal-5i, and with a Resnet50 backbone for Coco-20i.

Overview of the repo

Data are located in data/. All the code is provided in src/. Default configuration files can be found in config_files/. Training and testing scripts are located in scripts/. The train/validation splits for each dataset are in lists/.

Training (optional)

If you want to use the pre-trained models, this step is optional. Otherwise, you can train your own models from scratch with the scripts/train.sh script, as follows.

bash scripts/train.sh {data} {fold} {[gpu_ids]} {layers}

For instance, to train a Resnet50-based model on fold 0 of Pascal-5i using GPU 1, use:

bash scripts/train.sh pascal 0 [1] 50

Note that this code supports distributed training. If you want to train on multiple GPUs, simply replace [1] in the previous example with the list of GPU ids you want to use, as shown below.
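
For instance, a sketch of the same fold-0 Pascal-5i experiment on GPUs 0 and 1 (assuming both are available):

bash scripts/train.sh pascal 0 [0,1] 50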

Testing

To test your models, use the scripts/test.sh script; the general syntax is:

bash scripts/test.sh {data} {shot} {[gpu_ids]} {layers}

This script will test successively on all folds of the given dataset. Specific commands for several experiments are given below.

Pascal-5i

Results:

| (1 shot/5 shot) | Arch       | Fold-0      | Fold-1      | Fold-2      | Fold-3      | Mean        |
|-----------------|------------|-------------|-------------|-------------|-------------|-------------|
| RePRI           | Resnet-50  | 59.8 / 64.6 | 68.3 / 71.4 | 62.1 / 71.1 | 48.5 / 59.3 | 59.7 / 66.6 |
| Oracle-RePRI    | Resnet-50  | 72.4 / 75.1 | 78.0 / 80.8 | 77.1 / 81.4 | 65.8 / 74.4 | 73.3 / 77.9 |
| RePRI           | Resnet-101 | 59.6 / 66.2 | 68.3 / 71.4 | 62.2 / 67.0 | 47.2 / 57.7 | 59.4 / 65.6 |
| Oracle-RePRI    | Resnet-101 | 73.9 / 76.8 | 79.7 / 81.7 | 76.1 / 79.5 | 65.1 / 74.5 | 73.7 / 78.1 |

Command:

bash scripts/test.sh pascal 1 [0] 50  # 1-shot
bash scripts/test.sh pascal 5 [0] 50  # 5-shot

Coco-20i

Results:

| (1 shot/5 shot) | Arch      | Fold-0      | Fold-1      | Fold-2      | Fold-3      | Mean        |
|-----------------|-----------|-------------|-------------|-------------|-------------|-------------|
| RePRI           | Resnet-50 | 32.0 / 39.3 | 38.7 / 45.4 | 32.7 / 39.7 | 33.1 / 41.8 | 34.1 / 41.6 |
| Oracle-RePRI    | Resnet-50 | 49.3 / 51.5 | 51.4 / 60.8 | 38.2 / 54.7 | 41.6 / 55.2 | 45.1 / 55.5 |

Command:

bash scripts/test.sh coco 1 [0] 50  # 1-shot
bash scripts/test.sh coco 5 [0] 50  # 5-shot

Coco-20i -> Pascal-VOC

[Figure: folds used for the cross-domain Coco-20i -> Pascal-VOC experiments]

Results:

| (1 shot/5 shot) | Arch      | Fold-0      | Fold-1      | Fold-2      | Fold-3      | Mean        |
|-----------------|-----------|-------------|-------------|-------------|-------------|-------------|
| RePRI           | Resnet-50 | 52.8 / 57.7 | 64.0 / 66.1 | 64.1 / 67.6 | 71.5 / 73.1 | 63.1 / 66.2 |
| Oracle-RePRI    | Resnet-50 | 69.6 / 73.5 | 71.7 / 74.9 | 77.6 / 82.2 | 86.2 / 88.1 | 76.2 / 79.7 |

Command:

bash scripts/test.sh coco2pascal 1 [0] 50  # 1-shot
bash scripts/test.sh coco2pascal 5 [0] 50  # 5-shot

Monitoring metrics

For both training and testing, you can monitor metrics using visdom_logger (https://github.com/luizgh/visdom_logger). If you have not installed it yet, clone the repo and install it with pip:

git clone https://github.com/luizgh/visdom_logger.git
pip install -e visdom_logger

Then, you need to start a visdom server with:

python -m visdom.server -port 8098

Finally, add the line visdom_port 8098 to the options in scripts/train.sh or scripts/test.sh, and metrics will be displayed at this port. You can then monitor them in your browser.

Contact

For further questions or details, please post an issue or directly reach out to Malik Boudiaf ([email protected])

Acknowledgments

We gratefully thank the authors of https://github.com/Jia-Research-Lab/PFENet and https://github.com/hszhao/semseg, whose code inspired parts of ours.
