PyTorch implementation of SenFormer: Efficient Self-Ensemble Framework for Semantic Segmentation

Overview


SenFormer: Efficient Self-Ensemble Framework for Semantic Segmentation

Efficient Self-Ensemble Framework for Semantic Segmentation by Walid Bousselham, Guillaume Thibault, Lucas Pagano, Archana Machireddy, Joe Gray, Young Hwan Chang and Xubo Song.

This repository contains the official PyTorch implementation of the training & evaluation code and the pretrained models for SenFormer.


💾 Code Snippet (SenFormer) | ⌨️ Code Snippet (FPNT) | 📜 Paper | Paper (Chinese)

🔨 Installation

Conda environment

  • Clone this repository and enter it: git clone git@github.com:WalBouss/SenFormer.git && cd SenFormer.
  • Create a conda environment conda create -n senformer python=3.8, and activate it conda activate senformer.
  • Install PyTorch and torchvision: conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.2 -c pytorch (you may switch to other versions by specifying the version numbers).
  • Install the MMCV library: pip install mmcv-full==1.4.0
  • Install the MMSegmentation library by running pip install -e . in the SenFormer directory.
  • Install the remaining requirements: pip install timm einops

Here is a full script for setting up a conda environment to use SenFormer (with CUDA 10.2 and PyTorch 1.7.1):

conda create -n senformer python=3.8
conda activate senformer
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.2 -c pytorch

git clone git@github.com:WalBouss/SenFormer.git && cd SenFormer
pip install mmcv-full==1.4.0
pip install -e .
pip install timm einops
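
Once the environment is set up, the following quick check (a minimal sketch, assuming the install commands above completed successfully) confirms that the libraries import and reports their versions:

# Minimal environment sanity check for the install above.
import torch
import mmcv
import mmseg
import timm    # noqa: F401
import einops  # noqa: F401

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmcv-full:", mmcv.__version__)  # expected: 1.4.0
print("mmseg:", mmseg.__version__)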

Datasets

For dataset preparation, please refer to the MMSegmentation guidelines.

Pretrained weights

ResNet pretrained weights will be automatically downloaded before training.

For Swin Transformer ImageNet pretrained weights, you can either:

  • run bash tools/download_swin_weights.sh in the SenFormer project to download all Swin Transformer pretrained weights (it will place the weights under the pretrain/ folder).
  • download the desired backbone weights here: Swin-T, Swin-S, Swin-B, Swin-L, and place them under the pretrain/ folder (a quick way to sanity-check a downloaded checkpoint is sketched after this list).
  • download the weights from the official repository, then convert them to MMSegmentation format following the MMSegmentation guidelines.
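
Whichever option you choose, a short check like the one below can confirm that a backbone checkpoint is readable before training. This is only a sketch: the path is a placeholder based on the official Swin-Tiny filename and may differ from what download_swin_weights.sh actually produces.

import torch

# Placeholder path: adjust to the actual file placed under pretrain/.
ckpt_path = "pretrain/swin_tiny_patch4_window7_224.pth"

ckpt = torch.load(ckpt_path, map_location="cpu")
# Official Swin releases nest the weights under "model", while
# mmsegmentation-converted checkpoints typically use "state_dict".
state_dict = ckpt.get("state_dict", ckpt.get("model", ckpt))
print(len(state_dict), "tensors loaded, e.g.", list(state_dict)[:3])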

🎯 Model Zoo

SenFormer models with ResNet and Swin Transformer backbones, trained on ADE20K, COCO-Stuff 10K, Pascal Context, and Cityscapes.

ADE20K

| Backbone | mIoU | mIoU (MS) | #params | FLOPs | Resolution | Download |
|---|---|---|---|---|---|---|
| ResNet-50 | 44.6 | 45.6 | 144M | 179G | 512x512 | model / config |
| ResNet-101 | 46.5 | 47.0 | 163M | 199G | 512x512 | model / config |
| Swin-Tiny | 46.0 | 46.4 | 144M | 179G | 512x512 | model / config |
| Swin-Small | 49.2 | 50.4 | 165M | 202G | 512x512 | model / config |
| Swin-Base | 51.8 | 53.2 | 204M | 242G | 640x640 | model / config |
| Swin-Large | 53.1 | 54.2 | 314M | 546G | 640x640 | model / config |

COCO-Stuff 10K

| Backbone | mIoU | mIoU (MS) | #params | Resolution | Download |
|---|---|---|---|---|---|
| ResNet-50 | 39.0 | 39.7 | 144M | 512x512 | model / config |
| ResNet-101 | 39.6 | 40.6 | 163M | 512x512 | model / config |
| Swin-Large | 49.1 | 50.1 | 314M | 512x512 | model / config |

Pascal Context

| Backbone | mIoU | mIoU (MS) | #params | Resolution | Download |
|---|---|---|---|---|---|
| ResNet-50 | 53.2 | 54.3 | 144M | 480x480 | model / config |
| ResNet-101 | 55.1 | 56.6 | 163M | 480x480 | model / config |
| Swin-Large | 62.4 | 64.0 | 314M | 480x480 | model / config |

Cityscapes

| Backbone | mIoU | mIoU (MS) | #params | Resolution | Download |
|---|---|---|---|---|---|
| ResNet-50 | 78.8 | 80.1 | 144M | 512x1024 | model / config |
| ResNet-101 | 80.3 | 81.4 | 163M | 512x1024 | model / config |
| Swin-Large | 82.2 | 83.3 | 314M | 512x1024 | model / config |

🔭 Inference

Download checkpoint weights from the Model Zoo above, for example SenFormer with a ResNet-50 backbone on ADE20K:

Inference on a dataset

# Single-gpu testing
python tools/test.py senformer_configs/senformer/ade20k/senformer_fpnt_r50_512x512_160k_ade20k.py /path/to/checkpoint_file

# Multi-gpu testing
./tools/dist_test.sh senformer_configs/senformer/ade20k/senformer_fpnt_r50_512x512_160k_ade20k.py /path/to/checkpoint_file <GPU_NUM>

# Multi-gpu, multi-scale testing
./tools/dist_test.sh senformer_configs/senformer/ade20k/senformer_fpnt_r50_512x512_160k_ade20k.py /path/to/checkpoint_file <GPU_NUM> --aug-test

Inference on custom data

To generate segmentation maps for your own data, run the following command:

python demo/image_demo.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE}

Run python demo/image_demo.py --help for additional options.
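
For use inside a Python script or notebook, the standard MMSegmentation inference API also works with SenFormer configs. The snippet below is a minimal sketch; the image and checkpoint paths are placeholders.

from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot

# Placeholder paths: use a config and checkpoint from the Model Zoo above.
config_file = 'senformer_configs/senformer/ade20k/senformer_fpnt_r50_512x512_160k_ade20k.py'
checkpoint_file = '/path/to/checkpoint_file'
img = '/path/to/image.jpg'

# Build the model from the config and load the trained weights.
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single image, then visualize and save the result.
result = inference_segmentor(model, img)
show_result_pyplot(model, img, result)
model.show_result(img, result, out_file='result.png')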

🔩 Training

Follow the instructions above to download the ImageNet pretrained weights for the backbones, then run one of the following commands:

# Single-gpu training
python tools/train.py path/to/model/config 

# Multi-gpu training
./tools/dist_train.sh path/to/model/config <GPU_NUM>

For example, to train SenFormer with a ResNet-50 backbone on ADE20K:

# Single-gpu training
python tools/train.py senformer_configs/senformer/ade20k/senformer_fpnt_r50_512x512_160k_ade20k.py 

# Multi-gpu training
./tools/dist_train.sh senformer_configs/senformer/ade20k/senformer_fpnt_r50_512x512_160k_ade20k.py <GPU_NUM>

Note that the default learning rate and training schedule are for an effective batch size of 16 (e.g., 8 GPUs with 2 images per GPU).
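
In MMSegmentation-style configs, the per-GPU batch size is controlled by samples_per_gpu. Assuming the SenFormer configs follow the standard layout, the relevant fragment looks roughly like this:

# Sketch of the data settings in an MMSegmentation-style config.
# With 8 GPUs, samples_per_gpu=2 gives the effective batch size of 16
# that the default learning rate and schedule assume.
data = dict(
    samples_per_gpu=2,  # images per GPU
    workers_per_gpu=2,  # dataloader workers per GPU
)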

Acknowledgement

This code is built on the MMSegmentation codebase and also uses the timm and einops libraries.

📚 Citation

If you find this repository useful, please consider citing our work 📝 and giving it a star 🌟:

@article{bousselham2021senformer,
  title={Efficient Self-Ensemble Framework for Semantic Segmentation},
  author={Bousselham, Walid and Thibault, Guillaume and Pagano, Lucas and Machireddy, Archana and Gray, Joe and Chang, Young Hwan and Song, Xubo},
  journal={arXiv preprint arXiv:2111.13280},
  year={2021}
}