An example of implementing a new backbone with the OpenMMLab framework.

Overview

A backbone example based on the OpenMMLab framework

English | 简体中文

Introduction

This is a template repository showing how to use the OpenMMLab framework to develop a new backbone for multiple vision tasks.

With the OpenMMLab framework, you can easily develop a new backbone and benchmark it on classification, detection and segmentation tasks with MMClassification, MMDetection and MMSegmentation.

Setup environment

It requires PyTorch and the following OpenMMLab packages:

  • MIM: A command-line tool to manage OpenMMLab packages and experiments.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMClassification: OpenMMLab image classification toolbox and benchmark. Besides classification, it also serves as a repository for various backbones.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.

Assuming you have prepared your Python and PyTorch environment, just use the following commands to set up the environment.

pip install openmim mmcls mmdet mmsegmentation
mim install mmcv-full

Data preparation

The data directory structure should look like below:

data/
├── imagenet
│   ├── train
│   ├── val
│   └── meta
│       ├── train.txt
│       └── val.txt
├── ade
│   └── ADEChallengeData2016
│       ├── annotations
│       └── images
└── coco
    ├── annotations
    │   ├── instances_train2017.json
    │   └── instances_val2017.json
    ├── train2017
    └── val2017

Here, we only list the minimal files for training and validation on ImageNet (classification), ADE20K (segmentation) and COCO (object detection).

If you want to benchmark on more datasets or tasks, for example, panoptic segmentation with MMDetection, just organize your dataset according to MMDetection's requirements. For semantic segmentation tasks, you can organize your dataset according to this tutorial.

Usage

Implement your backbone

In this example repository, we use ConvNeXt as an example to show how to implement a backbone quickly.

  1. Create your backbone file and put it in the models folder. In this example, models/convnext.py.

    In this file, just implement your backbone with PyTorch, with two modifications (a sketch follows this list):

    1. The backbone and its modules should inherit from mmcv.runner.BaseModule. The BaseModule is almost the same as the torch.nn.Module, and additionally supports using init_cfg to specify the initialization method, including loading a pre-trained model.

    2. Use a one-line decorator as below to register the backbone class to the mmcls.models.BACKBONES registry.

      @BACKBONES.register_module(force=True)

      What is a registry? Have a look here!

  2. [Optional] If you want to add extra components for a specific task, you can add them by referring to models/det/layer_decay_optimizer_constructor.py.

  3. Add your backbone class and custom components to models/__init__.py.
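Below is a minimal sketch that puts steps 1-3 together. It is illustrative only: the file name models/my_backbone.py, the class MyBackbone and its layers are hypothetical placeholders; the real implementation in this repository is models/convnext.py.

# models/my_backbone.py (hypothetical file, for illustration)
import torch.nn as nn
from mmcv.runner import BaseModule
from mmcls.models.builder import BACKBONES

@BACKBONES.register_module(force=True)
class MyBackbone(BaseModule):
    """A toy backbone following the two modifications above."""

    def __init__(self, in_channels=3, embed_dims=64, init_cfg=None):
        # Forward init_cfg to BaseModule so the initialization method
        # (including loading a pre-trained checkpoint) can be set from configs.
        super().__init__(init_cfg=init_cfg)
        self.stem = nn.Conv2d(in_channels, embed_dims, kernel_size=4, stride=4)
        self.norm = nn.BatchNorm2d(embed_dims)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Downstream necks and heads usually expect a tuple of feature maps.
        return (self.act(self.norm(self.stem(x))), )

Then expose the class in models/__init__.py, for example with from .my_backbone import MyBackbone, so that importing the models package triggers the registration.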

Create config files

Add your config files for each task to configs/. If you are not familiar with config files, the tutorial can help you.

In short, compose your config files from base config files for the model, dataset, schedule and runtime. Of course, you can also override some settings of the base configs in your config files, or even write all settings in one file.

In this template, we provide a suite of popular base config files. You can also find more useful base configs in mmcls, mmdet and mmseg.
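As an illustration, a classification config composed from such base files could look like the sketch below. The config path, the base file names and the head settings are assumptions for illustration; adapt them to the base configs actually present in configs/. The sketch assumes the backbone already applies global average pooling before its final norm (as ConvNeXt does), so no neck is needed.

# configs/convnext/convnext-tiny_in1k.py (illustrative path)
_base_ = [
    '../_base_/datasets/imagenet_bs64_swin_224.py',       # dataset (assumed name)
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',  # schedule (assumed name)
    '../_base_/default_runtime.py',                       # runtime (assumed name)
]

# 'ConvNeXt' is the class registered in models/convnext.py;
# its arguments here are illustrative.
model = dict(
    type='ImageClassifier',
    backbone=dict(type='ConvNeXt', arch='tiny', drop_path_rate=0.1),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=768,  # output channels of ConvNeXt-tiny
        loss=dict(type='LabelSmoothLoss', label_smooth_val=0.1),
    ),
)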

Training and testing

For training and testing, you can directly use mim to train and test the model.

First, you need to add the current folder to the PYTHONPATH, so that Python can find your model files.

export PYTHONPATH=`pwd`:$PYTHONPATH 

On local single GPU:

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)"

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself

On multiple GPUs (4 GPUs here):

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)" --launcher pytorch --gpus 4

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm --launcher pytorch --gpus 4

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4 

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU --launcher pytorch --gpus 4
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself

On multiple GPUs in multiple nodes with Slurm (total 16 GPUs here):

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)" --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself
  • PARTITION: the slurm partition you are using