LightningFSL: Pytorch-Lightning implementations of Few-Shot Learning models.

Overview

LightningFSL: Few-Shot Learning with Pytorch-Lightning


In this repo, a number of pytorch-lightning implementations of FSL algorithms are provided, including official implementations of two papers:

Boosting Few-Shot Classification with View-Learnable Contrastive Learning (ICME 2021)

Rectifying the Shortcut Learning of Background for Few-Shot Learning (NeurIPS 2021)

Contents

  1. Advantages
  2. Few-shot classification Results
  3. General Guide

Advantages

This repository is built on top of LightningCLI, which is very convenient to use once you are familiar with the tool.

  1. Enabling multi-GPU training
    • Our FSL framework allows DistributedDataParallel (DDP) to be used for training few-shot learning models, which, to the best of our knowledge, was not available before. Previous work uses DataParallel (DP) instead, which is inefficient and consumes more memory. We achieve this by modifying the DDP sampler of Pytorch, making it possible to sample few-shot learning tasks across devices. See dataset_and_process/samplers.py for details; a minimal illustrative sketch appears after this list.
  2. High reimplementation accuracy
    • Our reimplementations of several FSL algorithms achieve strong performance. For example, our ResNet12 implementations of ProtoNet and the Cosine Classifier reach 76%+ and 80%+ accuracy on the 5-way 5-shot miniImageNet task, respectively. All results can be reproduced using the pre-defined configuration files in config/.
  3. Quick and convenient creation of new algorithms
    • Pytorch-lightning provides our codebase with a clean and modular structure. Built on top of LightningCLI, our codebase unifies the basic components of FSL, making it easy to implement a brand-new algorithm. Implementing an algorithm usually requires only three short additional files: one specifying the LightningModule, one specifying the classifier head, and one specifying all configurations. For example, see the code of ProtoNet (modules/PN.py, architectures/classifier/proto_head.py) and the cosine classifier (modules/cosine_classifier.py, architectures/classifier/CC_head.py).
  4. Easy reproducibility
    • Every run saves a full copy of the yaml configuration file to the logging directory, enabling exact reproducibility (by reusing that yaml file directly instead of creating a new one).
  5. Enabling both episodic/non-episodic algorithms
    • Switch between them with a single parameter, is_meta, in the configuration file.
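
As a concrete illustration of the episodic DDP sampling idea mentioned above, here is a minimal sketch (the class name EpisodicDistributedSampler and its arguments are assumptions for illustration, not the actual code in dataset_and_process/samplers.py): each rank builds a class-to-index map and draws its own share of N-way K-shot episodes, which the DataLoader consumes via batch_sampler.

```python
# Illustrative sketch only -- names and structure are assumptions,
# not the actual dataset_and_process/samplers.py implementation.
import random
import torch.distributed as dist
from torch.utils.data import Sampler

class EpisodicDistributedSampler(Sampler):
    """Yields index lists forming N-way K-shot episodes, split across DDP ranks."""

    def __init__(self, labels, n_way, n_shot, n_query, episodes_per_epoch, seed=0):
        self.class_to_indices = {}
        for idx, label in enumerate(labels):
            self.class_to_indices.setdefault(label, []).append(idx)
        self.n_way, self.n_shot, self.n_query = n_way, n_shot, n_query
        self.rank = dist.get_rank() if dist.is_initialized() else 0
        self.world_size = dist.get_world_size() if dist.is_initialized() else 1
        # Each rank handles an equal share of the episodes per epoch.
        self.episodes_per_rank = episodes_per_epoch // self.world_size
        self.seed = seed
        self.epoch = 0

    def set_epoch(self, epoch):
        self.epoch = epoch

    def __iter__(self):
        # Different seed per rank/epoch so ranks sample different episodes.
        rng = random.Random(self.seed + self.epoch * self.world_size + self.rank)
        for _ in range(self.episodes_per_rank):
            classes = rng.sample(list(self.class_to_indices), self.n_way)
            episode = []
            for c in classes:
                episode += rng.sample(self.class_to_indices[c],
                                      self.n_shot + self.n_query)
            yield episode  # one few-shot task, consumed as a batch via batch_sampler

    def __len__(self):
        return self.episodes_per_rank
```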

Implemented Few-shot classification Results

Results on few-shot classification datasets. We report the average over 2,000 randomly sampled episodes, repeated 5 times, for 1-shot/5-shot evaluation, together with 95% confidence intervals.
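
The ± values below are 95% confidence intervals over the sampled episodes. As a reference, such an interval is typically computed from per-episode accuracies as mean ± 1.96·std/√N (the snippet below is a generic sketch, not necessarily the exact evaluation code of this repo):

```python
import numpy as np

def mean_and_95ci(episode_accuracies):
    """Mean accuracy and 95% confidence interval over a list of episode accuracies."""
    accs = np.asarray(episode_accuracies, dtype=np.float64)
    mean = accs.mean()
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))  # 1.96 = z-value for a 95% CI
    return mean, ci95
```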

miniImageNet Dataset

Models                     Backbone    5-way 1-shot  5-way 5-shot  Pre-trained models
Prototypical Networks      ResNet12    61.19±0.40    76.50±0.45    link
Cosine Classifier          ResNet12    63.89±0.44    80.94±0.05    link
Meta-Baseline              ResNet12    62.65±0.65    79.10±0.29    link
S2M2                       WRN-28-10   58.85±0.20    81.83±0.15    link
S2M2+Logistic_Regression   WRN-28-10   62.36±0.42    82.01±0.24
MoCo-v2 (unsupervised)     ResNet12    52.03±0.33    72.94±0.29    link
Exemplar-v2                ResNet12    59.02±0.24    77.23±0.16    link
PN+CL                      ResNet12    63.44±0.44    79.42±0.06    link
COSOC                      ResNet12    69.28±0.49    85.16±0.42    link

General Guide

To understand the code, it is highly recommended to first quickly go through the pytorch-lightning documentation, especially LightningCLI. It won't be a long journey, since pytorch-lightning is built on top of pytorch.

Installation

Just run the command:

pip install -r requirements.txt

Running an implemented few-shot model

  1. Downloading Datasets:
  2. Training (except for Meta-Baseline and COSOC):
    • Choose the corresponding configuration file in config/ (e.g., set_config_PN.py for the PN model); inside it, set the parameter is_test to False, and set the GPU ids (multi-GPU or not), the dataset directory, the logging directory, and any other parameters you would like to change.
    • Modify the first line of run.sh accordingly (e.g., python config/set_config_PN.py).
    • To start training, run the command bash run.sh (a consolidated command walkthrough is sketched after this list).
  3. Training Meta-baseline:
    • This is a two-stage algorithm: the first stage is cross-entropy pre-training, followed by ProtoNet fine-tuning, so two training runs are needed. The first run uses the configuration file config/set_config_meta_baseline_pretrain.py. The second uses config/set_config_meta_baseline_finetune.py, with the pre-trained model path from the first stage specified by the parameter pre_trained_path in the configuration file.
  4. Training COSOC:
    • For pre-training Exemplar, choose the configuration file config/set_config_MoCo.py and set the parameter is_exampler to True.
    • For running the COS algorithm, run the command python COS.py --save_dir [save_dir] --pretrained_Exemplar_path [model_path] --dataset_path [data_path]. [save_dir] specifies the saving directory for all foreground objects; [model_path] and [data_path] specify the paths of the pre-trained model and the dataset, respectively.
    • For running an FSL algorithm with COS, choose the configuration file config/set_config_COSOC.py and set the parameter data["train_dataset_params"] to the directory of the data saved by the COS algorithm, and pre_trained_path to the directory of the pre-trained Exemplar model.
  5. Testing:
    • Choose the same configuration file as for training, set the parameter is_test to True, set pre_trained_path to the checkpoint of the trained model (with suffix '.ckpt'), and set other parameters (e.g. shot, batch size) as you desire.
    • Modify the first line of run.sh accordingly (e.g., python config/set_config_PN.py).
    • To start testing, run the command bash run.sh
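
Putting the steps above together, a typical train-then-test session looks roughly as follows (bracketed paths are placeholders; this assumes run.sh first runs the chosen set_config script and then launches run.py, with the config edits described above made inside the set_config_*.py files):

```bash
# Training: the first line of run.sh selects the algorithm, e.g.
#   python config/set_config_PN.py   (with is_test = False inside the file)
bash run.sh

# COSOC only: extract foreground objects with the COS algorithm beforehand
python COS.py --save_dir [save_dir] \
              --pretrained_Exemplar_path [model_path] \
              --dataset_path [data_path]

# Testing: set is_test = True and point pre_trained_path to a .ckpt file, then
bash run.sh
```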

Creating a new few-shot algorithm

It is quite simple to implement your own algorithm: most algorithms only require a new LightningModule and a classifier head. We give a brief description of the code structure here.

run.py

It is usually not necessary to modify this file. run.py wraps the whole training and testing procedure of an FSL algorithm, for which all configurations are specified by an individual yaml file in the config/ folder; see config/set_config_PN.py for an example. run.py contains a Python class Few_Shot_CLI, inherited from LightningCLI, which adds new hyperparameters (also specified in the configuration file) as well as the testing process for FSL.
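
As a rough illustration of what such a CLI wrapper can look like (a sketch only; the hyperparameter names and hooks below are illustrative assumptions, and the real Few_Shot_CLI in run.py will differ):

```python
# Sketch only: not the actual Few_Shot_CLI from run.py.
from pytorch_lightning.utilities.cli import LightningCLI  # pytorch_lightning.cli in newer versions

class Few_Shot_CLI(LightningCLI):
    def add_arguments_to_parser(self, parser):
        # Extra FSL hyperparameters exposed through the yaml config (names are illustrative).
        parser.add_argument("--num_test_episodes", type=int, default=2000)
        parser.add_argument("--test_shot", type=int, default=5)

    def after_fit(self):
        # Optionally run the FSL test stage right after training.
        self.trainer.test(self.model, datamodule=self.datamodule)
```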

FewShotModule

Needs modification. The folder modules contains the LightningModules for the FSL models, specifying the model components, optimizers, logging metrics, and train/val/test processes. Notably, modules/base_module.py contains the template module from which all other FSL modules inherit; see modules/PN.py and modules/cosine_classifier.py for how episodic and non-episodic models, respectively, inherit from the base module.
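
For orientation, a minimal (purely illustrative) episodic module could look like the sketch below; the real modules/base_module.py is more general, handling episodic and non-episodic branches, validation/testing, logging, and configurable optimizers:

```python
# Sketch only: not the actual modules/base_module.py.
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class FewShotModule(pl.LightningModule):
    def __init__(self, backbone, classifier, way=5, shot=5, lr=0.1):
        super().__init__()
        self.backbone = backbone      # e.g. a ResNet12 feature extractor
        self.classifier = classifier  # e.g. a prototype-based head
        self.way, self.shot, self.lr = way, shot, lr

    def training_step(self, batch, batch_idx):
        # Assumes the sampler arranges each episode as all support images
        # first (grouped by class), followed by all query images.
        images, _ = batch
        feats = self.backbone(images)
        support, query = feats[: self.way * self.shot], feats[self.way * self.shot:]
        logits = self.classifier(support, query, self.way, self.shot)
        query_per_class = logits.size(0) // self.way
        labels = torch.arange(self.way, device=logits.device).repeat_interleave(query_per_class)
        loss = F.cross_entropy(logits, labels)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr, momentum=0.9)
```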

architectures

Needs modification. We divide a general FSL architecture into a feature extractor and a classification head, specified in architectures/feature_extractor and architectures/classifier, respectively. These are ordinary PyTorch nn modules, which are embedded into the LightningModules mentioned above. The recommended feature extractor is ResNet12, which is popular and shows strong performance. The classification head, however, varies across algorithms and needs a specific design.
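
For example, a prototype-based head in the spirit of ProtoNet can be as simple as the sketch below (illustrative only, not the actual architectures/classifier/proto_head.py):

```python
# Sketch only: not the actual architectures/classifier/proto_head.py.
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    """Scores query features by negative squared distance to class prototypes."""

    def forward(self, support, query, way, shot):
        # support: [way * shot, dim] grouped by class; query: [num_query, dim]
        prototypes = support.view(way, shot, -1).mean(dim=1)  # [way, dim]
        distances = torch.cdist(query, prototypes) ** 2       # [num_query, way]
        return -distances                                     # logits for cross-entropy
```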

Datasets and DataModule

Usually does not need modification. Pytorch-lightning unifies data processing across training, validation and testing into a single LightningDataModule. We design such a datamodule in dataset_and_process/datamodules/few_shot_datamodule.py for FSL, enabling episodic/non-episodic sampling and DDP for fast multi-GPU training. The datasets themselves are defined in dataset_and_process/datasets as ordinary pytorch Dataset classes. There is no need to modify the dataset module unless new datasets are involved.
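
Schematically, such a datamodule only has to build the dataset and plug either an episodic sampler or an ordinary batch loader into the DataLoader. The sketch below is illustrative and far simpler than the actual few_shot_datamodule.py (no DDP handling, no val/test loaders; the dataset and sampler interfaces are assumptions):

```python
# Sketch only: not the actual dataset_and_process/datamodules/few_shot_datamodule.py.
import pytorch_lightning as pl
from torch.utils.data import DataLoader

class FewShotDataModule(pl.LightningDataModule):
    def __init__(self, dataset_cls, data_root, episode_sampler_cls,
                 is_meta=True, **sampler_kwargs):
        super().__init__()
        self.dataset_cls = dataset_cls              # an ordinary pytorch Dataset class
        self.data_root = data_root
        self.episode_sampler_cls = episode_sampler_cls
        self.is_meta = is_meta                      # episodic vs. non-episodic training
        self.sampler_kwargs = sampler_kwargs

    def setup(self, stage=None):
        self.train_set = self.dataset_cls(self.data_root, split="train")

    def train_dataloader(self):
        if self.is_meta:
            # Episodic sampling: each batch is one few-shot task.
            sampler = self.episode_sampler_cls(self.train_set.labels, **self.sampler_kwargs)
            return DataLoader(self.train_set, batch_sampler=sampler, num_workers=8)
        # Non-episodic: ordinary mini-batch training.
        return DataLoader(self.train_set, batch_size=128, shuffle=True, num_workers=8)
```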

Callbacks and Plugins

Usually does not need modification. See the pytorch-lightning documentation for a detailed introduction to callbacks and plugins. They are additional functionalities added to the system in a modular fashion.

Configuration

Needs modification. See LightningCLI for how a yaml configuration file works. Each algorithm needs its own configuration file, though most of the configuration is shared across algorithms, so it is convenient to copy an existing configuration file and modify it for a new algorithm.

Owner
Xu Luo
M.S. student of SMILE Lab, UESTC