PyContinual (An Easy and Extendible Framework for Continual Learning)

Overview

Easy to Use

You can simply change the baseline, backbone and task, and you are ready to go. Here is an example:

	python run.py \
	  --bert_model 'bert-base-uncased' \
	  --backbone bert_adapter \
	  --baseline ctr \
	  --task asc \
	  --eval_batch_size 128 \
	  --train_batch_size 32 \
	  --scenario til_classification \
	  --idrandom 0 \
	  --use_predefine_args

where
  • --backbone selects the backbone network (bert_adapter here; other backbones such as bert or w2v are available),
  • --baseline selects the continual learning method (ctr here; other available baselines include classic, ewc, ...),
  • --task selects the task/dataset (asc here; other available tasks include dsc, newsgroup, ...),
  • --scenario selects the learning scenario (til_classification here; dil_classification is also available),
  • --idrandom selects which random task sequence to use, and
  • --use_predefine_args loads the pre-defined arguments for this configuration.
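
For example, to try the ewc baseline with the plain bert backbone on the dsc dataset in the domain-incremental scenario, you would change the corresponding flags to --baseline ewc --backbone bert --task dsc --scenario dil_classification (assuming that combination is supported; see baselines.md for the full list of baselines and variants).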

Easy to Extend

You only need to write your own ./dataloader, ./networks and ./approaches. You are ready to go!
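
To give a feel for the shape of such an extension, below is a minimal, hypothetical sketch of a new dataset class (for ./dataloader) and a new approach class with a plain training loop (for ./approaches). The names MyTaskDataset, MyApproach and their methods are illustrative assumptions, not the framework's actual interfaces; see run_own.md for the real ones.

    # Hypothetical sketch only; class and method names are illustrative,
    # not PyContinual's actual interfaces (see run_own.md for those).
    import torch
    from torch.utils.data import Dataset

    class MyTaskDataset(Dataset):
        """A dataset for ./dataloader: one (input, label) pair per example."""
        def __init__(self, examples):
            self.examples = examples  # list of (feature_tensor, label) tuples

        def __len__(self):
            return len(self.examples)

        def __getitem__(self, idx):
            return self.examples[idx]

    class MyApproach:
        """An approach for ./approaches: train() is called once per task."""
        def __init__(self, model, lr=1e-3):
            self.model = model
            self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)
            self.criterion = torch.nn.CrossEntropyLoss()

        def train(self, task_id, loader, epochs=1):
            self.model.train()
            for _ in range(epochs):
                for x, y in loader:
                    self.optimizer.zero_grad()
                    loss = self.criterion(self.model(x), y)
                    # A real continual learning method would add its machinery
                    # here, e.g. a regularization term against previous tasks.
                    loss.backward()
                    self.optimizer.step()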

Introduction

Continual learning has drawn increasing attention in recent years. This repo contains a PyTorch implementation of a set of (improved) SoTA continual learning methods, all built on the same training and evaluation pipeline.

This repository contains the code for the papers listed in the Reference section below.

Features

  • Datasets: It currently supports Language Datasets (Document/Sentence/Aspect Sentiment Classification, Natural Language Inference, Topic Classification) and Image Datasets (CelebA, CIFAR10, CIFAR100, FashionMNIST, F-EMNIST, MNIST, VLCS)
  • Scenarios: It currently supports Task Incremental Learning and Domain Incremental Learning
  • Training Modes: It currently supports single-GPU training. It can also be adapted to multi-node distributed training and mixed-precision training (see the sketch below).
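
As an illustration of the mixed-precision option, here is a minimal sketch of a standard torch.cuda.amp training step. It assumes a generic model, optimizer, criterion and input batch; it is a general PyTorch pattern, not this repo's actual training loop.

    # General mixed-precision pattern with torch.cuda.amp; not the repo's own loop.
    import torch

    scaler = torch.cuda.amp.GradScaler()

    def train_step(model, optimizer, criterion, x, y):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():  # forward pass in mixed precision
            loss = criterion(model(x), y)
        scaler.scale(loss).backward()    # scale the loss to avoid fp16 underflow
        scaler.step(optimizer)           # unscales gradients, then steps the optimizer
        scaler.update()                  # adjust the loss scale for the next step
        return loss.item()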

Architecture

./res: all results are saved in this folder
./dat: processed data
./data: raw data
./dataloader: data loaders for the different datasets
./approaches: code for training
./networks: code for the network architectures
./data_seq: reference task sequences (e.g. asc_random)
./tools: code for preparing the data

Setup

  • If you want to run the existing systems, please see run_exist.md
  • If you want to expand the framework with your own model, please see run_own.md
  • If you want to see the full list of baselines and variants, please see baselines.md

Reference

If you use this code, parts of it, or developments based on it, please consider citing the references below.

@inproceedings{ke2021achieve,
  title={Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning},
  author={Ke, Zixuan and Liu, Bing and Ma, Nianzu and Xu, Hu and Shu, Lei},
  booktitle={NeurIPS},
  year={2021}
}

@inproceedings{ke2021contrast,
  title={CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks},
  author={Ke, Zixuan and Liu, Bing and Xu, Hu and Shu, Lei},
  booktitle={EMNLP},
  year={2021}
}

@inproceedings{ke2021adapting,
  title={Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks},
  author={Ke, Zixuan and Xu, Hu and Liu, Bing},
  booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  pages={4746--4755},
  year={2021}
}

@inproceedings{ke2020continualmixed,
  title={Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks},
  author={Ke, Zixuan and Liu, Bing and Huang, Xingchang},
  booktitle={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

@inproceedings{ke2020continual,
  title={Continual Learning with Knowledge Transfer for Sentiment Classification},
  author={Ke, Zixuan and Liu, Bing and Wang, Hao and Shu, Lei},
  booktitle={ECML-PKDD},
  year={2020}
}

Contact

Please drop an email to Zixuan Ke, Xingchang Huang, or Nianzu Ma if you have any questions regarding the code. We thank Bing Liu, Hu Xu and Lei Shu for their valuable comments and opinions.
