COResets and Data Subset selection


Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements and costs by an order of magnitude, using coresets and data selection.


What is CORDS?

CORDS is a COReset and Data Selection library for making machine learning time-, energy-, cost-, and compute-efficient. CORDS is built on top of PyTorch. Deep learning systems today are extremely compute-intensive, with large turnaround times, energy inefficiencies, high costs, and heavy resource requirements [1,2]. CORDS is an effort to make deep learning more energy-, cost-, resource-, and time-efficient without sacrificing accuracy. CORDS tries to achieve the following goals:

Data Efficiency

Reducing End to End Training Time

Reducing Energy Requirement

Faster Hyper-parameter tuning

Reducing Resource (GPU) Requirement and Costs

The primary purpose of CORDS is to select the right representative data subsets from massive datasets, and it does so iteratively. CORDS uses recent advances in data subset selection, particularly ideas from coresets and submodularity, to select such subsets, and it implements a number of state-of-the-art data subset selection and coreset algorithms.
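To make the workflow concrete, here is a minimal sketch of the weighted-subset training pattern that the CORDS dataloaders produce (see the GLISTERDataLoader discussion in the comments below); the model, optimizer, and stand-in batches are illustrative assumptions, not the actual CORDS API.

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Linear(32, 10).to(device)                      # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Stand-in for a CORDS selection loader (e.g. a GLISTERDataLoader wrapping
    # an ordinary train loader): each batch carries per-sample weights.
    batches = [(torch.randn(8, 32), torch.randint(0, 10, (8,)), torch.ones(8))
               for _ in range(3)]

    for inputs, targets, weights in batches:
        inputs, targets = inputs.to(device), targets.to(device)
        weights = weights.to(device)
        optimizer.zero_grad()
        losses = nn.functional.cross_entropy(model(inputs), targets,
                                             reduction="none")
        loss = torch.dot(losses, weights / weights.sum())     # weighted loss
        loss.backward()
        optimizer.step()

Training only on the selected, weighted subset is what yields the speedups above; since selection is iterative, the subset is re-chosen as the model evolves.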

We are continuously incorporating newer and better algorithms into CORDS. Some of the features of CORDS include:

  • Reproducibility of SOTA in Data Selection and Coresets: Enable easy reproducibility of the SOTA results described above. We are also trying to add more algorithms, so if you have an algorithm you would like us to include, please let us know.
  • Benchmarking: We have benchmarked CORDS (and the algorithms currently included) on several datasets, including CIFAR-10, CIFAR-100, MNIST, SVHN, and ImageNet.
  • Ease of Use: One of the main goals of CORDS is to be easy to use and easy to extend. Feel free to contribute to CORDS!
  • Modular design: The data selection algorithms are separate from the training loop, enabling a modular design and a variety of usage scenarios.
  • Broad range of use cases: CORDS is currently implemented for simple image classification tasks and hyperparameter tuning, but we are working on integrating a number of additional use cases, such as object detection, speech recognition, semi-supervised learning, and AutoML.

Installation

  1. To install the latest version of the CORDS package using PyPI:

    pip install -i https://test.pypi.org/simple/ cords
  2. To install from source:

    git clone https://github.com/decile-team/cords.git
    cd cords
    pip install -r requirements/requirements.txt
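Either way, a quick sanity check that the package is importable (assuming the install name matches the import name cords):

    python -c "import cords"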

Next Steps

Tutorials

Documentation

The documentation for the latest version of CORDS can always be found here.

Comments
  • Logistic Regression support for Gradmatch


    The Logistic Regression model throws errors when we do backpropagation. The fix for this is perhaps to set freeze=False in the forward function of utils/models/logreg_net.py; a hypothetical sketch follows.
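    As an illustration only, a hypothetical sketch of the kind of fix described above: calling forward with freeze=True would wrap the pass in torch.no_grad(), detaching the graph and breaking backpropagation, so freeze=False keeps gradients flowing. The class name and structure here are assumptions, not the actual contents of logreg_net.py.

    import torch
    import torch.nn as nn

    class LogRegNet(nn.Module):  # hypothetical stand-in for logreg_net.py
        def __init__(self, input_dim, num_classes):
            super().__init__()
            self.linear = nn.Linear(input_dim, num_classes)

        def forward(self, x, freeze=False):
            if freeze:
                # no_grad() detaches the computation graph, so a later
                # backward() through the output fails
                with torch.no_grad():
                    return self.linear(x)
            return self.linear(x)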

    opened by nlokeshiisc 4
  • [Bug] Got weight with same value when running examples.


    Hi, I tested the example with supervised learning and the GLISTER strategy: https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb. But when I print the weights from the train loader, they are all 1.0. I believe that by using the GLISTER strategy, we should get different weights.

    tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
            1., 1.], device='cuda:0')
    

    Is that a bug or something special? Thanks.

    opened by HaoKang-Timmy 3
  • Segmentation fault (core dumped)


    Hi,

    I was trying to deploy CORDS selection in my training, but this error popped up: Segmentation fault (core dumped).

    I imitated code from https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb.

    So basically I put my training and testing loaders into GLISTERDataLoader, and swapped this part into my code:

    for _, (inputs, targets, weights) in enumerate(dataloader):
        inputs = inputs.to(device)
        targets = targets.to(device, non_blocking=True)
        weights = weights.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        losses = criterion_nored(outputs, targets)
        loss = torch.dot(losses, weights / weights.sum())
        loss.backward()

    Before modifying it, my code was running fine, so I believe there is an error inside CORDS. My dataset is CIFAR10.

    Thanks

    opened by chengwuxinlin 2
  • Replace apricot with submodlib


    Fixes #16
    submodlib is now used for the CRAIG strategy/dataloader as well as the submodular strategy/dataloader. Please let me know if you have any feedback!

    Notes:

    • I am not sure if sum redundancy (a submodular function implemented in apricot) has an analogue in submodlib, so it is disabled as an option for now.
    • It doesn't seem like submodularselectionstrategy.py is used in the corresponding dataloader. This may be a good opportunity to refactor, so that behavior is consistent between the two.
    • Any existing code that specifies the "optimizer" (greedy algorithm) used by apricot will break, since the names used by submodlib are different from those used by apricot (e.g. 'LazyGreedy' instead of 'Lazy'). This includes configs that use this option; see the sketch after this list.
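    A hypothetical migration helper for such configs; only the 'Lazy' to 'LazyGreedy' pair comes from the PR text above, so any further entries would need checking against both libraries.

    # Hypothetical helper: map old apricot optimizer names to submodlib
    # names. Only 'Lazy' -> 'LazyGreedy' is confirmed by the PR above.
    APRICOT_TO_SUBMODLIB = {
        "Lazy": "LazyGreedy",
    }

    def migrate_optimizer_name(name: str) -> str:
        # Fall back to the original name if no mapping is known.
        return APRICOT_TO_SUBMODLIB.get(name, name)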
    opened by ghost 1
  • Typo in cords_cifar10_glister_train.ipynb


    There is a typo in the cords_cifar10_glister_train.ipynb notebook: https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/cords_cifar10_glister_train.ipynb

    glister_trn.configdata.train_args.print_every = 1
    glister_trn.configdata.train_args.device = 'cuda'
    glister_trn.configdata.dss_args.fraction = fraction
    

    instead of

    glister_trn.cfg.train_args.print_every = 1
    glister_trn.cfg.train_args.device = 'cuda'
    glister_trn.cfg.dss_args.fraction = fraction
    
    opened by eendee 1
  • Evaluation on ImageNet


    Hello, thanks for a very interesting and useful project.

    Would you mind providing an evaluation method for ImageNet? I tried adding a loader for ImageNet to custom_dataset.py, but failed due to a GPU memory issue during subset selection.

    Many thanks!

    opened by Hayoung93 1
  • For GRAD_MATCH method, the weights associated with each data point in X(subset of training set)


    1. For the GRAD-MATCH method, there are weights associated with each data point in X (the subset of the training set). Do the weights have physical significance? For example, if the value of a weight is higher, does the corresponding selected data point contribute more to the residual?
    2. During the iteration, the selected index is already among the selected indices, so the iteration breaks. Why does this happen? [email protected]
    opened by lishaguo 1
  • Questions about accuracy logging


    Hello! Thanks for your great work.

    I'm currently working on this code and I want to ask a question about accuracy logging.

    https://github.com/decile-team/cords/blob/ff629ff15fac911cd3b82394ffd278c42dacd874/train.py#L530-L541

    In line 541 of train.py, val_acc contains cumulative accuracies over input batches. For example, if the loader contains 4500 examples and the batch size is 1000, then tst_acc has 5 accuracies per evaluation (the first element of tst_acc will be the accuracy over the first 1000 examples).

    https://github.com/decile-team/cords/blob/ff629ff15fac911cd3b82394ffd278c42dacd874/train.py#L631-L633

    In line 633, it prints the best value in tst_acc. In this case, the resulting best accuracies across different algorithms and seeds might be values evaluated on different subsets of test samples.

    Is this what you intended? In my experience, evaluating algorithms on an identical test dataset is the convention. In addition, are the test accuracies reported in the GRAD-MATCH paper the best values as above, or the final test accuracies?

    Best, Jang-Hyun
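    For reference, a minimal sketch (illustrative names, not CORDS code) of the conventional evaluation the issue describes: accumulate correct predictions over the whole test loader and report a single accuracy.

    import torch

    @torch.no_grad()
    def full_test_accuracy(model, loader, device):
        # One accuracy over the entire test set, instead of a list of
        # cumulative per-batch accuracies.
        model.eval()
        correct, total = 0, 0
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            preds = model(inputs).argmax(dim=1)
            correct += (preds == targets).sum().item()
            total += targets.size(0)
        return correct / total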

    opened by Janghyun1230 1
  • CORDS gradient calculations for different loss functions


    a) Implement gradient calculation for squared loss, negative logistic loss, and hinge loss, plus general loss-function gradient computation.

    b) Integrate the new gradient calculations with the different selection strategies (a hedged sketch of the squared-loss case follows).
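    As context, a hedged sketch of the squared-loss case (generic textbook math, not the CORDS implementation): for L = 0.5 * sum((x @ w - y)^2), the gradient with respect to w is x^T (x @ w - y), which autograd should reproduce.

    import torch

    # Squared loss for a linear model: L = 0.5 * sum((x @ w - y)^2).
    x = torch.randn(5, 3)                  # 5 samples, 3 features
    y = torch.randn(5)
    w = torch.randn(3, requires_grad=True)
    loss = 0.5 * ((x @ w - y) ** 2).sum()
    loss.backward()
    # Analytic gradient: dL/dw = x^T (x @ w - y); compare with autograd.
    residual = (x @ w - y).detach()
    assert torch.allclose(w.grad, x.t() @ residual, atol=1e-5)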

    enhancement 
    opened by krishnatejakk 1
  • Refactor the folders in the repo


    • Add a folder called benchmarks which holds all the results/benchmarks for the various cases. We should remove the results from the main README and point to that folder instead. Also add the notebooks needed to reproduce the benchmark results.
    • Rename notebooks to tutorials. Add different tutorials based on use cases (NLP, vision, SSL, hyperparameter tuning, NAS, etc.).
    opened by rishabhk108 0
  • Inquiry about performance of gradmatch


    Hello, I ran some experiments with GradMatch and random-online selection, and found that the two actually reach similar performance (around 93) after 300 epochs. Is there something important to note for reproducing the results? Thanks for your help!

    opened by pipilurj 0
  • Implement faster version of OMP


    Implement the following versions of OMP (see the sketch after this list for context):

    1. FNNOMP (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012095)
    2. SNNOMP (https://hal.univ-lorraine.fr/hal-01585253/document)
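    For context, a minimal sketch of plain (unconstrained) OMP, the baseline these non-negative variants speed up; this is generic textbook code, not the CORDS implementation.

    import numpy as np

    def omp(A, b, k):
        # Greedily select up to k columns of A whose span approximates b.
        residual, support, x = b.astype(float), [], None
        for _ in range(k):
            # Pick the column most correlated with the current residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j in support:
                break
            support.append(j)
            # Least-squares refit on the selected columns, update residual.
            x, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ x
        return support, x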
    high priority in progress 
    opened by krishnatejakk 0
  • Gradmatch Data subset selection method making training slow


    I tried to run some experiments as follows:

    • Ran full CIFAR10 without any subset selection method to train ResNet50, which took around 32m 31s.
    • Ran GradMatch CIFAR10 subset selection with a 0.1 fraction, which took longer than full CIFAR10, i.e. 22h 48m 40s.
    • Ran GradMatch CIFAR10 subset selection with a 0.3 fraction, which took longer than the 0.1 GradMatch run.

    I am using scaled-resolution CIFAR10 images, i.e. 224x224, with a correspondingly defined ResNet50 architecture. Can you let me know how to speed up experiments 2 and 3? In general, a subset selection method should speed up the whole training process, right?

    opened by animesh-007 9
  • Implement CRUST Algorithm


    1. Implement the CRUST strategy in the supervised learning setting.
    2. Create the CRUST data loader class, building it on top of the adaptive_dataloader class.
    enhancement 
    opened by krishnatejakk 0
Releases (v0.0.1)
  • v0.0.1(Mar 24, 2022)

    What's Changed

    • Selcon sahasra by @sahasrarjn in https://github.com/decile-team/cords/pull/73
    • Selcon sahasra by @sahasrarjn in https://github.com/decile-team/cords/pull/74

    New Contributors

    • @sahasrarjn made their first contribution in https://github.com/decile-team/cords/pull/73

    Full Changelog: https://github.com/decile-team/cords/compare/v0.0.0...v0.0.1

  • v0.0.0(Mar 4, 2022)

    Pre-release of CORDS

    What's Changed

    • Dev by @krishnatejakk in https://github.com/decile-team/cords/pull/9
    • CONFIG Files Pull by @krishnatejakk in https://github.com/decile-team/cords/pull/10
    • New Gradient Computation Code by @krishnatejakk in https://github.com/decile-team/cords/pull/11
    • Feature: add support for hyperparameter tuning with subset selection by @savan77 in https://github.com/decile-team/cords/pull/12
    • Added checkpoints to save the model and updated documentation by @dheerajnbhat in https://github.com/decile-team/cords/pull/15
    • test CI and dual tests by @noilreed in https://github.com/decile-team/cords/pull/29
    • Dual CI flow merge to main by @noilreed in https://github.com/decile-team/cords/pull/30
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/36
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/40
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/66

    New Contributors

    • @krishnatejakk made their first contribution in https://github.com/decile-team/cords/pull/9
    • @savan77 made their first contribution in https://github.com/decile-team/cords/pull/12
    • @dheerajnbhat made their first contribution in https://github.com/decile-team/cords/pull/15
    • @noilreed made their first contribution in https://github.com/decile-team/cords/pull/29

    Full Changelog: https://github.com/decile-team/cords/commits/v0.0.0

Owner

decile-team (DECILE: Data EffiCient machIne LEarning)