A SOTA, easy-to-use, PyTorch-based deep learning training library

Overview


Easily train or fine-tune SOTA computer vision models from one training repository.


SuperGradients

Introduction

Welcome to SuperGradients, a free, open-source training library for PyTorch-based deep learning models. SuperGradients lets you train models for any computer vision task or import pre-trained SOTA models for tasks such as object detection, image classification, and semantic segmentation of images and videos.
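
As a taste of the workflow, here is a minimal sketch based on the models.get(), dataloaders.get(), training_hyperparams.get() and Trainer.train() entry points referenced in the release notes below; exact import paths, recipe names, and dataloader names vary between versions, so treat it as illustrative rather than canonical.

    # Minimal training sketch (illustrative; recipe and dataloader names are assumptions).
    from super_gradients.training import Trainer, models, dataloaders, training_hyperparams

    trainer = Trainer(experiment_name="resnet18_cifar_example")

    # Model, data, and hyper-parameters are resolved by name from built-in recipes.
    model = models.get("resnet18", num_classes=10)
    train_loader = dataloaders.get("cifar10_train")   # assumed dataloader name
    valid_loader = dataloaders.get("cifar10_val")     # assumed dataloader name
    training_params = training_hyperparams.get("cifar10_resnet_train_params")  # assumed recipe name

    trainer.train(
        model=model,
        training_params=training_params,
        train_loader=train_loader,
        valid_loader=valid_loader,
    )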

Whether you are a beginner or an expert, you likely already have your own training script, model, loss function implementation, and so on, so you know how difficult it is to develop a production-ready deep learning model: the overhead of integrating with existing training tools that have very different and rigid formats and conventions, and the effort it takes to find a suitable architecture for your needs when every repository focuses on just one task.

With SuperGradients you can:


Table of Contents:

Getting Started

Quick Start Notebook

Get started with our quick start notebook on Google Colab for a quick and easy start using free GPU hardware.

SuperGradients Quick Start in Google Colab · Download notebook · View source on GitHub


SuperGradients Walkthrough Notebook

Learn more about SuperGradients training components with our walkthrough notebook on Google Colab, an easy-to-use tutorial running on free GPU hardware.

SuperGradients Walkthrough in Google Colab · Download notebook · View source on GitHub


Installation Methods

Prerequisites

General requirements:

To train on NVIDIA GPUs:

Quick Installation of stable version

Not yet available on PyPI

  pip install super-gradients

That's it!

Installing from GitHub

pip install git+https://github.com/Deci-AI/[email protected]
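
To sanity-check the installation, something like the following can be run (assuming the package exposes a __version__ attribute, as recent releases do):

    import super_gradients
    print(super_gradients.__version__)  # assumes the package exposes __version__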

Computer Vision Models' Pretrained Checkpoints

Pretrained Classification PyTorch Checkpoints

| Model | Dataset | Resolution | Top-1 (%) | Top-5 (%) | Latency (b1, T4) | Throughput (b1, T4) |
|-------|---------|------------|-----------|-----------|------------------|---------------------|
| EfficientNet B0 | ImageNet | 224x224 | 77.62 | 93.49 | 1.16ms | 862fps |
| RegNetY200 | ImageNet | 224x224 | 70.88 | 89.35 | 1.07ms | 928.3fps |
| RegNetY400 | ImageNet | 224x224 | 74.74 | 91.46 | 1.22ms | 816.5fps |
| RegNetY600 | ImageNet | 224x224 | 76.18 | 92.34 | 1.19ms | 838.5fps |
| RegNetY800 | ImageNet | 224x224 | 77.07 | 93.26 | 1.18ms | 841.4fps |
| ResNet18 | ImageNet | 224x224 | 70.6 | 89.64 | 0.599ms | 1669fps |
| ResNet34 | ImageNet | 224x224 | 74.13 | 91.7 | 0.89ms | 1123fps |
| ResNet50 | ImageNet | 224x224 | 76.3 | 93.0 | 0.94ms | 1063fps |
| MobileNetV3_large (150 epochs) | ImageNet | 224x224 | 73.79 | 91.54 | 0.87ms | 1149fps |
| MobileNetV3_large (300 epochs) | ImageNet | 224x224 | 74.52 | 91.92 | 0.87ms | 1149fps |
| MobileNetV3_small | ImageNet | 224x224 | 67.45 | 87.47 | 0.75ms | 1333fps |
| MobileNetV2_w1 | ImageNet | 224x224 | 73.08 | 91.1 | 0.58ms | 1724fps |

NOTE: Performance measured on T4 GPU with TensorRT, using FP16 precision and batch size 1
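
A hedged sketch of loading one of these classification checkpoints for inference; the model-name string and the pretrained_weights argument follow the model-zoo convention but should be checked against the docs of your installed version.

    import torch
    from super_gradients.training import models

    # Load a pretrained ImageNet classifier from the model zoo (name assumed).
    model = models.get("resnet50", pretrained_weights="imagenet").eval()

    # Dummy input at the table's 224x224 resolution.
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)  # expected: torch.Size([1, 1000]) for ImageNet classes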

Pretrained Object Detection PyTorch Checkpoints

| Model | Dataset | Resolution | mAP (val, 0.5:0.95) | Latency (b1, T4) | Throughput (b64, T4) |
|-------|---------|------------|---------------------|------------------|----------------------|
| YOLOv5 nano | COCO | 640x640 | 27.7 | 6.55ms | 177.62fps |
| YOLOv5 small | COCO | 640x640 | 37.3 | 7.13ms | 159.44fps |
| YOLOv5 medium | COCO | 640x640 | 45.2 | 8.95ms | 121.78fps |
| YOLOv5 large | COCO | 640x640 | 48.0 | 11.49ms | 95.99fps |

NOTE: Performance measured on T4 GPU with TensorRT, using FP16 precision and batch size 1 (latency) and batch size 64 (throughput)
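
The numbers above come from a TensorRT deployment; as a rough approximation of the same protocol (FP16, batch size 1, warmup followed by averaged timed runs) in plain PyTorch on a CUDA GPU, a sketch like the following can be used. It mirrors only the measurement procedure and will not reproduce the TensorRT figures.

    import time
    import torch

    @torch.no_grad()
    def measure_latency_ms(model, input_shape=(1, 3, 640, 640), warmup=20, iters=100):
        """Average forward-pass time in milliseconds at batch size 1, FP16, on CUDA."""
        model = model.eval().half().cuda()
        x = torch.randn(*input_shape, dtype=torch.half, device="cuda")
        for _ in range(warmup):              # warm up kernels / autotuning
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()             # wait for all queued GPU work
        return (time.perf_counter() - start) / iters * 1000.0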

Pretrained Semantic Segmentation PyTorch Checkpoints

| Model | Dataset | Resolution | mIoU | Latency (b1, T4) | Throughput (b64, T4) |
|-------|---------|------------|------|------------------|----------------------|
| DDRNet23 | Cityscapes | 1024x2048 | 78.65 | 25.48ms | 37.4fps |
| DDRNet23 slim | Cityscapes | 1024x2048 | 76.6 | 22.24ms | 45.7fps |
| ShelfNet_LW_34 | COCO Segmentation (21 classes from PASCAL, including background) | 512x512 | 65.1 | - | - |

NOTE: Performance measured on T4 GPU with TensorRT, using FP16 precision and batch size 1 (latency) and batch size 64 (throughput)

Contributing

To learn about making a contribution to SuperGradients, please see our Contribution page.

Our awesome contributors:


Made with contrib.rocks.

Citation

If you use the SuperGradients library or its benchmarks in your research, please cite the SuperGradients deep learning training library.

Community

If you want to be a part of the growing SuperGradients community, hear about exciting news and updates, get help, request advanced features, or file a bug or issue report, we would love to welcome you aboard!

License

This project is released under the Apache 2.0 license.

Comments
  • Add a `conda` install option for `super-gradients`


    It will be helpful to have super-gradients added to conda-forge channel. I have started the work already in the following PR.

    • https://github.com/conda-forge/staged-recipes/pull/20167

    But there seems to be a problem with one of its dependencies:

    • deci-lab-client PyPI

      Neither does it have any source file (*.tar.gz) on PyPI, nor any release on a public GitHub repository.

    Please provide (preferably) the source file for deci-lab-client on PyPI.

    :fire: CONSTRAINT: To add any package on conda-forge channel, you need ALL its dependencies on conda-forge as well

    opened by sugatoray 9
  • Add ignore option for dataloader workers on Windows


    I added a decorator for making sure a function does not run in dataloader worker processes. This is needed to protect functions called when importing super_gradients on Windows, because dataloader workers on Windows re-import all the packages (see https://pytorch.org/docs/stable/data.html#multi-process-data-loading).
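
    A minimal sketch of such a guard (not the PR's actual code): it skips execution when running inside a spawned child process, which is what DataLoader workers are on Windows.

      import functools
      import multiprocessing

      def run_in_main_process_only(func):
          """Skip `func` when called from a spawned child process, e.g. a Windows
          DataLoader worker that re-imports packages on startup (illustrative sketch)."""
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              if multiprocessing.parent_process() is not None:   # inside a child process
                  return None
              return func(*args, **kwargs)
          return wrapper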

    Especially: (screenshot)

    Still needs to be tested on Windows.

    size/S 
    opened by Louis-Dupont 5
  • interact with the platform through the lab client SDK


    This only addresses the immediate fix.

    This is still not ideal. With this small fix, it felt unintuitive to have to build the model name myself. What if `train` returned a training context - one that returns data about the process that just ran. Something like

      from enum import Enum
      from typing import Dict

      class ResultsContext(Enum):
          TRAINING = 'training'
          CONVERSION = 'conversion'
          LAB_UPLOAD = 'lab_upload'

      class SgModel:
          ...
          def train(self, training_params: dict = dict()) -> Dict[ResultsContext, dict]:
              ...

    That way, I'd just have to access the corresponding context in order to get the model name that the server returned. Otherwise, what happens when tomorrow the server changes its logic and _1_1 suffixes aren't a thing anymore?

    size/M 
    opened by daniel-deci 5
  • Crash tip


    When starting we get this message: (screenshot)

    ERRORS:

    Wrong format when creating an object with a factory: (screenshot)

    DDP not set up: (screenshot)

    Relies on https://github.com/Deci-AI/super-gradients/pull/496 (We might want to handle exceptions from within this decorator)

    include in feature update 
    opened by Louis-Dupont 4
  • There seems to be a problem with the MobileNetV3 structure


    Hello, the network structure of MobileNetV3 seems to be different from the original author's. The SE module in V3 is not the module used in SENet: the author changed nn.Linear to nn.Conv.

    bug 
    opened by Daming-TF 4
  • CoCoSegmentationDataSet._generate_samples_and_targets() doesn't call super (= no caching)


    Describe the bug

    CoCoSegmentationDataSet._generate_samples_and_targets() doesn't call the corresponding parent class method, and therefore image and label caching doesn't work for this class. The solution is to add super()._generate_samples_and_targets() as the last line in CoCoSegmentationDataSet._generate_samples_and_targets(). Excuse me for not making this a pull request this time.

    bug 
    opened by michaelitvin 3
  • Feature/SG-245 Support for register model in model's factory


    Added a @register decorator to easily map models to the global ARCHITECTURES variable. Specify your desired architecture in the config via the name of the function or class that creates it, for example `architecture: ResNet18Cifar`.
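
    For illustration, a registry decorator of this kind could look roughly like the sketch below (names and details assumed, not the actual implementation in this PR):

      ARCHITECTURES = {}

      def register(name=None):
          """Register a model class/function under `name` (defaults to its __name__)."""
          def decorator(cls_or_fn):
              key = name or cls_or_fn.__name__
              if key in ARCHITECTURES:
                  raise ValueError(f"'{key}' is already registered")
              ARCHITECTURES[key] = cls_or_fn
              return cls_or_fn
          return decorator

      @register()
      class ResNet18Cifar:
          ...

      # A config entry such as `architecture: ResNet18Cifar` can then be resolved
      # with ARCHITECTURES["ResNet18Cifar"].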

    • This is a draft commit. Please confirm that you like the solution. If accepted, the next step would be to manually add @register decorators to all models to get back to feature-parity.
    • There is no elegant way to support an in-between step of supporting both (register + dict) methods.
    size/L 
    opened by danbochman 3
  • Hydra outputs moved to checkpoints/experiment_name dir


    Changes the behaviour of Hydra to output logs to the user's checkpoints/experiment_name directory. The current behaviour spams output directories with unique timestamps for every run.

    This solves the bug where datasets such as cifar10, mnist, etc. were downloaded and extracted from scratch in each run. In addition, it allows for easy loading of trained models, since checkpoints are now stored together with the YAML config that created them. A simple script can rebuild the model and load the weights into it.

    • flake8 cleanup
    • environment_config.py: line 6 added package checkpoints dir as environment variable
    • env_helpers.py: line 60 added argument for Hydra output dir init
    • checkpoint_utils: line 51 used global PKG_CHECKPOINTS_DIR since it already exists

    I recommend removing support for "external_checkpoint_path"; users can create symbolic links instead for now

    size/L 
    opened by danbochman 3
  • YOLOv5 Tutorial Assistance


    @shaydeci @oferbaratz @ofrimasad Hi, I'm working on our new Deci.ai + YOLOv5 partnership tutorial and need some help. The tutorial is at https://github.com/ultralytics/yolov5/wiki/YOLOv5-Deci-AI-Tutorial and is based on a Word document provided by Rachel that I've converted to Markdown.

    I'd like you guys at Deci to review the content and help supply the 6 additional images (denoted by IMAGE_HERE). To streamline this I've pasted the markdown content directly here in this issue. If you simply edit this issue with the appropriate changes I can then transfer those over to the YOLOv5 repo.

    For the images, I've provided one example hyperlinked image myself. The guidelines are that they should be 1920-pixel-wide JPG screenshots at <500kB each.

    Thanks for the help and let me know if you have any questions!

    Markdown content below

    📚 This guide explains how to streamline the process of compiling and quantizing YOLOv5 🚀 to achieve better performance with the Deci platform. UPDATED 6 August 2022.

    Content

    • About the Deci Platform
    • First-time setup
    • Runtime optimization and benchmarking of your model

    About Deci Platform

    The Deci platform includes free tools for easily managing, optimizing, and deploying models in any production environment. Deci supports all popular DL frameworks, such as TensorFlow, PyTorch, Keras and ONNX. All you need is our web-based platform or our Python client to run it from your code.

    With Deci you can:

    • Improve Inference performance by up to 10X
      Automatically compile and quantize your models and evaluate different production settings to achieve better latency and throughput and to reduce model size and memory footprint on your hardware.

    • Find the best inference hardware for your application
      Benchmark your model's performance on various hardware (including edge) devices with a click of a button. Eliminate the need to manually set up and test various hardware and production settings.

    • Deploy with a Few Lines of Code
      Leverage Deci's Python-based inference engine, compatible with multiple frameworks and hardware types.

    For more information about the Deci platform please visit Deci's website.

    First-time setup

    Step 1:

    Go to https://console.deci.ai/sign-up and open your free account.

    Deci AI signup page

    Step 2:

    In order to start optimizing your pre-trained YOLOv5 model, you will need to convert it into ONNX format. Please follow the simple instructions at this link to convert your model to ONNX format.
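
    For orientation, the generic PyTorch-to-ONNX export step looks roughly like the sketch below (shown with a torchvision classifier so it is self-contained; for YOLOv5 itself, use the repo's own export script as described at the link above):

      import torch
      import torchvision

      # Generic ONNX export sketch (weights=None requires torchvision >= 0.13).
      model = torchvision.models.resnet18(weights=None).eval()
      dummy = torch.zeros(1, 3, 224, 224)

      torch.onnx.export(
          model, dummy, "model.onnx",
          opset_version=13,
          input_names=["input"], output_names=["output"],
          dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
      )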

    Step 3:

    Go to "Lab" tab and click the "New Model" button in the top right part of the screen to upload your model in the ONNX format.

    Deci AI Lab page

    Follow the steps of the model upload wizard to select your target hardware as well as desired batch size and quantization level for the model compilation.

    Deci AI Lab page

    After filling in the relevant information, click "Start". The Deci platform will automatically perform a runtime optimization of your YOLOv5 model for the hardware you selected as well as benchmark your model on various hardware types. This process takes approximately 10 minutes.

    Once done, a new row will appear on your screen underneath the baseline model you previously uploaded. Here you can see the optimized version of your pre-trained YOLOv5 model.

    Deci AI Lab page

    What's next?

    1. You can then download your optimized model by clicking the "Deploy" button.
    Deci AI Lab page

    You will then be prompted to download your model and receive the instructions on how to install and use Infery - Deci's runtime inference engine.

    The use of Infery is optional. You can get the raw Python files and use them with any other inference engine of your choice.

    Deci AI Lab page
    2. Explore the optimization and benchmark results on the "Insights" tab.
    Deci AI Lab page

    help wanted 
    opened by glenn-jocher 3
  • Add DiceCEEdge loss recipe


    DiceEdge recipe results

    New recipe for STDC training, using edge attention loss and Dice loss, replacing the unstable detail loss. Results:

    | model | mIoU | previous %mIoU diff |
    | ------ | ----- | --------------- |
    | STDC1Seg50 | 75.11 | 0.75 |
    | STDC2Seg50 | 76.44 | 1.17 |

    This PR includes:

    Target / one-hot to binary edge map util functions:

    Creates edge feature maps from a one-hot tensor, using dilation and erosion implemented with convolutions (a rough sketch follows the list below).

    • Edge width can be adjusted with the kernel_size argument; the typical edge width is kernel_size - 1.
    • The input can be a multi-class or single-class one-hot tensor with shape [B, C, H, W].
    • In the multi-class case, the result can be flattened to a one-channel edge map, or the channel dimension can be kept so that each channel holds the edge features of one class; see the example below.
    • Only odd kernel_size values are valid, to prevent dimension changes or pixel shifting.
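
    A rough, self-contained sketch of this kind of utility (illustrative only, not the code in this PR):

      import torch
      import torch.nn.functional as F

      def one_hot_to_binary_edge(one_hot, kernel_size=3, flatten_channels=True):
          """Edge map from a one-hot mask [B, C, H, W] via convolutional dilation/erosion."""
          assert kernel_size % 2 == 1, "only odd kernel sizes keep the spatial size"
          b, c, h, w = one_hot.shape
          pad = kernel_size // 2
          kernel = torch.ones(c, 1, kernel_size, kernel_size,
                              dtype=one_hot.dtype, device=one_hot.device)
          # Count of positive pixels in each kxk neighbourhood, per class channel.
          neighbourhood = F.conv2d(one_hot, kernel, padding=pad, groups=c)
          dilated = (neighbourhood > 0).to(one_hot.dtype)                  # dilation
          eroded = (neighbourhood >= kernel_size ** 2).to(one_hot.dtype)   # erosion
          edges = dilated - eroded          # boundary band of width ~ kernel_size - 1
          if flatten_channels:
              edges = edges.max(dim=1, keepdim=True).values
          return edges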

    Cityscapes example:

    • kernel_size: left k=3, right k=9.

    kernel_city

    • Flatten_channels = False: left: person class edges, right: car class edges

    cls_city

    Mask Loss (edge loss):

    Mask attention loss, for region-enforced loss functions, i.e. edge loss (a rough sketch follows the list below).

    • Supports many criterion functions, such as CrossEntropyLoss, BCEWithLogitsLoss, MSELoss, and SL1Loss.
    • The loss is composed of a regular loss and a mask loss, which are weighted with the loss_weights argument.
    • Unit tests added.
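
    A hedged sketch of such a mask-weighted loss (illustrative, not the PR's implementation); it assumes the wrapped criterion is created with reduction="none" so per-pixel values can be masked:

      import torch
      from torch import nn

      class MaskAttentionLoss(nn.Module):
          """Combine a regular per-pixel loss with the same loss restricted to a mask."""
          def __init__(self, criterion, loss_weights=(1.0, 1.0)):
              super().__init__()
              self.criterion = criterion          # e.g. nn.CrossEntropyLoss(reduction="none")
              self.loss_weights = loss_weights

          def forward(self, preds, target, mask):
              per_pixel = self.criterion(preds, target)            # [B, H, W]
              base = per_pixel.mean()
              masked = (per_pixel * mask).sum() / mask.sum().clamp(min=1)
              return self.loss_weights[0] * base + self.loss_weights[1] * masked

      # usage sketch: criterion = nn.CrossEntropyLoss(reduction="none")
      # loss = MaskAttentionLoss(criterion)(logits, labels, edge_mask)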

    DiceCEEdgeLoss main class and recipe:

    • DiceCEEdgeLoss: the main loss class, combining Dice, edge-CE, and auxiliary losses.
    • YAML-only recipe
    size/XL 
    opened by lkdci 3
  • How to accelerate regseg on tensorRT


    I tried to use TensorRT with the original RegSeg repository. However, ONNX had trouble with torch.split, and torch2trt could not be used with the specific TensorRT version. Please let me know how you used TensorRT with the RegSeg model when measuring the latency.

    opened by odyssey0529 3
  • Easy support of custom datasets


    Problem to solve:

    • Our transforms expect a Dict, with specific fields ("image", "target", ...)
    • We don't know the output format of a custom dataset.

    Solution:

    • Wrap the original dataset so that its output is used to build a sample dict.
    • Give the option to provide "adapter" functions that take the output of the user's dataset and adapt it to fit the input format of our transforms (a rough sketch follows).
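
    A rough sketch of the wrapping idea (names assumed, not an actual API):

      from torch.utils.data import Dataset

      class SampleDictWrapper(Dataset):
          """Wrap any dataset and use a user-supplied adapter to map its raw output
          into the {"image": ..., "target": ...} dict format the transforms expect."""
          def __init__(self, dataset, adapter):
              self.dataset = dataset
              self.adapter = adapter   # callable: raw item -> sample dict

          def __len__(self):
              return len(self.dataset)

          def __getitem__(self, idx):
              return self.adapter(self.dataset[idx])

      # usage sketch:
      # wrapped = SampleDictWrapper(my_dataset, lambda item: {"image": item[0], "target": item[1]})
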
    opened by Louis-Dupont 1
  • Integration with 🤗 Hub


    Hi folks.

    Thanks for providing the pre-trained models along with pre-training scripts.

    At Hugging Face, the Hub is our house to serve models, datasets, spaces, etc. It facilitates easy artifact loading and usage, providing common and streamlined API access to your models, datasets, etc.

    Hugging Face supports third-party integrations too and I was wondering if you'd be up to exploring the integration. The integration will primarily facilitate easy model sharing and model downloading which could be beneficial for the vision community in general.

    Here's the main doc that'd be helpful for you for the integration: https://huggingface.co/docs/hub/models-adding-libraries. Let me know if you'd need any help.

    opened by sayakpaul 1
  • Refactored scheduler callbacks (epoch-based/step-based warmup)


    Changes:

    warmup_mode: linear_step will emit a deprecation warning but will continue to work without any changes.

    Two new modes with explicit meaning:

    • warmup_mode: linear_batch_step
    • warmup_mode: linear_epoch_step

    Per-batch warmup supports two modes:

    Fixed number of warmup steps (batches):

    warmup_mode: linear_batch_step
    num_warmup_steps: 100
    

    Warmup across N epochs (LR still updated on each batch):

    warmup_mode: linear_batch_step
    num_warmup_epochs: 4
    
    opened by BloodAxe 1
Releases(3.0.5)
  • 3.0.5(Dec 28, 2022)

    What's Changed

    • Added warning for drop_last when train_loader is not divisible by batch_size by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/586
    • fix compute_detection_metrics_per_cls return value when no detection … by @ofrimasad in https://github.com/Deci-AI/super-gradients/pull/583
    • Quantization infra mods for different calibrators and learnable amax by @spsancti in https://github.com/Deci-AI/super-gradients/pull/537
    • PLFM-3331 Register experiments with model name by @roikoren755 in https://github.com/Deci-AI/super-gradients/pull/585
    • fix ignore index for DiceCEEdgeLoss by @lkdci in https://github.com/Deci-AI/super-gradients/pull/588

    Full Changelog: https://github.com/Deci-AI/super-gradients/compare/3.0.4...3.0.5

    Source code(tar.gz)
    Source code(zip)
  • 3.0.4(Dec 21, 2022)

    What's Changed

    • Reorganisation README by @Shani-Perl in https://github.com/Deci-AI/super-gradients/pull/526
    • Apply black formatting on pretrained model zoo by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/524
    • Remove imports from factory.init by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/516
    • Feature/sg 416 albumentations plugin for classification by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/495
    • Feature/sg 404 ssd reproduce by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/525
    • Added indexing support for meshgrid by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/528
    • Feature/al 441 ptq detection by @spsancti in https://github.com/Deci-AI/super-gradients/pull/527
    • Add DDP doc by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/523
    • SG-448- dataset_params and arch_params logging by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/532
    • Fix clearml comment by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/530
    • Bug/sg 000 torch version fix by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/535
    • fix pplite-seg prep conversion by @lkdci in https://github.com/Deci-AI/super-gradients/pull/538
    • SGLogger fix - add warning and multiprocess_safe system_monitoring by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/534
    • Add self to recipes by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/539
    • replace head for UNet module by @lkdci in https://github.com/Deci-AI/super-gradients/pull/543
    • Rename gpu_mode to multi_gpu and setup_gpu_mode to setup_device by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/517
    • Add version_base to hydra to remove warning by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/541
    • Hotfix/sg 000 add data structure and link to cityscape desc by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/550
    • add stringcase by @ofrimasad in https://github.com/Deci-AI/super-gradients/pull/553
    • Hotfix/sg 000 add data structure to cityscape desc v2 by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/551
    • Improve PascalVOC detection error msg by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/556
    • Improve evaluate_from_recipe usability by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/558
    • Feature/sg 458 support for any user dataloader using dataset registry by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/547
    • Features/sg 409 check all params used by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/546
    • Split and rename the modules from super_gradients.common.environment by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/548
    • Feature/sg 468 detection transform support for any number of channels by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/559
    • Feature/sg 518 update evaluate from recipe to add output by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/560
    • error raised for torch version and formatting by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/540
    • Feature/sg 356 ddp silent mode and multi process safe docs by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/563
    • Apply black on some files by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/565
    • Fix runtime warning of accessing non-contiguous tensor in mAP metric by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/566
    • fix F1 Precision Recall to represent all the defined rage and not onl… by @ofrimasad in https://github.com/Deci-AI/super-gradients/pull/564
    • Fix Multigpu.OFF factory by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/569
    • Import AutoLoggerConfig and ConsoleSink separatly by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/567
    • remove user guide from docs by @ofrimasad in https://github.com/Deci-AI/super-gradients/pull/570
    • new generated docs by @ofrimasad in https://github.com/Deci-AI/super-gradients/pull/572
    • update documentation by @ofrimasad in https://github.com/Deci-AI/super-gradients/pull/573
    • Make ConvBNReLU a subclass of ConvBNAct to keep backward compatibililty by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/554
    • Update README.md by @ofrimasad in https://github.com/Deci-AI/super-gradients/pull/575
    • Apply black on losses by @Louis-Dupont in https://github.com/Deci-AI/super-gradients/pull/576
    • Feature/sg 516 support head replacement for local pretrained weights unknown dataset by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/578
    • Fix bug of not respecting dilation argument in RepVGGBlock by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/555
    • Feature/sg 431 check classes by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/531
    • register unet cls module by @lkdci in https://github.com/Deci-AI/super-gradients/pull/580
    • feature: Accept an arbitrary WandB ID by @yurkovak in https://github.com/Deci-AI/super-gradients/pull/582
    • Bug/sg 512 shuffle bugfix in recipe datalaoders by @shaydeci in https://github.com/Deci-AI/super-gradients/pull/581
    • Added import of quantized modules so they get registered by @spsancti in https://github.com/Deci-AI/super-gradients/pull/542
    • Use unpack_batch_items on first batch (Respect additional batch items) by @BloodAxe in https://github.com/Deci-AI/super-gradients/pull/584

    Full Changelog: https://github.com/Deci-AI/super-gradients/compare/3.0.3...3.0.4

    Source code(tar.gz)
    Source code(zip)
  • 3.0.3(Nov 28, 2022)

    • ClearML integration.
    • Console logging + upload (Lab visibility).
    • Epoch summary (visual improvement).
    • Registry for Phase Callbacks and Transforms.
    • Infrastructure for crash tips.
    • Detection output adapter.
    • Train from recipe example- external dataset.
    • QAT infrastructure + examples.
    • System Logger (i.e device attributes etc).
    Source code(tar.gz)
    Source code(zip)
  • 3.0.2(Nov 13, 2022)

    • dataloaders.get() supports Dataset objects.
    • Behind the scenes sampler handling (warning + initialization).
    • Black formatter.
    • New DetectionBase.
    • Logs moved from ~/sg_log/{module_name}
    • CSP Resnet Backbone, new place for modules in repo.
    • Plenty new factories.
    • Message on effective batch size
    • External checkpoints resume- fix
    • Passing external classes list for CocoDetection
    • Fix for loading weights from the platform
    Source code(tar.gz)
    Source code(zip)
  • 3.0.1(Oct 11, 2022)

    What's new?

    • Eval recipe- perform validation by recipe name.
    • Supported strings class for convenient autocomplete in IDEs.
    • Registry: metrics, dataloaders, models, losses (so they can be passed as strings when using train_from_config).
    • Return model, results from train
    • Load backbone fix https://github.com/Deci-AI/super-gradients/pull/408/files
    • PPLiteSeg training recipes.
    • AWS env check removal
    • New resolvers support: "+" "if" for yaml recipes (i.e Hydra resolvers).
    • Support for custom STDC.
    • ShelfNet "classes_num" -> "num_classes" bug fix.
    • Improve cross-platform compatibility when parsing a readme description.
    • Added the ability to download and import external code for models from ADK.
    Source code(tar.gz)
    Source code(zip)
  • 3.0.0(Sep 20, 2022)

    • DatasetInterface class removal - refactored as torch.DataLoader objects configured by src/training/recipes, using super_gradients.dataloaders.get() (see new updated tutorials and snippets).

    • Trainer.build_model() removal - model initialisation refactored with super_gradients.models.get() (see updated tutorials and notebooks).

    • Coded DDP launch (no need for python -m torch.distributed.launch ...); see new snippets here.

    • Updated notebooks, tutorials and code snippets in readme.md.

    • Extract recipes training hyper_params config with super_gradients.training_hyperparams.get() (see updated tutorials and notebooks).

    • Simplified resume - now passed through train_params in SgTrainer.train() (see updated snippets in readme.md).

    • Removal of "loss_loggging_items_names" from train_params in Trainer.train().

    • Trainer.init old, unnecessary args removed.

    • Add support for getting models from Deci's platform using super_gradients.models.get(), more info regarding Deci's platform in readme.md.

    Source code(tar.gz)
    Source code(zip)
  • 2.6.0(Sep 12, 2022)

  • 2.5.0(Sep 12, 2022)

  • 2.2.0(Aug 14, 2022)

  • 2.1.0(Jul 19, 2022)

    • YoloX architectures.
    • SSDLite Mobilenet V2 COCO recipe
    • QAT support with Nvidia's pytorch-quantization (optional dependency).
    • COCO mAP calculation support in DDP (torch metric object, supports "crowd" labels).
    • Pre_prediction_callback- support for input and targets manipulation right before forward pass + multiscaling pre_prediction_callbacks that work out of the box in DDP (classification and Object detection).
    • Training stage switch callback to support multi-stage training.
    Source code(tar.gz)
    Source code(zip)
  • 2.0.1(Jun 12, 2022)

  • 2.0.0(Jun 9, 2022)

    Features:

    • KD trainer and resnet50 recipe (81.92 % accuracy)
    • Repeated augmentation sampler
    • Cooldown epochs
    • Beit architecture
    • Lamb Optimizer
    • Passing torch data loaders directly to SgModel.

    Refactoring:

    • Checkpoint and architecture params decoupled.
    Source code(tar.gz)
    Source code(zip)
  • 1.7.5(Apr 19, 2022)

  • 1.7.4(Apr 17, 2022)

  • 1.7.3(Apr 6, 2022)

  • 1.7.2(Apr 5, 2022)

  • 1.7.1(Mar 10, 2022)

    What's new ?

    • BCE with Dice loss.
    • Binary IoU metric object (i.e. IoU only for the target class).
    • Binary segmentation visualisation callback.
    • Supervisely dataset interface.
    • Different lr assignment for head and backbone for RegSeg.
    • Google Colab notebook for semantic segmentation quick start - Check it out in our GitHub repo README.md
    • Google Colab notebook for semantic segmentation transfer learning - Check it out in our GitHub repo README.md
    Source code(tar.gz)
    Source code(zip)
  • 1.6.0(Feb 8, 2022)

    • Added RegSeg model, recipe, and pre-trained checkpoints.
    • Updated EfficientNet recipe.
    • Updated Resnet50 recipe + pre-trained checkpoint (Top-1=79.47)
    Source code(tar.gz)
    Source code(zip)
  • 1.5.2(Jan 26, 2022)

  • 1.5.1(Jan 25, 2022)

  • 1.5.0(Jan 20, 2022)

    What’s new?

    • STDC family - new recipes added with even higher mIoU:muscle:

    • Google Colab notebook for transfer learning / fine-tuning (COCO pre-trained YOLOv5 nano into PASCAL VOC sub dataset) - Check it out in our GitHub repo README.md

    • Factories for yaml string interpolation

    Source code(tar.gz)
    Source code(zip)
  • 1.4.0(Jan 16, 2022)

  • 1.3.1(Jan 6, 2022)

    Checkpoints root directory fix - allowing users to explicitly define the checkpoints root directory instead of deriving it from the content root.

    Source code(tar.gz)
    Source code(zip)
  • 1.3.0(Jan 5, 2022)

  • 1.2.0(Jan 3, 2022)

    • Added an option to upload any file from the checkpoint dir.
    • Added a new callback phase and calling it on best metric.
    • Automatic GPU mode for training recipes.
    • Readme updates.
    • Sphinx documentation updates.
    Source code(tar.gz)
    Source code(zip)
  • 1.1.0(Dec 30, 2021)

  • 0.1.0(Dec 22, 2021)
