High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

Overview

TL;DR

Ignite is a high-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

PyTorch-Ignite teaser (the complete code for this example is available in the repository)

Features

  • Less code than pure PyTorch while ensuring maximum control and simplicity

  • Library approach with no inversion of the program's control flow - use Ignite where and when you need it

  • Extensible API for metrics, experiment managers, and other components

Why Ignite?

Ignite is a library that provides three high-level features:

  • Extremely simple engine and event system
  • Out-of-the-box metrics to easily evaluate models
  • Built-in handlers to compose training pipeline, save artifacts and log parameters and metrics

Simplified training and validation loop

No more writing for/while loops over epochs and iterations: users instantiate engines and run them.

Example
from ignite.engine import Engine, Events, create_supervised_evaluator
from ignite.metrics import Accuracy


# Setup training engine:
def train_step(engine, batch):
    # Users can do whatever they need on a single iteration,
    # e.g. forward/backward passes for any number of models, optimizers, etc.
    ...

trainer = Engine(train_step)

# Setup single model evaluation engine
evaluator = create_supervised_evaluator(model, metrics={"accuracy": Accuracy()})

def validation():
    state = evaluator.run(validation_data_loader)
    # print computed metrics
    print(trainer.state.epoch, state.metrics)

# Run model's validation at the end of each epoch
trainer.add_event_handler(Events.EPOCH_COMPLETED, validation)

# Start the training
trainer.run(training_data_loader, max_epochs=100)
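
For reference, here is a minimal sketch of what a typical supervised train_step might contain; model, optimizer, and criterion are assumed to be defined elsewhere:

def train_step(engine, batch):
    # a single supervised iteration: forward pass, loss, backward pass, optimizer step
    # model, optimizer and criterion are assumed to exist in the surrounding scope
    model.train()
    optimizer.zero_grad()
    x, y = batch
    y_pred = model(x)
    loss = criterion(y_pred, y)
    loss.backward()
    optimizer.step()
    return loss.item()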

Power of Events & Handlers

The cool thing about handlers is that they offer unparalleled flexibility (compared to, say, callbacks). Handlers can be any function: e.g. a lambda, a simple function, a class method, etc. Thus, there is no need to inherit from an interface and override its abstract methods, which could otherwise unnecessarily bloat your code and its complexity.

Execute any number of functions whenever you wish

Examples
trainer.add_event_handler(Events.STARTED, lambda _: print("Start training"))

# attach handler with args, kwargs
mydata = [1, 2, 3, 4]
logger = ...

def on_training_ended(data):
    print(f"Training has ended. mydata={data}")
    # Users can use variables from another scope
    logger.info("Training has ended")


trainer.add_event_handler(Events.COMPLETED, on_training_ended, mydata)
# call any number of functions on a single event
trainer.add_event_handler(Events.COMPLETED, lambda engine: print(engine.state.times))

@trainer.on(Events.ITERATION_COMPLETED)
def log_something(engine):
    print(engine.state.output)
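
Handlers attached with add_event_handler can also be detached later. A small sketch, reusing the trainer from above: add_event_handler returns a RemovableEventHandle, whose remove() method detaches the handler.

# keep the handle to detach the handler when it is no longer needed
handle = trainer.add_event_handler(
    Events.ITERATION_COMPLETED, lambda engine: print(engine.state.output)
)
handle.remove()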

Built-in events filtering

Examples
# run validation every 5 epochs
@trainer.on(Events.EPOCH_COMPLETED(every=5))
def run_validation():
    # run validation
    ...

# change some training variable once, on the 20th epoch
@trainer.on(Events.EPOCH_STARTED(once=20))
def change_training_variable():
    ...

# trigger a handler with a custom, user-defined frequency
@trainer.on(Events.ITERATION_COMPLETED(event_filter=first_x_iters))
def log_gradients():
    ...
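
first_x_iters above stands for a user-defined filter: an event_filter is any callable taking (engine, event) and returning a bool. A minimal sketch, assuming we only want the first 10 iterations (the cutoff is illustrative):

# hypothetical filter: fire only on the first 10 iterations
def first_x_iters(engine, event):
    # 'event' is the iteration number for ITERATION_COMPLETED
    return event <= 10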

Stack events to share some actions

Examples

Events can be stacked together to enable multiple calls:

@trainer.on(Events.COMPLETED | Events.EPOCH_COMPLETED(every=10))
def run_validation():
    ...

Custom events to go beyond standard events

Examples

Custom events related to backward and optimizer step calls:

from ignite.engine import EventEnum


class BackpropEvents(EventEnum):
    BACKWARD_STARTED = 'backward_started'
    BACKWARD_COMPLETED = 'backward_completed'
    OPTIM_STEP_COMPLETED = 'optim_step_completed'

def update(engine, batch):
    # ...
    loss = criterion(y_pred, y)
    engine.fire_event(BackpropEvents.BACKWARD_STARTED)
    loss.backward()
    engine.fire_event(BackpropEvents.BACKWARD_COMPLETED)
    optimizer.step()
    engine.fire_event(BackpropEvents.OPTIM_STEP_COMPLETED)
    # ...

trainer = Engine(update)
trainer.register_events(*BackpropEvents)

@trainer.on(BackpropEvents.BACKWARD_STARTED)
def function_before_backprop(engine):
    ...

Out-of-the-box metrics

Example
from ignite.metrics import Precision, Recall

precision = Precision(average=False)
recall = Recall(average=False)
F1_per_class = (precision * recall * 2 / (precision + recall))
F1_mean = F1_per_class.mean()  # torch mean method applied via MetricsLambda
F1_mean.attach(engine, "F1")
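
Metrics follow a reset/update/compute API and can also be used standalone, outside of an engine. A small sketch with illustrative tensors:

import torch
from ignite.metrics import Accuracy

acc = Accuracy()
acc.reset()
# update expects a (y_pred, y) pair
acc.update((torch.tensor([[0.2, 0.8], [0.9, 0.1]]), torch.tensor([1, 0])))
print(acc.compute())  # 1.0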

Installation

From pip:

pip install pytorch-ignite

From conda:

conda install ignite -c pytorch

From source:

pip install git+https://github.com/pytorch/ignite

Nightly releases

From pip:

pip install --pre pytorch-ignite

From conda (note that this installs the PyTorch nightly release, instead of the stable version, as a dependency):

conda install ignite -c pytorch-nightly

Docker Images

Using pre-built images

Pull a pre-built docker image from our Docker Hub and run it with docker v19.03+.

docker run --gpus all -it -v $PWD:/workspace/project --network=host --shm-size 16G pytorchignite/base:latest /bin/bash

Available pre-built images are:

  • pytorchignite/base:latest | pytorchignite/hvd-base:latest
  • pytorchignite/apex:latest | pytorchignite/hvd-apex:latest | pytorchignite/msdp-apex:latest
  • pytorchignite/vision:latest | pytorchignite/hvd-vision:latest
  • pytorchignite/apex-vision:latest | pytorchignite/hvd-apex-vision:latest | pytorchignite/msdp-apex-vision:latest
  • pytorchignite/nlp:latest | pytorchignite/hvd-nlp:latest
  • pytorchignite/apex-nlp:latest | pytorchignite/hvd-apex-nlp:latest | pytorchignite/msdp-apex-nlp:latest

For more details, see here.

Getting Started

A few pointers to get you started:

Documentation

Additional Materials

Examples

The complete list of examples can be found here.

Tutorials

Reproducible Training Examples

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

  • ImageNet - logs on Ignite Trains server coming soon ...
  • Pascal VOC2012 - logs on Ignite Trains server coming soon ...

Features:

Communication

User feedback

We have created a form for "user feedback". We appreciate any type of feedback, and this is how we would like to see our community:

  • If you like the project and want to say thanks, this is the right place.
  • If you do not like something, please share it with us, and we can see how to improve it.

Thank you!

Contributing

Please see the contribution guidelines for more information.

As always, PRs are welcome :)

Projects using Ignite

Research papers

Blog articles, tutorials, books

Toolkits

Others

See other projects at "Used by"

If your project implements a paper, represents use-cases not covered in our official tutorials, is Kaggle competition code, or simply uses Ignite and presents interesting results, we would like to add it to this list, so please send a PR with a brief description of the project.

About the team & Disclaimer

This repository is operated and maintained by volunteers in the PyTorch community in their capacities as individuals (and not as representatives of their employers). See the "About us" page for a list of core contributors. For usage questions and issues, please see the various channels here. For all other questions and inquiries, please send an email to [email protected].

Comments
  • Multilabel Metrics

    Related to #310. Description: Adds multilabel support for metrics.

    Check list:

    • [x] New tests are added (if a new feature is modified)
    • [x] New doc strings: text and/or example code are in RST format
    • [x] Documentation is updated (if required)
    opened by anmolsjoshi 62
  • distributed program hangs in SLURM

    🐛 Bug description

    Hi @vfdev-5 ,

    We got an urgent report from MONAI and Clara users that a distributed program hangs on the NVIDIA NSL-B platform, which is based on SLURM. You can reproduce the issue with this simple example: https://github.com/Project-MONAI/tutorials/blob/master/acceleration/distributed_training/unet_training_workflows.py

    It hangs when creating the ignite Accuracy metric, which seems related to this line: https://github.com/pytorch/ignite/blob/v0.4.4.post1/ignite/distributed/comp_models/native.py#L107 After removing the Accuracy metric from the example, it hangs when training starts and has not timed out yet. Please note that this example runs successfully with ignite 0.4.2. We also tried the pure PyTorch dist example in the same hardware and software environment, and it runs successfully: https://github.com/Project-MONAI/tutorials/blob/master/acceleration/distributed_training/unet_training_ddp.py

    Could you please help analyze the reason and give some advice? It blocks our cooperation with another team now.

    Thanks in advance.

    Environment

    • PyTorch Version (e.g., 1.4): 1.8.1
    • Ignite Version (e.g., 0.3.0): 0.4.4
    • OS (e.g., Linux): Ubuntu 18.04
    • How you installed Ignite (conda, pip, source): pip
    bug question 
    opened by Nic-Ma 60
  • Bug Related to Calculation of Binary Metrics

    Fixes #348

    Description: A bug in binary Precision/Recall maps binary cases into 2 classes and then averages the metrics of both. This is an incorrect way of calculating precision and recall; the binary case should be treated as a single-class problem.

    I have included the following in the code:

    • Created _check_shape to process and check the shapes of y, y_pred
    • Created _check_type to determine the type of problem - binary or multiclass - based on y and y_pred; it also raises an error if the problem type changes during training. The type is decided on the first update, and then checked on each subsequent update.
    • Calculates binary precision using a threshold function, torch.round by default
    • Includes a check for binary output, e.g. torch.equal(y, y ** 2)
    • Only uses torch.round as the default if the problem is binary
    • Appropriate checks for threshold_function
    • Added better tests - improved binary tests, incorrect threshold function, incorrect y, changing type between updates.

    Check list:

    • [x] New tests are added (if a new feature is modified)
    • [x] New doc strings: text and/or example code are in RST format
    • [x] Documentation is updated (if required)
    0.1.2 
    opened by anmolsjoshi 57
  • GH Action for docker builds

    Related to #1644 and #1721

    Description:

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)
    opened by trsvchn 54
  • Update Precision, Recall, add Accuracy (Binary and Categorical combined)

    Fixes #262

    Description: This PR updates Precision and Recall, and adds Accuracy to handle binary and categorical cases for different types of input.

    Check list:

    • [x] New tests are added.
    • [x] Updated doc string RST format
    • [x] Edited metrics.rst to add information about Accuracy.
    opened by anmolsjoshi 48
  • Add GH Action to build and publish Docker images

    Fixes #1305

    Description:

    Adds a GitHub Action that triggers on pushes to the docker folder on master, or on releases, to build and publish Docker images

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)
    opened by trsvchn 40
  • Managing Deprecation using decorators

    This is a very stripped-down version of the code; I have not written any tests yet. This is primarily for me to check whether I am going in the correct direction, so please point out everything that needs improvement or changing.

    Fixes #1479

    Description: Implemented till now

    • Make functions deprecated using the @deprecated decorator
    • Add arguments to the @deprecated decorator to customize it for each function
    • The deprecation messages are also reflected in the documentation (written in Sphinx-compatible format)

    Check list:

    • [x] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)

    Thank you to @ydcjeff for giving the idea to add the version update information :)

    opened by Devanshu24 36
  • add frequency metric to determine some average per-second metrics

    Fixes # N/A

    Description:

    This code computes X-per-second performance metrics (like words per second, images per second, etc.). It will likely be used in conjunction with ignite.metrics.RunningAverage for most utility.

    Check list:

    • [x] New tests are added (if a new feature is added)
    • [x] New doc strings: description and/or example code are in RST format
    • [x] Documentation is updated (if required)
    opened by erip 36
  • Adopt PyTorch's doc theme

    Fixes #625

    Description: As detailed in the issue, this is a proposal to switch the website's theme to PyTorch's. It illustrates Ecosystem membership and provides a cleaner theme. Additionally, existing UserCSS plugins can apply a dark style with almost no change.

    I'm not sure yet that the versions links block will be properly styled when displayed on read-the-docs, but let's iterate over that.

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [X] Documentation is updated (if required)
    opened by bosr 36
  • TrainsSaver doesn't respect Checkpoint's n_saved

    🐛 Bug description

    As the title says, it seems that TrainsSaver bypasses the Checkpoint n_saved parameter. That means that all models are saved and never updated / deleted.

    Consider this simple example:

            task.phases['train'].add_event_handler(
                Events.EPOCH_COMPLETED(every=1),
                Checkpoint(to_save, TrainsSaver(output_uri=output_uri), 'epoch', n_saved=1,
                           global_step_transform=global_step_from_engine(task.phases['train'])))
    

    The above saves every checkpoint. You end up with

    epoch_checkpoint_1.pt
    epoch_checkpoint_2.pt
    epoch_checkpoint_3.pt
    ...
    

    Now if we do the same with DiskSaver:

            task.phases['train'].add_event_handler(
                Events.EPOCH_COMPLETED(every=1),
                Checkpoint(to_save, DiskSaver(dirname=dirname), 'epoch', n_saved=1,
                           global_step_transform=global_step_from_engine(task.phases['train'])))
    

    We get only:

    epoch_checkpoint_3.pt
    

    as expected.

    Same behaviour if we save only best models using score_function, i.e. TrainsSaver saves every best model.

    Environment

    • PyTorch Version: 1.3.1
    • Ignite Version: 0.4.0.dev20200519 (EDIT: update to latest nightly, issue still exists)
    • OS: Linux
    • How you installed Ignite: pip nightly
    • Python version: 3.6
    • Any other relevant information: trains version: 0.14.3
    bug 
    opened by achigeor 34
  • [BC-breaking] Make Metrics accumulate values on device specified by user (#1232)

    • update accuracy to accumulate _num_correct in a tensor on the right device

    • update loss metric to accumulate _sum in a tensor on the right device

    • update mae metric to accumulate in a tensor on the right device

    • update mpd metric to accumulate in a tensor on the right device

    • update mse metric to accumulate in a tensor on the right device

    • update top k accuracy metric to accumulate in a tensor on the right device

    • update precision and recall metrics to accumulate in tensors on the right device

    • black formatting

    • reverted run*.sh

    • change all metrics default device to cpu except running_average

    • Update ignite/metrics/precision.py

    • remove Optional type from metric devices since default is cpu

    • add comment explaining lack of detach in accuracy metrics

    Fixes #1082

    Original PR #1232

    Description:

    • Merge PR to master

    Check list:

    • [x] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)

    cc @n2cholas

    opened by vfdev-5 32
  • Scheduled workflow failed

    Oh no, something went wrong in the scheduled workflow Nightly Releases with commit 8abf1340edd9e400afbf82625879b4b0bbb3ecf0. Please look into it:

    https://github.com/pytorch/ignite/actions/runs/3819312163

    Feel free to close this if this was just a one-off error.

    bug 
    opened by github-actions[bot] 0
  • Scheduled workflow failed

    Oh no, something went wrong in the scheduled workflow PyTorch version tests with commit 8abf1340edd9e400afbf82625879b4b0bbb3ecf0. Please look into it:

    https://github.com/pytorch/ignite/actions/runs/3814275421

    Feel free to close this if this was just a one-off error.

    bug 
    opened by github-actions[bot] 0
  • Scheduled workflow failed

    Oh no, something went wrong in the scheduled workflow Nightly Releases with commit 8abf1340edd9e400afbf82625879b4b0bbb3ecf0. Please look into it:

    https://github.com/pytorch/ignite/actions/runs/3814303393

    Feel free to close this if this was just a one-off error.

    bug 
    opened by github-actions[bot] 0
  • Scheduled workflow failed

    Oh no, something went wrong in the scheduled workflow PyTorch version tests with commit 8abf1340edd9e400afbf82625879b4b0bbb3ecf0. Please look into it:

    https://github.com/pytorch/ignite/actions/runs/3809705730

    Feel free to close this if this was just a one-off error.

    bug 
    opened by github-actions[bot] 0
  • Update @docsearch/css

    Follow up of #2765 I forgot to update the css

    Description:

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)
    docs 
    opened by ydcjeff 0
  • Scheduled workflow failed

    Oh no, something went wrong in the scheduled workflow Nightly Releases with commit 8abf1340edd9e400afbf82625879b4b0bbb3ecf0. Please look into it:

    https://github.com/pytorch/ignite/actions/runs/3797332864

    Feel free to close this if this was just a one-off error.

    bug 
    opened by github-actions[bot] 0
Releases (v0.4.10)
  • v0.4.10 (Sep 5, 2022)

    PyTorch-Ignite 0.4.10 - Release Notes

    New Features

    Engine

    • Added Engine interrupt/continue feature (#2699, #2682)

    Example:

    from ignite.engine import Engine, Events
    
    data = range(10)
    max_epochs = 3
    
    def check_input_data(e, b):
        print(f"Epoch {e.state.epoch}, Iter {e.state.iteration} | data={b}")
        i = (e.state.iteration - 1) % len(data)
        assert b == data[i]
    
    engine = Engine(check_input_data)
    
    @engine.on(Events.ITERATION_COMPLETED(every=11))
    def call_interrupt():
        engine.interrupt()
    
    print("Start engine run with interruptions:")
    state = engine.run(data, max_epochs=max_epochs)
    print("1 Engine run is interrupted at ", state.epoch, state.iteration)
    state = engine.run(data, max_epochs=max_epochs)
    print("2 Engine run is interrupted at ", state.epoch, state.iteration)
    state = engine.run(data, max_epochs=max_epochs)
    print("3 Engine ended the run at ", state.epoch, state.iteration)
    
    Output
    Start engine run with interruptions:
    Epoch 1, Iter 1 | data=0
    Epoch 1, Iter 2 | data=1
    Epoch 1, Iter 3 | data=2
    Epoch 1, Iter 4 | data=3
    Epoch 1, Iter 5 | data=4
    Epoch 1, Iter 6 | data=5
    Epoch 1, Iter 7 | data=6
    Epoch 1, Iter 8 | data=7
    Epoch 1, Iter 9 | data=8
    Epoch 1, Iter 10 | data=9
    Epoch 2, Iter 11 | data=0
    1 Engine run is interrupted at  2 11
    Epoch 2, Iter 12 | data=1
    Epoch 2, Iter 13 | data=2
    Epoch 2, Iter 14 | data=3
    Epoch 2, Iter 15 | data=4
    Epoch 2, Iter 16 | data=5
    Epoch 2, Iter 17 | data=6
    Epoch 2, Iter 18 | data=7
    Epoch 2, Iter 19 | data=8
    Epoch 2, Iter 20 | data=9
    Epoch 3, Iter 21 | data=0
    Epoch 3, Iter 22 | data=1
    2 Engine run is interrupted at  3 22
    Epoch 3, Iter 23 | data=2
    Epoch 3, Iter 24 | data=3
    Epoch 3, Iter 25 | data=4
    Epoch 3, Iter 26 | data=5
    Epoch 3, Iter 27 | data=6
    Epoch 3, Iter 28 | data=7
    Epoch 3, Iter 29 | data=8
    Epoch 3, Iter 30 | data=9
    3 Engine ended the run at  3 30
    
    • Deprecated and replaced Events.default_event_filter with None (#2644)
    • [BC-breaking] Rewritten Engine's terminate and terminate_epoch logic (#2645)
    • Improved the "time taken" log message to show milliseconds (#2650)

    Metrics and handlers

    • Added ZeRO built-in support to Checkpoint in a distributed configuration (#2658, #2642)
    • Added save_on_rank argument to DiskSaver and Checkpoint (#2641)
    • Added a handle_buffers option for EMAHandler (#2592)
    • Improved Precision and Recall metrics (#2573)

    Bug fixes

    • Median metrics (e.g. median absolute error) now use an np.median-compatible torch median implementation (#2681)
    • Fixed issues when removing handlers on filtered events (#2690)
    • Few minor fixes in Engine and Event (#2680)
    • [BC-breaking] Fixed Engine.terminate() behaviour when resumed (#2678)

    Housekeeping (docs, CI, examples, tests, etc)

    • #2700, #2698, #2696, #2695, #2694, #2691, #2688, #2679, #2676, #2675, #2673, #2671, #2670, #2668, #2667, #2666, #2665, #2664, #2662, #2660, #2659, #2657, #2656, #2655, #2653, #2652, #2651, #2647, #2646, #2640, #2639, #2637, #2630, #2629, #2628, #2625, #2624, #2620, #2618, #2617, #2616, #2613, #2611, #2609, #2606, #2605, #2604, #2601, #2597, #2584, #2581, #2542

      • Metrics tests improvements in DDP configuration

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @BowmanChow, @daniellepintz, @haochunchang, @kamalojasv181, @puhuk, @sadra-barikbin, @sandylaker, @sdesrozis, @vfdev-5

  • v0.4.9 (May 4, 2022)

    PyTorch-Ignite 0.4.9 - Release Notes

    New Features

    • Added whitelist argument to log only desired weights/grads with experiment tracking system handlers: #2550, #2523
    • Added ReduceLROnPlateauScheduler parameter scheduler: #2449
    • Added filename components in Checkpoint: #2498
    • Added missing args to ModelCheckpoint, parity with Checkpoint: #2486
    • [BC-breaking] LRScheduler is now attachable to Events.ITERATION_STARTED: #2496

    Bug fixes

    • Fixed the placement of zero_grad in create_supervised_trainer, which resulted in logging zeroed grads: #2560, #2559, #2555, #2547
    • Fixed bug in Checkpoint when loading a single non-nn.Module object: #2487
    • Removed warning in DDP if Metric.reset/update are not decorated: #2549
    • [BC-breaking] Fixed SSIM metric implementation and issue with variable batch inputs: #2564, #2563
      • compute method now returns float instead of torch.Tensor

    Housekeeping (docs, CI, examples, tests, etc)

    • #2552, #2543, #2541, #2534, #2531, #2530, #2529, #2528, #2526, #2525, #2521, #2518, #2512, #2509, #2507, #2506, #2497, #2494, #2493, #2490, #2485, #2483, #2477, #2476, #2474, #2473, #2469, #2463, #2461, #2460, #2457, #2454, #2450, #2448, #2446, #2445, #2442, #2440, #2439, #2435, #2433, #2431, #2430, #2428, #2427,

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Davidportlouis, @DevPranjal, @Ishan-Kumar2, @KevinMusgrave, @Moh-Yakoub, @asmayer, @divo12, @gorarakelyan, @jreese, @leotac, @nishantb06, @nmcguire101, @sadra-barikbin, @sayantan1410, @sdesrozis, @vfdev-5, @yuta0821

  • v0.4.8 (Jan 17, 2022)

    PyTorch-Ignite 0.4.8 - Release Notes

    New Features

    • Added data as None option to Engine.run (#2369)
    • Now Checkpoint.load_objects can accept str and load the checkpoint internally (#2305)

    Bug fixes

    • Fixed issue with DeterministicEngine.state_dict() (#2412)
    • Fixed EMAHandler warm-up behaviour (#2333)
    • Fixed _compute_nproc_per_node in case of bad dist configuration (#2288)
    • Fixed state parameter scheduler to work with EMAHandler (#2326)
    • Fixed a bug on StateParamScheduler.attach method (#2316)
    • Fixed ClearMLLogger to retrieve current task before trying to create a new one (#2344)
    • Added a checkpoint hashing utility: #2272, #2283, #2273
    • Fixed config check issue with multi-node spawn method (#2424)

    Housekeeping (docs, CI, examples, tests, etc)

    • Added doctests for docstrings: #2241, #2402, #2400, #2399, #2395, #2394, #2391, #2389, #2384, #2352, #2351, #2349, #2348, #2347, #2346, #2345, #2341, #2340, #2336, #2335, #2332, #2327, #2324, #2323, #2321, #2317, #2311, #2307, #2290, #2284, #2280
    • #2420, #2411, #2409, #2404, #2392, #2382, #2380, #2378, #2377, #2374, #2371, #2370, #2365, #2362, #2360, #2359, #2357, #2355, #2334, #2331, #2329, #2308, #2297, #2292, #2285, #2279, #2278, #2277, #2270, #2264, #2261, #2252,

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Abo7atm, @DevPranjal, @Eunjnnn, @FarehaNousheen, @H4dr1en, @Ishan-Kumar2, @KickItLikeShika, @Priyansi, @bibhabasumohapatra, @fco-dv, @louis-she, @sandylaker, @sdesrozis, @trsvchn, @vfdev-5, @ydcjeff

  • v0.4.7 (Oct 13, 2021)

    PyTorch-Ignite 0.4.7 - Release Notes

    New Features

    • Enabled LRFinder to run multiple epochs (#2200)
    • save_handler automatically detects DiskSaver when a path is passed (#2198)
    • Improved Checkpoint to use score_name as metric's key (#2146)
    • Added State parameter scheduler (#2090)
    • Added state attributes for loggers (tqdm, Polyaxon, MLFlow, WandB, Neptune, Tensorboard, Visdom, ClearML) (#2162, #2161, #2160, #2154, #2153, #2152, #2151, #2148, #2140, #2137)
    • Added gradient accumulation to supervised training step functions (#2223)
    • Automatic jupyter environment detection (#2188)
    • Added an additional argument to auto_optim to allow gradient accumulation (#2169)
    • Added micro averaging for Bleu Score (#2179)
    • Expanded BLEU, ROUGE to be calculated on batch input (#2259, #2180)
    • Moved BasicTimeProfiler, HandlersTimeProfiler, ParamScheduler, LRFinder to core (#2136, #2135, #2132)

    Bug fixes

    • Fixed docstring examples with huge bottom padding (#2225)
    • Fixed NCCL warning caused by barrier if using idist (#2257, #2254)
    • Fixed hostname list expansion (#2208, #2204)
    • Fixed tcp error with PyTorch v1.9.1 (#2211)

    Housekeeping (docs, CI, examples, tests, etc)

    • #2243, #2242, #2228, #2164, #2222, #2221, #2220, #2219, #2218, #2217, #2216, #2173, #2164, #2207, #2236, #2190, #2256, #2196, #2177, #2166, #2155, #2149, #2234, #2206, #2186, #2176, #2246, #2231, #2182, #2192, #2165, #2227, #2253, #2247, #2250, #2226, #2201, #2184, #2142, #2232, #2238, #2174

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Chandan-h-509, @Ishan-Kumar2, @KickItLikeShika, @Priyansi, @fco-dv, @gucifer, @kennethleungty, @logankilpatrick, @mfoglio, @sandylaker, @sdesrozis, @theory-in-progress, @toxa23, @trsvchn, @vfdev-5, @ydcjeff

  • v0.4.6 (Aug 2, 2021)

    PyTorch-Ignite 0.4.6 - Release Notes

    New Features

    • Added start_lr option to FastaiLRFinder (#2111)
    • Added Model's EMA handler (#2098, #2102)
    • Improved SLURM support: added hostlist expansion without using scontrol (#2092)

    Metrics

    • Added Inception Score (#2053)
    • Added FID metric (#2049, #2061, #2085, #2094, #2103)
      • Blog post "GAN Evaluation : the Frechet Inception Distance and Inception Score metrics" (https://pytorch-ignite.ai/posts/gan-evaluation-with-fid-and-is/)
    • Improved DDP support for metrics (#2096, #2083)
    • Improved MetricsLambda to work with reset/update/compute API (#2091)

    Bug fixes

    • Modified auto_dataloader to not wrap user provided DistributedSampler (#2119)
    • Raise error in DistributedProxySampler when sampler is already a DistributedSampler (#2120)
    • Improved LRFinder error message (#2127)
    • Added py.typed for type checkers (#2095)

    Housekeeping

    • #2123, #2117, #2116, #2110, #2093, #2086

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @01-vyom, @KickItLikeShika, @gucifer, @sandylaker, @schuhschuh, @sdesrozis, @trsvchn, @vfdev-5, @ydcjeff

  • v0.4.5 (Jun 24, 2021)

    PyTorch-Ignite 0.4.5 - Release Notes

    New Features

    Metrics

    • Added BLEU metric (#1834)
    • Added ROUGE metric (#1772)
    • Added MultiLabelConfusionMatrix metric (#1613)
    • Added Cohen Kappa metric (#1690)
    • Extended sync_all_reduce API (#1823)
    • Made EpochMetric more generic by extending the list of valid types (#1748)
    • Fixed issue with metric's output device (#2062)
    • Added support for list of tensors as metric input (#2055)
    • Implemented Jaccard Index shortcut for metrics (#1682)
    • Updated Loss metric to use required_output_keys (#2027)
    • Added classification report metric (#1887)
    • Added output detach for Canberra metric (#1820)
    • Improved ROC AUC (#1762)
    • Improved AveragePrecision metric and tests (#1756)
    • Uniform handling of metric types for all loggers (#2021)
    • More DDP support for multiple contrib metrics (#1891, #1869, #1865, #1850, #1830, #1829, #1806, #1805, #1803)

    Engine

    • Added native torch.cuda.amp and apex automatic mixed precision for create_supervised_trainer and create_supervised_evaluator (#1714, #1589)
    • Updated state.batch/state.output lifespan in Engine (#1919)

    Distributed module

    • Handled IterableDataset with auto_dataloader (#2028)
    • Updated Loss metric to use required_output_keys (#2027)
    • Enabled gpu support for gloo backend (#2016)
    • Added safe_mode for idist broadcast (#1839)
    • Improved idist to support different init_methods (#1767)

    Other improvements

    • Added LR finder improvements, moved to core (#2045, #1998, #1996, #1987, #1981, #1961, #1951, #1930)
    • Moved param handler to core (#1988)
    • Added an option to store EpochOutputStore data on engine.state, moved to core (#1982, #1974)
    • Set seed for xla in ignite.utils.manual_seed (#1970)
    • Fixed case for Precision/Recall in multi_label, not averaged configuration for DDP (#1646)
    • Updated PolyaxonLogger to handle v1 and v0 (#1625)
    • Added Arguments *args, **kwargs to BaseLogger.attach method (#2034)
    • Enabled metric ordering on ProgressBar (#1937)
    • Updated wandb logger (#1896)
    • Fixed type hint for ProgressBar (#2079)

    Bug fixes

    • BC-breaking: Improved loggers to keep configuration (#1945)
    • Fixed warnings in CI (#2023)
    • Fixed Precision for all zero predictions (#2017)
    • Renamed the default logger (#2006)
    • Fixed Accumulation metric with Nvidia/Apex (#1978)
    • Updated code to raise an error if SLURM is used with torch dist launcher (#1976)
    • Updated nltk-smooth2 for BLEU metric (#1911)
    • Added full read permissions to saved file (1876) (#1880)
    • Fixed a bug with horovod _do_manual_all_reduce (#1848)
    • Fixed small bug in "Finetuning EfficientNet-B0 on CIFAR100" tutorial (#2073)
    • Fixed f-string in mnist_save_resume_engine.py example (#2077)
    • Fixed an issue where RNG states were accidentally on CUDA for DeterministicEngine (#2081)

    Housekeeping

    A lot of PRs
    • Test improvements (#2061, #2057, #2047, #1962, #1957, #1946, #1943, #1928, #1927, #1915, #1914, #1908, #1906, #1905, #1903, #1902, #1899, #1899, #1882, #1870, #1866, #1860, #1846, #1832, #1828, #1821, #1816, #1815, #1814, #1812, #1811, #1809, #1808, #1807, #1804, #1802, #1801, #1799, #1798, #1797, #1796, #1795, #1793, #1791, #1785, #1784, #1783, #1781, #1776, #1774, #1769, #1768, #1760, #1755, #1746, #1741, #1718, #1717, #1713, #1631)
    • Documentation improvements and updates (#2058, #2024, #2005, #2003, #2001, #1993, #1990, #1933, #1893, #1849, #1780, #1770, #1727, #1726, #1722, #1686, #1685, #1672, #1671, #1661)
    • Example improvements (#1924, #1918, #1890, #1827, #1771, #1669, #1658, #1656, #1652, #1642, #1633, #1632)
    • CI updates (#2075, #2070, #2069, #2068, #2067, #2064, #2044, #2039, #2037, #2023, #1985, #1979, #1940, #1907, #1892, #1888, #1878, #1877, #1873, #1867, #1861, #1847, #1841, #1838, #1837, #1835, #1831, #1818, #1773, #1764, #1761, #1759, #1752, #1745, #1743, #1742, #1739, #1738, #1736, #1724, #1706, #1705, #1667, #1664, #1647)
    • Code style improvements (#2050, #2014, #1817, #1749, #1747, #1740, #1734, #1732, #1731, #1707, #1703)
    • Added docker image test script (#1733)

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @01-vyom, @Devanshu24, @Juddd, @KickItLikeShika, @Moh-Yakoub, @Muktan, @OBITORASU, @Priyansi, @afzal442, @ahmedo42, @aksg87, @aniezurawski, @cozek, @devrimcavusoglu, @fco-dv, @gucifer, @log-layer, @mouradmourafiq, @radekosmulski, @sahilg06, @sdesrozis, @sparkingdark, @thomasjpfan, @touqir14, @trsvchn, @vfdev-5, @ydcjeff

  • v0.4.4.post1 (Mar 3, 2021)

    PyTorch-Ignite 0.4.4 - Release Notes

    Bug fixes:

    • BC-breaking Moved detach outside of loss function computation (#1675, #1692)
    • Added eps to avoid NaNs in Canberra error (#1699)
    • Removed size limitation for str on collective ops (#1702)
    • Fixed imports in docker images; Pillow-SIMD is now installed (#1638, #1639, #1628, #1711)

    Doc improvements

    • #1645, #1653, #1654, #1671, #1672, #1691, #1687, #1686, #1685, #1684, #1676, #1688

    Other improvements

    • Fixed artifacts urls for pypi (#1629)

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Devanshu24, @KickItLikeShika, @Moh-Yakoub, @OBITORASU, @ahmedo42, @fco-dv, @sparkingdark, @touqir14, @trsvchn, @vfdev-5, @y0ast, @ydcjeff

  • v0.4.3 (Feb 7, 2021)

    PyTorch-Ignite 0.4.3 - Release Notes

    🎉 Since September we have a new logo (#1324) 🎉

    Core

    Metrics

    • [BC-breaking] Made Metrics accumulate values on device specified by user (#1238)
    • Fixes BC if custom metric returns a dict (#1478)
    • Added PSNR metric (#1570, #1595)

    Handlers

    • Checkpoint can save model with same filename (#1423)
    • Add greater_or_equal option to Checkpoint handler (#1597)
    • Update handlers to use setup_logger (#1617)
    • Added TimeLimit handler (#1611)

    Distributed helper module

    • Distributed cpu tests on windows (#1429)
    • Added kwargs to idist.auto_model (#1552)
    • Improved horovod initializer (#1559)

    Others

    • Dropped python 3.5 support (#1500)
    • Added torch.cuda.manual_seed_all to ignite.utils.manual_seed (#1444)
    • Fixed to_onehot function to be torch scriptable (#1592)
    • Introduced standard stream for logger setup helper (#1601)

    Docker images

    • Removed Entrypoint from Dockerfile and images (#1475)

    Examples

    • Added [Cifar10 QAT example](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10_qat) (#1556)

    Contrib

    Metrics

    • Improved Canberra metric for DDP (#1314)
    • Improved ManhattanDistance metric for DDP (#1320)
    • Improved R2Score metric for DDP (#1318)

    Handlers

    • Added new time profiler HandlersTimeProfiler which allows per handler time profiling (#1398, #1474)
    • Fixed attach_opt_params_handler to return RemovableEventHandle (#1502)
    • Renamed TrainsLogger to ClearMLLogger keeping BC (#1557, #1560)

    Documentation improvements

    • #1330, #1337, #1338, #1353, #1360, #1374, #1373, #1394, #1393, #1401, #1435, #1460, #1461, #1465, #1536, #1542 ...
    • Updated Sphinx to v3.2.1 (#1356, #1372)

    Codebase is MyPy checked

    • #1349, #1351, #1352, #1355, #1362, #1363, #1370, #1379, #1418, #1419, #1416, #1447, #1484

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @1nF0rmed, @Amab, @BanzaiTokyo, @Devanshu24, @Nic-Ma, @RaviTezu, @SamuelMarks, @abdulelahsm, @afzal442, @ahmedo42, @dgarth, @fco-dv, @gruebel, @harsh8398, @ibotdotout, @isabela-pf, @jkhenning, @josselineperdomo, @jrieke, @n2cholas, @ramesht007, @rzats, @sdesrozis, @shngt, @sroy8091, @theodumont, @thescripted, @timgates42, @trsvchn, @uribgp, @vcarpani, @vfdev-5, @ydcjeff, @zhxxn

  • v0.4.2 (Sep 20, 2020)

    PyTorch-Ignite 0.4.2 - Release Notes

    Core

    New Features and bug fixes

    • Added SSIM metric (#1217)

    • Added prebuilt Docker images (#1218)

    • Added distributed support for EpochMetric and related metrics (#1229)

    • Added required_output_keys public attribute (#1291)

    • Pre-built docker images for computer vision and nlp tasks powered with Nvidia/Apex, Horovod, MS DeepSpeed (#1304 #1248 #1218 )

    Handlers and utils

    • Allow passing keyword arguments to save function on Checkpoint (#1245)

    Distributed helper module

    • Added support of Horovod (#1195)
    • Added idist.broadcast (#1237)
    • Added sync_bn option to idist.auto_model (#1265)

    Contrib

    New Features and bug fixes

    • Added EpochOutputStore handler (#1226)
    • Improved displayed tag for tqdm progress bar (#1279)
    • Fixed bug with ParamGroupScheduler with schedulers based on different optimizers (#1274)

    And a lot of housekeeping pre-September Hacktoberfest contributions

    • Added initial Mypy check at CI step (#1296)
    • Fixed typo in docs (concepts) (#1295)
    • Fixed link to pytorch documents (#1294)
    • Removed prints from tests (#1292)
    • Downgraded tqdm version to stabilize the CI (#1293)

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @M3L6H, @Tawishi, @WrRan, @ZhiliangWu, @benji011, @fco-dv, @kamahori, @kenjihiraoka, @kilsenp, @n2cholas, @nzare, @sdesrozis, @theodumont, @vfdev-5, @ydcjeff,

  • v0.4.1 (Jul 23, 2020)

    PyTorch-Ignite 0.4.1 - Release Notes

    Core

    New Features and bug fixes

    • Improved docs for custom events (#1179)

    Handlers and utils

    • Added custom filename pattern for saving checkpoints (#1127)

    Distributed helper module

    • Improved namings in _XlaDistModel (#1173)
    • Minor optimization for idist.get_* methods (#1196)
    • Fixed distributed proxy sampler runtime error (#1192)
    • Fixed a bug when using idist with the "nccl" backend while torch CUDA is not available (#1166)
    • Fixed issue with logging XLA tensors (#1207)

    Contrib

    New Features and bug fixes

    • Fixes warning about "TrainsLogger output_handler can not log metrics value" (#1170)
    • Improved usage of contrib common methods with other save handlers (#1171)

    Examples

    • Improved Pascal Voc example (#1193)

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Joel-hanson, @WrRan, @jspisak, @marload, @ryanwongsa, @sdesrozis, @vfdev-5

  • v0.4.0.post1 (Jun 26, 2020)

    PyTorch-Ignite 0.4.0 - Release Notes

    Core

    BC breaking changes

    • Simplified engine - BC breaking change (#940 #939 #938)
      • no more internal patching of torch DataLoader.
      • seed argument of Engine.run is deprecated.
      • previous behaviour can be achieved with DeterministicEngine, introduced in #939.
    • Make all Events be CallableEventsWithFilter (#788).
    • Make ignite compatible only with pytorch >=1.3 (#1016, #1150).
      • ignite is tested on the latest and nightly versions of pytorch.
      • exact compatibility with previous versions can be checked here.
    • Remove deprecated arguments from BaseLogger (#1051).
    • Deprecated CustomPeriodicEvent (#984).
    • RunningAverage now computes output quantity average instead of a sum in DDP (#991).
    • Checkpoint stores now files with .pt extension instead of .pth (#873).
    • Arguments archived of Checkpoint and ModelCheckpoint are deprecated (#873).
    • Now create_supervised_trainer and create_supervised_evaluator do not move model to device (#910).

    See also migration note for details on how to update your code.

    New Features and bug fixes

    Ignite Distributed [Experimental]

    • Introduction of ignite.distributed as idist module (#1045)
      • common interface for distributed applications and helper methods, e.g. get_world_size(), get_rank(), ...
      • supports native torch distributed configuration, XLA devices.
      • metrics computation works in all supported distributed configurations: GPUs and TPUs.
      • Parallel utility and auto module (#1014).

    Engine & Events

    • Add flexibility on event handlers by packing triggering events (#868).
    • Engine argument is now optional in event handlers (#889, #919).
    • We initialize engine.state before calling engine.run (#1028).
    • Engine can run on dataloader based on IterableDataset and without specifying epoch_length (#1077).
    • Added user keys into Engine's state dict (#914).
    • Bug fixes in Engine class (#1048, #994).
    • Now epoch_length argument is optional (#985)
      • suitable to work with finite-unknown-length iterators.
    • Added times in engine.state (#958).

    Metrics

    • Add Frequency metric for ops/s calculations (#760, #783, #976).
    • Metrics computation can be customized with introduced MetricUsage (#979, #1054)
      • batch-wise/epoch-wise or customly programmed metric's update and compute methods.
    • Metric can be detached (#827).
    • Fixed bug in RunningAverage when output is torch tensor (#943).
    • Improved computation performance of EpochMetric (#967).
    • Fixed average recall value of ConfusionMatrix (#846).
    • Now metrics can be serialized using dill (#930).
    • Added support for nested metric values (#968).

    Handlers and utils

    • Checkpoint : improved filename when score value is Integer (#758).
    • Checkpoint : fix returning worst model of the saved models. (#745).
    • Checkpoint : load_objects can load single object checkpoints (#772).
    • Checkpoint : we now save only one checkpoint per priority (#847).
    • Checkpoint : added kwargs to Checkpoint.load_objects (#861).
    • Checkpoint : now saves model.module.state_dict() for DDP and DP (#1086).
    • Checkpoint and related: other improvements (#937).
    • Checkpoint and EarlyStopping become stateful (#1156)
    • Support namedtuple for convert_tensor (#740).
    • Added decorator one_rank_only (#882).
    • Update common.py (#904).

    Contrib

    • Added FastaiLRFinder (#596).

    Metrics

    • Added Roc Curve and Precision/Recall Curve to the metrics (#875).

    Parameters scheduling

    • Enabled multi params group for LRScheduler (#1027).
    • Parameters scheduling improvements (#1072, #859).
    • Parameters scheduler can work on torch optimizer and any object with attribute param_groups (#1163).

    Support of experiment tracking systems

    • Add NeptuneLogger (#730, #821, #951, #954).
    • Add TrainsLogger (#1020, #1036, #1043).
    • Add WandbLogger (#926).
    • Added visdom_logger to common module (#796).
    • TensorboardX is no longer mandatory if pytorch>=1.2 (#858).
    • Simplified BaseLogger attach APIs (#1006).
    • Added kwargs to loggers' constructors and respective setup functions (#1015).

    Time profiling

    • Added basic time profiler to contrib.handlers (#729).

    Bug fixes (some of PRs)

    • ProgressBar output not in sync with epoch counts (#773).
    • Fixed ProgressBar.log_message (#768).
    • Progressbar now accounts for epoch_length argument (#785).
    • Fixed broken ProgressBar if data is iterator without epoch length (#995).
    • Improved setup_logger for multiple calls (#962).
    • Fixed incorrect log position (#1099).
    • Added missing colon to logging message (#1101).
    • Fixed order of checkpoint saving and candidate removal (#1117)

    Examples

    • Basic example of FastaiLRFinder on MNIST (#838).
    • CycleGAN auto-mixed precision training example with NVidia/Apex or native torch.cuda.amp (#888).
    • Added setup_logger to mnist examples (#953).
    • Added MNIST example on TPU (#956).
    • Benchmark amp on Cifar100 (#917).
    • Updated ImageNet and Pascal VOC12 examples (#1125 #1138)

    Housekeeping

    • Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092, ...).
    • Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093, #1113, ...).
    • Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058, ...).
    • Added Serializable in mixins (#1000).
    • Merge of EpochMetric in _BaseRegressionEpoch (#970).
    • Adding typing to ignite (#716, #751, #800, #844, #944, #1037).
    • Drop Python 2 support finalized (#806).
    • Splits engine into multiple parts (#724).
    • Add Python 3.8 to Conda builds (#781).
    • Black formatted codebase with pre-commit files (#792).
    • Activate dpl v2 for Travis CI (#804).
    • AutoPEP8 (#805).
    • Fixed device conversion method (#887).
    • Refactored deps installation (#931).
    • Return handler in helpers (#997).
    • Fixes #833 (#1001).
    • Disable propagation of loggers to ancestors (#1013).
    • Consistent PEP8-compliant imports layout (#901).

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @ItamarWilf, @Joxis, @Muhamob, @Yevgnen, @amatsukawa @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards

  • v0.4rc.0.post1 (Jun 6, 2020)

    PyTorch-Ignite 0.4.0 RC - Release Notes

    Core

    BC breaking changes

    • Simplified engine - BC breaking change (#940 #939 #938)
      • no more internal patching of torch DataLoader.
      • seed argument of Engine.run is deprecated.
      • previous behaviour can be achieved with DeterministicEngine, introduced in #939.
    • Make all Events be CallableEventsWithFilter (#788).
    • Make ignite compatible only with pytorch >1.0 (#1016).
      • ignite is tested on the latest and nightly versions of pytorch.
      • exact compatibility with previous versions can be checked here.
    • Remove deprecated arguments from BaseLogger (#1051).
    • Deprecated CustomPeriodicEvent (#984).
    • RunningAverage now computes output quantity average instead of a sum in DDP (#991).
    • Checkpoint stores now files with .pt extension instead of .pth (#873).
    • Arguments archived of Checkpoint and ModelCheckpoint are deprecated (#873).
    • Now create_supervised_trainer and create_supervised_evaluator do not move model to device (#910).

    New Features and bug fixes

    Ignite Distributed [Experimental]

    • Introduction of ignite.distributed as idist module (#1045)
      • common interface for distributed applications and helper methods, e.g. get_world_size(), get_rank(), ...
      • supports native torch distributed configuration, XLA devices.
      • metrics computation works in all supported distributed configurations: GPUs and TPUs.

    Engine & Events

    • Add flexibility on event handlers by packing triggering events (#868).
    • Engine argument is now optional in event handlers (#889, #919).
    • We initialize engine.state before calling engine.run (#1028).
    • Engine can run on dataloader based on IterableDataset and without specifying epoch_length (#1077).
    • Added user keys into Engine's state dict (#914).
    • Bug fixes in Engine class (#1048, #994).
    • Now epoch_length argument is optional (#985)
      • suitable to work with finite-unknown-length iterators.
    • Added times in engine.state (#958).

    Metrics

    • Add Frequency metric for ops/s calculations (#760, #783, #976).
    • Metrics computation can be customized with introduced MetricUsage (#979, #1054)
      • batch-wise/epoch-wise or customly programmed metric's update and compute methods.
    • Metric can be detached (#827).
    • Fixed bug in RunningAverage when output is torch tensor (#943).
    • Improved computation performance of EpochMetric (#967).
    • Fixed average recall value of ConfusionMatrix (#846).
    • Now metrics can be serialized using dill (#930).
    • Added support for nested metric values (#968).

    Handlers and utils

    • Checkpoint : improved filename when score value is Integer (#758).
    • Checkpoint : fix returning worst model of the saved models. (#745).
    • Checkpoint : load_objects can load single object checkpoints (#772).
    • Checkpoint : we now save only one checkpoint per priority (#847).
    • Checkpoint : added kwargs to Checkpoint.load_objects (#861).
    • Checkpoint : now saves model.module.state_dict() for DDP and DP (#1086).
    • Checkpoint and related: other improvements (#937).
    • Support namedtuple for convert_tensor (#740).
    • Added decorator one_rank_only (#882).
    • Update common.py (#904).

    Contrib

    • Added FastaiLRFinder (#596).

    Metrics

    • Added Roc Curve and Precision/Recall Curve to the metrics (#875).

    Parameters scheduling

    • Enabled multi params group for LRScheduler (#1027).
    • Parameters scheduling improvements (#1072, #859).

    Support of experiment tracking systems

    • Add NeptuneLogger (#730, #821, #951, #954).
    • Add TrainsLogger (#1020, #1036, #1043).
    • Add WandbLogger (#926).
    • Added visdom_logger to common module (#796).
    • TensorboardX is no longer mandatory if pytorch>=1.2 (#858).
    • Simplified BaseLogger attach APIs (#1006).
    • Added kwargs to loggers' constructors and respective setup functions (#1015).

    Time profiling

    • Added basic time profiler to contrib.handlers (#729).

    Bug fixes (some of PRs)

    • ProgressBar output not in sync with epoch counts (#773).
    • Fixed ProgressBar.log_message (#768).
    • Progressbar now accounts for epoch_length argument (#785).
    • Fixed broken ProgressBar if data is iterator without epoch length (#995).
    • Improved setup_logger for multiple calls (#962).
    • Fixed incorrect log position (#1099).
    • Added missing colon to logging message (#1101).

    Examples

    • Basic example of FastaiLRFinder on MNIST (#838).
    • CycleGAN auto-mixed precision training example with NVidia/Apex or native torch.cuda.amp (#888).
    • Added setup_logger to mnist examples (#953).
    • Added MNIST example on TPU (#956).
    • Benchmark amp on Cifar100 (#917).
    • TrainsLogger semantic segmentation example (#1095).

    Housekeeping (some of PRs)

    • Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092).
    • Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093).
    • Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058).
    • Added Serializable in mixins (#1000).
    • Merge of EpochMetric in _BaseRegressionEpoch (#970).
    • Adding typing to ignite (#716, #751, #800, #844, #944, #1037).
    • Drop Python 2 support finalized (#806).
    • Dynamic typing (#723).
    • Splits engine into multiple parts (#724).
    • Add Python 3.8 to Conda builds (#781).
    • Black formatted codebase with pre-commit files (#792).
    • Activate dpl v2 for Travis CI (#804).
    • AutoPEP8 (#805).
    • Fixes nightly version bug (#809).
    • Fixed device conversion method (#887).
    • Refactored deps installation (#931).
    • Return handler in helpers (#997).
    • Fixes #833 (#1001).
    • Disable propagation of loggers to ancestors (#1013).
    • Consistent PEP8-compliant imports layout (#901).

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @ItamarWilf, @Joxis, @Muhamob, @Yevgnen, @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards

  • v0.3.0 (Jan 21, 2020)

    Core

    • Added State repr and input batch as engine.state.batch (#641)
    • Adapted core metrics only to be used in distributed configuration (#635)
    • Added fbeta metric as core metric (#653)
    • Added event filtering feature (e.g. every/once/event filter logic) (#656)
    • BC breaking change: Refactor ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (#673)
      • Added option n_saved=None to store all checkpoints (#703)
    • Improved accumulation metrics (#681)
    • Early stopping min delta (#685)
    • Dropped Python 2.7 support (#699)
    • Added feature: Metric can accept a dictionary (#689)
    • Added Dice Coefficient metric (#680)
    • Added helper method to simplify the setup of class loggers (#712)

    Engine refactoring (BC breaking change)

    Finally solved issue #62: resuming training from an epoch or iteration

    • Engine refactoring + features (#640)
      • engine checkpointing
      • variable epoch length defined by epoch_length
      • two additional events: GET_BATCH_STARTED and GET_BATCH_COMPLETED
      • cifar10 example with save/resume in distributed conf

    Contrib

    • Improved create_lr_scheduler_with_warmup (#646)
    • Added helper method to plot param scheduler values with matplotlib (#650)
    • BC breaking change: support for multiple optimizer param groups (#690)
      • Added state_dict/load_state_dict (#690)
    • BC Breaking change: Let the user specify tqdm parameters for log_message (#695)

    Examples

    • Added an example of hyperparameters tuning with Ax on CIFAR10 (#652)
    • Added CIFAR10 distributed example

    Reproducible trainings as "References"

    Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

    Features:

    • Distributed training with mixed precision by nvidia/apex
    • Experiments tracking with MLflow or Polyaxon

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @anubhavashok, @kagrze, @maxfrei750, @vfdev-5

  • v0.2.1(Oct 3, 2019)

    Core

    Various improvements in the core part of the library:

    • Add epoch_bound parameter to RunningAverage (#488)

    • Bug fixes in ConfusionMatrix via a new implementation (#572) - BC breaking

    • Added event_to_attr in register_events (#523)

    • Added accumulative single variable metrics (#524)

    • should_terminate is reset between runs (#525)

    • to_onehot returns tensor with uint8 dtype (#571) - may be BC breaking

    • Removable handle returned from Engine.add_event_handler() to enable single-shot events (#588); see the sketch after this list

    • New documentation style 🎉
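
    A small sketch of the removable handle enabling a single-shot handler; the handler body is illustrative:

    from ignite.engine import Engine, Events

    trainer = Engine(lambda engine, batch: None)  # placeholder train step

    def on_first_iteration(engine):
        print("reached the first iteration")
        handle.remove()  # detach this handler -> it fires exactly once

    handle = trainer.add_event_handler(Events.ITERATION_COMPLETED, on_first_iteration)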

    Distributed

    We removed the MNIST distributed example as misleading and ~~provided a distrib branch~~ (XX/YY/2020: distrib branch merged to master) to adapt metrics for distributed computation. The code works and is under testing. Please try it in your use case and leave us feedback.

    Now in Contributions module

    • Added mlflow logger (#558)
    • R-Squared Metric in regression metrics module (#496)
    • Add tag field to OptimizerParamsHandler (#502)
    • Improved ProgressBar with TerminateOnNan (#506); see the sketch after this list
    • Support for layer freezing with Tensorboard integration (#515)
    • Improved OutputHandler API (#531)
    • Improved create_lr_scheduler_with_warmup (#556)
    • Added "all" option to metric_names in contrib loggers (#565)
    • Added GPU usage info as metric (#569)
    • Other bug fixes
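
    A minimal sketch combining ProgressBar with TerminateOnNan; the one-line step below is a placeholder standing in for a real training step:

    from ignite.engine import Engine, Events
    from ignite.handlers import TerminateOnNan
    from ignite.contrib.handlers import ProgressBar

    trainer = Engine(lambda engine, batch: 0.0)  # placeholder step returning a loss

    # stop the run as soon as the step output becomes NaN or infinite
    trainer.add_event_handler(Events.ITERATION_COMPLETED, TerminateOnNan())
    ProgressBar().attach(trainer)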

    Notebook examples

    • Added Cycle-GAN notebook (#500)
    • Finetune EfficientNet-B0 on CIFAR100 (#544)
    • Added Fashion MNIST jupyter notebook (#549)

    Updated nightly builds

    From pip:

    pip install --pre pytorch-ignite
    

    From conda (this installs the pytorch nightly release instead of the stable version as a dependency):

    conda install ignite -c pytorch-nightly
    

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @ANUBHAVNATANI, @Bibonaut, @Evpok, @Hiroshiba, @JeroenDelcour, @Mxbonn, @anmolsjoshi, @asford, @bosr, @johnstill, @marrrcin, @vfdev-5, @willfrey

  • v0.2.0(Apr 9, 2019)

    Core

    • We removed the deprecated metric classes BinaryAccuracy and CategoricalAccuracy, which are replaced by Accuracy.

    • Multilabel option for Accuracy, Precision, Recall metrics.

    • Added other metrics.

    • Operations on metrics: p = Precision(average=False) (a worked sketch follows this list)

      • apply PyTorch operators: mean_precision = p.mean()
      • indexing: precision_no_bg = p[1:]
    • Improved our docs with more examples.

    • Added FAQ section with best practices.

    • Bug fixes
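
    For illustration, a short sketch of composing metrics arithmetically into F1; the placeholder evaluator step stands in for a real inference step returning (y_pred, y):

    from ignite.engine import Engine
    from ignite.metrics import Precision, Recall

    evaluator = Engine(lambda engine, batch: batch)  # placeholder eval step

    p = Precision(average=False)
    r = Recall(average=False)

    # arithmetic on metrics builds a derived metric: per-class F1, then its mean
    F1 = (p * r * 2 / (p + r)).mean()
    F1.attach(evaluator, "f1")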

    Now in Contributions module

    Notebook examples

    • VAE on MNIST
    • CNN for text classification

    Nightly builds with pytorch-nightly as dependency

    We also provide pip/conda nightly builds with pytorch-nightly as dependency:

    pip install pytorch-ignite-nightly
    

    or

    conda install -c pytorch ignite-nightly 
    

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟! 💯 We really appreciate your involvement in the project (in alphabetical order):

    Bibonaut, IlyaOvodov, TheCodez, anmolsjoshi, fabianschilling, maaario, snowyday, vfdev-5, willprice, zasdfgbnm, zippeurfou

    vfdev-5 would also like to thank his wife and newborn baby girl Nina for their support while working on this release!

  • v0.1.2(Dec 14, 2018)

    • Improved and fixed bugs with binary accuracy, precision and recall
    • Metric arithmetic
    • ParamScheduler now supports multiple optimizers / multiple parameter groups

    Thanks to all our contributors!

  • v0.1.1(Nov 9, 2018)

    What's new in this release:

    • Contrib module with
      • Parameter scheduling
      • TQDM ProgressBar
      • ROC/AUC, AP, MaxAE metrics
      • TBPTT Engine
    • New handlers:
      • TerminateOnNan
    • New metrics:
      • RunningAverage (see the sketch after this list)
      • Merged Categorical/Binary -> Accuracy
    • Refactor of examples
    • New examples:
      • Fast Neural Style
      • RL
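
    A small sketch of the new pieces working together; import paths are as in current releases, and the placeholder step stands in for a real train step returning the loss:

    from ignite.engine import Engine
    from ignite.metrics import RunningAverage
    from ignite.contrib.handlers import ProgressBar

    trainer = Engine(lambda engine, batch: 0.0)  # placeholder step returning a loss

    # smooth the raw per-iteration output and display it in the TQDM bar
    RunningAverage(output_transform=lambda x: x).attach(trainer, "loss")
    ProgressBar().attach(trainer, metric_names=["loss"])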

    Thanks to all our contributors!

  • v0.1.0(Jun 18, 2018)

    Introduced Engine, Handlers and Metrics.

    Metrics:

    • BinaryAccuracy
    • CategoricalAccuracy
    • Loss
    • Precision
    • Recall
    • etc.

    Handlers:

    • ModelCheckpoint
    • EarlyStopping (see the sketch after this list)
    • Timer
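
    For illustration, a minimal EarlyStopping wiring; the engines and score function are assumed placeholders, and EarlyStopping stops training when the score stops improving:

    from ignite.engine import Engine, Events
    from ignite.handlers import EarlyStopping

    trainer = Engine(lambda engine, batch: None)    # placeholder train step
    evaluator = Engine(lambda engine, batch: None)  # placeholder eval step

    def score_fn(engine):
        # EarlyStopping expects a score that increases as the model improves
        return -engine.state.metrics.get("val_loss", 0.0)

    handler = EarlyStopping(patience=5, score_function=score_fn, trainer=trainer)
    evaluator.add_event_handler(Events.COMPLETED, handler)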

    Features:

    • PyTorch 0.4 support

    Examples:

    • mnist.py
    • mnist_with_tensorboardx.py
    • mnist_with_visdom.py
    • dcgan.py