A lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices, distributed modes, mixed-precision, and PyTorch extensions.

Overview

Stoke

Add a little accelerant to your torch


About

stoke is a lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices (e.g. CPU, GPU), distributed modes, mixed-precision, and PyTorch extensions. This allows you to switch from local full-precision CPU to mixed-precision distributed multi-GPU with extensions (like optimizer state sharding) by simply changing a few declarative flags. Additionally, stoke exposes configuration settings for every underlying backend for those that want configurability and raw access to the underlying libraries.

In short, stoke is the best of PyTorch Lightning Accelerators disconnected from the rest of PyTorch Lightning. Write whatever PyTorch code you want, but leave device and backend context switching to stoke.

Supports

  • Devices: CPU and GPU
  • Distributed backends: PyTorch DDP, Deepspeed DDP, Horovod
  • Mixed precision (FP16): native PyTorch AMP, NVIDIA APEX, Deepspeed FP16
  • Extensions: Fairscale (OSS, SDDP), Deepspeed ZeRO

Benefits/Capabilities

  • Declarative style API -- allows you to declare or specify the desired state and let stoke handle the rest
  • Mirrors base PyTorch style forward, loss, backward, and step calls
  • Automatic device placement of model(s) and data
  • Universal interface for saving and loading regardless of backend(s) or device
  • Automatic handling of gradient accumulation and clipping
  • Common attrs interface for all backend configuration parameters (with docstrings)
  • Helper methods for printing synced losses, device-specific printing, and counting model parameters
  • Extra(s) -- a custom torch.utils.data.distributed.Sampler: BucketedDistributedSampler, which buckets data by a sorted index and then randomly samples from specific bucket(s) to prevent situations like grossly mismatched sequence lengths wasting compute on excess padding (see the sketch below)
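
As a rough illustration of the bucketed sampler, the sketch below swaps it in where a plain DistributedSampler would otherwise go (see the Quick Start for the dataset and Stoke object). The import path and constructor arguments shown here are assumptions for illustration only; consult the API docs for the exact signature.

from stoke import BucketedDistributedSampler  # assumed import location

# Hypothetical usage -- the argument names below are illustrative, not the verified API
sorted_idx = list(range(len(dataset)))  # assumed: dataset indices pre-sorted by sequence length
sampler = BucketedDistributedSampler(
    dataset=dataset,
    sorted_idx=sorted_idx,   # assumed parameter: the sorted index to bucket by
    buckets=8,               # assumed parameter: number of buckets
    batch_size=32,
    num_replicas=stoke_obj.world_size,
    rank=stoke_obj.rank
)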

Installation

(Required for FP16 Support) Install NVIDIA Apex

If you are planning on using mixed-precision (aka FP16), please install Apex so that stoke supports all FP16 methods. If you are not planning on using mixed precision, this step can be skipped, since all Apex imports are wrapped in a try/except and only conditionally imported (as sketched below).

Follow the instructions here.
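
The conditional-import behavior mentioned above follows the usual guarded-import pattern; a minimal sketch (illustrative only, not stoke's actual source) looks like this:

# Guarded optional import -- Apex-backed FP16 is only usable if the import succeeds
try:
    from apex import amp  # NVIDIA Apex mixed-precision module
    _APEX_AVAILABLE = True
except ImportError:
    amp = None
    _APEX_AVAILABLE = False

if not _APEX_AVAILABLE:
    print("NVIDIA Apex not installed -- Apex FP16 methods will be unavailable")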

(Optional) OpenMPI Support

Follow the instructions here or here

Also, refer to the Dockerfile here

via PyPi

pip install stoke

via PyPi w/ Optional MPI Support

pip install stoke[mpi]

Documentation and Examples

Full documentation can be found here and examples are here.

Quick Start

Basic Definitions

Assuming some already existing common PyTorch objects (dataset: torch.utils.data.Dataset, model: torch.nn.Module, loss: torch.nn.(SomeLossFunction)):

import torch

# Some existing user defined dataset using torch.utils.data.Dataset
class RandomData(torch.utils.data.Dataset):
    pass

# An existing model defined with torch.nn.Module
class BasicNN(torch.nn.Module):
    pass

# Our existing dataset from above
dataset = RandomData(...)

# Our existing model from above 
model = BasicNN(...)

# A loss function
loss = torch.nn.BCEWithLogitsLoss()

Optimizer Setup

stoke requires a slightly different way to define the optimizer (as it handles instantiation internally) by using StokeOptimizer. Pass in the uninstantiated torch.optim.* class object and any **kwargs that need to be passed to the __init__ call:

from stoke import StokeOptimizer
from torch.optim import Adam

# Some Adam parameters
lr = 0.001
beta1 = 0.9
beta2 = 0.98
epsilon = 1E-09

# Create the StokeOptimizer
opt = StokeOptimizer(
    optimizer=Adam,
    optimizer_kwargs={
        "lr": lr,
        "betas": (beta1, beta2),
        "eps": epsilon
    }
)
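
The same pattern applies to any torch.optim optimizer; for example, swapping in SGD only changes the class object and its kwargs:

from stoke import StokeOptimizer
from torch.optim import SGD

# Pass the uninstantiated class plus its kwargs -- stoke instantiates it internally
opt = StokeOptimizer(
    optimizer=SGD,
    optimizer_kwargs={
        "lr": 0.01,
        "momentum": 0.9
    }
)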

Create Stoke Object

Now create the base stoke object. Pass in the model, loss(es), and StokeOptimizer from above as well as any flags/choices to set different backends/functionality/extensions and any necessary configurations. As an example, we set the device type to GPU, use the PyTorch DDP backend for distributed multi-GPU training, toggle native PyTorch AMP mixed precision, add Fairscale optimizer-state-sharding (OSS), and turn on automatic gradient accumulation and clipping (4 steps and clip-by-norm). In addition, let's customize PyTorch DDP, PyTorch AMP and Fairscale OSS with some of our own settings but leave all the others as default configurations.

import os
from stoke import AMPConfig
from stoke import ClipGradNormConfig
from stoke import DDPConfig
from stoke import DistributedOptions
from stoke import FairscaleOSSConfig
from stoke import FP16Options
from stoke import Stoke

# Custom AMP configuration
# Change the initial scale factor of the loss scaler
amp_config = AMPConfig(
    init_scale=2.**14
)

# Custom DDP configuration
# Automatically swap out batch_norm layers with sync_batch_norm layers
# Notice here we have to deal with the local rank parameter that DDP needs (from env or cmd line)
ddp_config = DDPConfig(
    local_rank=os.getenv('LOCAL_RANK'),
    convert_to_sync_batch_norm=True
)

# Custom OSS configuration
# activate broadcast_fp16 -- Compress the model shards in fp16 before sharing them in between ranks
oss_config = FairscaleOSSConfig(
    broadcast_fp16=True
)

# Configure gradient clipping using the configuration object
grad_clip = ClipGradNormConfig(
    max_norm=5.0,
    norm_type=2.0
)

# Build the object with the correct options/choices (notice how DistributedOptions and FP16Options are already provided
# to make choices simple) and configurations (passed to configs as a list)
stoke_obj = Stoke(
    model=model,
    optimizer=opt,
    loss=loss,
    batch_size_per_device=32,
    gpu=True,
    fp16=FP16Options.amp,
    distributed=DistributedOptions.ddp,
    fairscale_oss=True,
    grad_accum_steps=4,
    grad_clip=grad_clip,
    configs=[amp_config, ddp_config, oss_config]
)
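
To see the declarative switching in action, dropping back to plain full-precision single-device CPU training (e.g. for local debugging) is just a matter of changing the flags -- the training code that follows stays the same. A minimal sketch (the disabled-flag values shown are assumptions about the defaults):

# Same model/loss/optimizer -- FP32 on CPU, no distributed backend, no extensions
cpu_stoke_obj = Stoke(
    model=model,
    optimizer=opt,
    loss=loss,
    batch_size_per_device=32,
    gpu=False,
    fp16=None,
    distributed=None,
    fairscale_oss=False,
    grad_accum_steps=4,
    grad_clip=grad_clip
)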

Build PyTorch DataLoader

Next we need to create a torch.utils.data.DataLoader object. Similar to the optimizer definition this has to be done a little differently with stoke for it to correctly handle each of the different backends. stoke provides a mirrored wrapper to the native torch.utils.data.DataLoader class (as the DataLoader method) that will return a correctly configured torch.utils.data.DataLoader object. Since we are using a distributed backend (DDP) we need to provide a DistributedSampler or similar class to the DataLoader. Note that the Stoke object that we just created has the properties .rank and .world_size which provide common interfaces to this information regardless of the backend!

from torch.utils.data.distributed import DistributedSampler

# Create our DistributedSampler
# Note: dataset is the torch.utils.data.Dataset from the first section
sampler = DistributedSampler(
    dataset=dataset,
    num_replicas=stoke_obj.world_size,
    rank=stoke_obj.rank
)

# Call the DataLoader method on the stoke_obj to correctly create a DataLoader instance
data_loader = stoke_obj.DataLoader(
    dataset=dataset,
    collate_fn=lambda batch: dataset.collate_fn(batch),
    batch_size=32,
    sampler=sampler,
    num_workers=4
)
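
One standard PyTorch detail (not stoke-specific) worth remembering: when shuffling with a DistributedSampler, call set_epoch at the start of every epoch so the shuffle order differs between epochs:

# Standard DistributedSampler practice -- re-seed the shuffle each epoch
for epoch in range(100):
    sampler.set_epoch(epoch)
    for x, y in data_loader:
        ...  # training step as shown in the next section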

Run a Training Loop

At this point, we've successfully configured stoke! Since stoke handled wrapping/building your torch.nn.Module and torch.utils.data.DataLoader, device placement is handled automatically (in our example the model and data are moved to GPUs). The following simple training loop should look fairly standard, except that the model forward, loss, backward, and step calls are all made on the Stoke object instead of on each individual component (since it internally maintains the model, loss, and optimizer along with all necessary code for all backends/functionality/extensions). In addition, we use one of the many helper functions built into stoke to print the synced, gradient-accumulated loss across all devices (an all-reduce across all devices with ReduceOp.SUM divided by world_size), which is printed only on rank 0 by default.

epoch = 0
# Iterate until number epochs
while epoch < 100:
    # Loop through the dataset
    for x, y in data_loader:
        # Use the Stoke wrapped version(s) of model, loss, backward, and step
        # Forward
        out = stoke_obj.model(x)
        # Loss
        loss = stoke_obj.loss(out, y.to(dtype=torch.float).unsqueeze(1))
        # Detach loss and sync across devices -- only after grad accum step has been called 
        stoke_obj.print_mean_accumulated_synced_loss()
        # Backward
        stoke_obj.backward(loss)
        # stoke_obj.dump_model_grads()
        # Step
        stoke_obj.step()
    epoch += 1
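
Since the underlying torch.nn.Module remains reachable through the model_access property, evaluation looks like standard PyTorch; a minimal validation sketch (reusing the training loader purely for brevity):

# Switch the wrapped module to eval mode for validation, then back to train
stoke_obj.model_access.eval()
with torch.no_grad():
    for x, y in data_loader:
        out = stoke_obj.model(x)
        val_loss = stoke_obj.loss(out, y.to(dtype=torch.float).unsqueeze(1))
stoke_obj.model_access.train()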

Save/Load

stoke provides a unified interface to save and load model checkpoints regardless of backend/functionality/extensions. Simply call the save or load methods on the Stoke object.

# Save the model w/ a dummy extra dict
path, tag = stoke_obj.save(
    path='/path/to/save/dir',
    name='my-checkpoint-name',
    extras={'foo': 'bar'}
)

# Attempt to load a saved checkpoint -- returns the extras dictionary
extras = stoke_obj.load(
    path=path,
    tag=tag
)
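
Putting the two together, a common pattern is to checkpoint periodically inside the training loop and restore state before resuming -- a sketch reusing the save/load calls above (carrying the epoch counter in the extras dict is illustrative):

# Periodic checkpoint inside the epoch loop
if epoch % 10 == 0:
    path, tag = stoke_obj.save(
        path='/path/to/save/dir',
        name=f'my-checkpoint-name-epoch-{epoch}',
        extras={'epoch': epoch}
    )

# On restart, restore everything stoke manages and read back the extras
extras = stoke_obj.load(path=path, tag=tag)
start_epoch = extras['epoch'] if extras else 0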

Launchers

See the documentation here
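
For reference, the multi-GPU DDP example above is typically started with the standard PyTorch launcher (or torchrun on newer PyTorch versions); the script name below is a placeholder, and the issue reports further down use the same style of invocation:

python -m torch.distributed.launch --nproc_per_node=4 train.py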

Compatibility Matrix

Certain combinations of backends/functionality are not compatible with each other. The below table indicates which combinations should work together:

[Compatibility matrix: rows and columns cover CPU, GPU, PyTorch DDP, Deepspeed DDP, Horovod, Deepspeed FP16, Native AMP, NVIDIA APEX, Deepspeed ZeRO, and Fairscale -- see the documentation for the full table of supported combinations]

stoke is developed and maintained by the Artificial Intelligence Center of Excellence at Fidelity Investments.

Comments
  • TypeError: intercept_args() got an unexpected keyword argument 'multiprocessing_context'

    TypeError: intercept_args() got an unexpected keyword argument 'multiprocessing_context'

    I think the multiprocessing_context argument isn't actually used anywhere concretely, but it still gets passed through to the DataLoader object, which causes the issue. This could be a simple bug as well, but I couldn't figure out the exact cause. The error logs are below:

    File "/home/..../stoke/stoke.py", line 835, in DataLoader persistent_workers=persistent_workers, File "/..../stoke/data.py", line 127, in __init__ persistent_workers=persistent_workers, TypeError: intercept_args() got an unexpected keyword argument 'multiprocessing_context' ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 16888) of binary: /anaconda/envs/py37_default/bin/python

    bug 
    opened by rushi-the-neural-arch 37
  • How to use a loss function which itself is a CNN!?? DDP training issue

    How to use a loss function which itself is a CNN!?? DDP training issue

    Describe the bug

    This is not a bug exactly, but I need some context on how to implement this. I am trying to implement a novel loss function (ref: https://github.com/gfxdisp/mdf). The gist of it is to use a pre-trained network for low-level vision tasks like image denoising, SR, etc., so the discriminator (a CNN) itself is used as the loss function (the CNN takes the model's prediction as input and produces the metric). The issue is that I couldn't implement it in a way that is compatible with Stoke, which leads to the standard error: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! Can you please suggest a way to mitigate this or handle this task efficiently?

    To Reproduce

    Steps to reproduce the behavior:

    1. Code/Pseudo-code is -
    from mdfloss import MDFLoss
    path_disc = "mdf/weights/Ds_SISR.pth"
    loss = MDFLoss(path_disc, cuda_available=True)
    
    stoke_model = Stoke(
        model=model,
        verbose=True,    
        optimizer=optimizer,
        loss=loss,
        batch_size_per_device= opt.batchSize,   
        gpu=True,   
        fp16= None, #FP16Options.amp.value, 
        distributed=DistributedOptions.ddp.value,
        fairscale_oss=True, 
        fairscale_sddp=True, 
        grad_accum_steps=1,
        configs= [amp_config, ddp_config, oss_config],     
        grad_clip=ClipGradNormConfig(max_norm = opt.grad_clip, norm_type=2.0),
    )
    
    
    def train(train_dataloader, stoke_model: Stoke, scheduler1, scheduler2, epoch: int):
        
        example_ct = 0  # number of examples seen
        batch_ct = 0
        sum_loss = 0
        
        stoke_model.print_on_devices(f"Starting Epoch {epoch + 1}")
        stoke_model.model_access.train()
        
        for idx, (inputs, targets) in enumerate(train_dataloader):
            
            # Call the model through the stoke object interface
            outputs = stoke_model.model(inputs)
            train_loss = stoke_model.loss(outputs, targets)
            
            stoke_model.print_ema_loss(prepend_msg=f"Step {idx+1} -- EMA Loss")
            
            # Call backward through the stoke object interface
            stoke_model.backward(loss=train_loss)
            
            # Call step through the stoke object interface
            stoke_model.step()
            scheduler1.step()
            scheduler2.step()
            
            sum_loss += stoke_model.detach_and_sync_loss(loss=train_loss)
    
            example_ct +=  len(inputs)
            batch_ct += 1
    
            # Report metrics every 50th batch
            if ((batch_ct + 1) % 50) == 0:
                train_log(train_loss, example_ct, epoch)
                #print(train_loss,  example_ct, epoch)
    
        avg_loss = sum_loss / len(train_dataloader)
        
        return avg_loss
    
    
    2. Ran config as - env CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 Stoke-DDP.py --projectName "Stoke-4K-2X-DDP" --batchSize 18 --nEpochs 2 --lr 1e-3 --weight_decay 1e-4 --grad_clip 0.1

    3. Error produced is - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper__cudnn_convolution)

    Environment:

    • OS: Ubuntu 18.04.5,
    • Python version - 3.7.7
    • PyTorch Version - 1.10:
    • Deepspeed Version: 0.5.4
    • Horovod Version: 0.23
    • Fairscale Version: 0.4.0
    • CUDA/cuDNN version: 11.2 / 7.6.2
    • Stoke configuration: 0.2.0

    Thanks!

    opened by rushi-the-neural-arch 3
  • attr.exceptions.NotAnAttrsClassError: <class 'float'> is not an attrs-decorated class.

    attr.exceptions.NotAnAttrsClassError: <class 'float'> is not an attrs-decorated class.

    Describe the bug

    The verbose=True option (the default) in the Stoke class throws an attrs error while printing out the configuration details:

    attr.exceptions.NotAnAttrsClassError: <class 'float'> is not an attrs-decorated class.

    To Reproduce

    The sample script is posted here - Stoke-DDP

    Just change the verbose=False parameter to verbose=True in the Stoke class argument to reproduce the bug

    python -m torch.distributed.launch Stoke-DDP.py --projectName "PyTorch-4K-2X" --batchSize 20 --nEpochs 2 --lr 1e-3 --threads 8

    Expected behavior

    It should print out all of the parameter info passed to the Stoke class.

    Screenshots/Code Snippets

    
        stoke_model = Stoke(
            model=model,
            verbose=True,     # verbose just prints out stuff, throws an error somewhere so disabled it
            optimizer=optimizer,
            loss=loss,
            batch_size_per_device=opt.batchSize,
            gpu=True,
            fp16= None, #FP16Options.amp,
            distributed= DistributedOptions.ddp, #"ddp", #DistributedOptions.ddp
            fairscale_oss=True,
            fairscale_sddp=True,
            grad_accum_steps=4,
            grad_clip=opt.grad_clip,
            configs=[amp_config, ddp_config, oss_config]
        )
    
    


    Environment:

    • OS: Ubuntu 18.04.5,
    • Python version - 3.7.7
    • PyTorch Version - 1.10:
    • Deepspeed Version: 0.5.4
    • Horovod Version: 0.23
    • Fairscale Version: 0.4.0
    • CUDA/cuDNN version: 11.2 / 7.6.2
    • Stoke configuration: 0.2.0
    bug 
    opened by rushi-the-neural-arch 3
  • pip(deps): bump isort from 5.9.3 to 5.11.3

    pip(deps): bump isort from 5.9.3 to 5.11.3

    Bumps isort from 5.9.3 to 5.11.3.

    Release notes

    Sourced from isort's releases.

    5.11.3

    Changes

    :beetle: Fixes

    :construction_worker: Continuous Integration

    v5.11.3

    Changes

    :beetle: Fixes

    :construction_worker: Continuous Integration

    5.11.2

    Changes

    5.11.1

    Changes December 12 2022

    :beetle: Fixes

    5.11.0

    Changes December 12 2022

    ... (truncated)

    Changelog

    Sourced from isort's changelog.

    5.11.3 December 16 2022

    5.11.2 December 12 2022

    5.11.1 December 12 2022

    5.11.0 December 12 2022

    5.10.1 November 8 2021

    • Fixed #1819: Occasional inconsistency with multiple src paths.
    • Fixed #1840: skip_file ignored when on the first docstring line

    5.10.0 November 3 2021

    • Implemented #1796: Switch to tomli for pyproject.toml configuration loader.
    • Fixed #1801: CLI bug (--exend-skip-glob, overrides instead of extending).

    ... (truncated)

    Commits

    dependencies 
    opened by dependabot[bot] 2
  • pip(deps): bump pylint from 2.10.2 to 2.15.8

    pip(deps): bump pylint from 2.10.2 to 2.15.8

    Bumps pylint from 2.10.2 to 2.15.8.

    Commits
    • 1f84ed9 Bump pylint to 2.15.8, update changelog (#7899)
    • 6178e41 Define Protocol as abstract to prevent abstract-method FP (#7839) (#7879)
    • 438025d add test and expl for line-too-long useless-supp FP (#7887)
    • 19c0534 Fix missing-param-doc for escaped underscores (#7878)
    • e1856b2 [github actions] Reinstate tests and check on maintenance branch
    • 5fb17e0 multiple-statements no longer triggers for function stubs using inlined `...
    • 5a96370 Bump pylint to 2.15.7, update changelog (#7845)
    • 43109b6 Revert "Fix crash when using enumerate with start and a class attribu...
    • ff73282 Fix logging-fstring-interpolation false positive (#7846) (#7854)
    • 86b8c64 Fix crash when using enumerate with start and a class attribute (#7824)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • pip(deps): bump pylint from 2.10.2 to 2.15.7

    pip(deps): bump pylint from 2.10.2 to 2.15.7

    Bumps pylint from 2.10.2 to 2.15.7.

    Commits
    • 5a96370 Bump pylint to 2.15.7, update changelog (#7845)
    • 43109b6 Revert "Fix crash when using enumerate with start and a class attribu...
    • ff73282 Fix logging-fstring-interpolation false positive (#7846) (#7854)
    • 86b8c64 Fix crash when using enumerate with start and a class attribute (#7824)
    • ebf2824 Execute tests on maintenance branche's PR
    • 9ec1aa0 Do not crash if next() is called without arguments (#7831)
    • ac2da87 Upgrade the versions of astroid and dill (#7838)
    • 06d5d1a Add content: write rights for backporting job (#7826)
    • df5ebb5 Fix used-before-assignment for variable annotations guarded by TYPE_CHECKIN...
    • 1baf4be Deduplicate module file paths to prevent redundant scans. (#7747)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • pip(deps): bump pylint from 2.10.2 to 2.15.6

    pip(deps): bump pylint from 2.10.2 to 2.15.6

    Bumps pylint from 2.10.2 to 2.15.6.

    Commits

    dependencies 
    opened by dependabot[bot] 2
  • pip(deps): bump pylint from 2.10.2 to 2.15.5

    pip(deps): bump pylint from 2.10.2 to 2.15.5

    Bumps pylint from 2.10.2 to 2.15.5.

    Commits
    • bb17694 Merge pull request #7660 from cdce8p/release-2.15.5
    • fc7dc5e Bump pylint to 2.15.5, update changelog
    • 8def9a0 [doc] Upgrade the contributors list and CONTRIBUTORS.txt
    • 9c239c2 Sort examples/pylintrc for 2.15.5
    • 97ebe0b Sort --generate-rcfile output
    • 1579c43 Use relative paths in create_contributor_list.py (#7656)
    • c2d42ba Remove index from unnecessary-dunder-call check (#7650)
    • e8dc9b6 Swap plugin cache to pickle-able values when done (#7640)
    • b051fab Add regression test for no-member with empty AnnAssign (#7632)
    • 8cbc5a3 Upgrade astroid to 2.12.12 (#7649)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • pip(deps): bump pylint from 2.10.2 to 2.15.4

    pip(deps): bump pylint from 2.10.2 to 2.15.4

    Bumps pylint from 2.10.2 to 2.15.4.

    Commits
    • 20af036 Bump pylint to 2.15.4, update changelog
    • 78f8423 [towncrier] Add whitespaces between fragment in towncrier (#7431)
    • 49e15ab Disambiguate between str and enum member args to typing.Literal (#7414)
    • 07f484f Upgrade astroid version following 2.12.11 release
    • fa63d9b [doc] Upgrade the contributors list and CONTRIBUTORS.txt
    • a258854 Raise syntax-error correctly on invalid encodings (#7553)
    • 43ecd7d Fix handling of -- as separator between positional args and flags (#7551)
    • 66ae21c Check py-version for async unnecessary-dunder-call (#7549)
    • 983d5fc Fix crash in modified_iterating checker for set defined as a class attrib...
    • 5c22a79 Prevent redefined-outer-name for if t.TYPE_CHECKING
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • pip(deps): bump pylint from 2.10.2 to 2.15.3

    pip(deps): bump pylint from 2.10.2 to 2.15.3

    Bumps pylint from 2.10.2 to 2.15.3.

    Commits
    • 403dac6 Bump pylint to 2.15.3, update changelog
    • 38e2784 Bump astroid to 2.12.10
    • f5e168e Fix undefined-loop-variable with NoReturn and Never (#7476)
    • fbc9e66 Accept a comma-separated list of messages IDs in --help-msg (#7490)
    • fe3436e False positive global-variable-not-assigned (#7479)
    • 52cf631 [invalid-class-object] Fix crash when class is defined with a tuple
    • 8e05ff6 Fix a crash in the modified-iterating-dict checker involving instance attri...
    • 9b359ad Fix unhashable-member crash when lambda used as a dict key (#7454)
    • 5716ad1 Bump pylint to 2.15.2, update changelog
    • 49b5d5d Upgrade astroid version following 2.12.9 release
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • pip(deps): bump pylint from 2.10.2 to 2.15.2

    pip(deps): bump pylint from 2.10.2 to 2.15.2

    Bumps pylint from 2.10.2 to 2.15.2.

    Commits
    • 5716ad1 Bump pylint to 2.15.2, update changelog
    • 49b5d5d Upgrade astroid version following 2.12.9 release
    • 97b07f7 Add more cases that emit bad-plugin-value (#7284)
    • c5aefa2 Bump pylint to 2.15.1, update changelog
    • de613c2 Fix and refactors for docparams extension (#7398)
    • e63a352 Fix 2.15 changelog (#7369)
    • 5b85ecc Suppress OSError in config file discovery (#7423)
    • 262723a Make missing-yield/raises-doc respect no-docstring-rgx option
    • a21af6a Upgrade astroid version following 2.12.8 release
    • b047220 Make disable-next only consider the succeeding line (#7411)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • pip(deps): bump isort from 5.9.3 to 5.11.4

    pip(deps): bump isort from 5.9.3 to 5.11.4

    Bumps isort from 5.9.3 to 5.11.4.

    Release notes

    Sourced from isort's releases.

    5.11.4

    Changes

    :package: Dependencies

    5.11.3

    Changes

    :beetle: Fixes

    :construction_worker: Continuous Integration

    v5.11.3

    Changes

    :beetle: Fixes

    :construction_worker: Continuous Integration

    5.11.2

    Changes

    5.11.1

    Changes December 12 2022

    ... (truncated)

    Changelog

    Sourced from isort's changelog.

    5.11.4 December 21 2022

    5.11.3 December 16 2022

    5.11.2 December 12 2022

    5.11.1 December 12 2022

    5.11.0 December 12 2022

    ... (truncated)

    Commits
    • 98390f5 Merge pull request #2059 from PyCQA/version/5.11.4
    • df69a05 Bump version 5.11.4
    • f9add58 Merge pull request #2058 from PyCQA/deps/poetry-1.3.1
    • 36caa91 Bump Poetry 1.3.1
    • 3c2e2d0 Merge pull request #1978 from mgorny/toml-test
    • 45d6abd Remove obsolete toml import from the test suite
    • 3020e0b Merge pull request #2057 from mgorny/poetry-install
    • a6fdbfd Stop installing documentation files to top-level site-packages
    • ff306f8 Fix tag template to match old standard
    • 227c4ae Merge pull request #2052 from hugovk/main
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • pip(deps): bump pylint from 2.10.2 to 2.15.9

    pip(deps): bump pylint from 2.10.2 to 2.15.9

    Bumps pylint from 2.10.2 to 2.15.9.

    Commits
    • 1ded4d0 Bump pylint to 2.15.9, update changelog (#7952)
    • 785c629 [testutil] More information in output for functional test fail (#7948)
    • 3c3ab98 [pypy3.8] Disable multiple-statements false positive on affected functional t...
    • dca3940 Fix inconsistent argument exit code when argparse exit with its own error cod...
    • 494e514 Fix ModuleNotFoundError when using pylint_django (#7940) (#7941)
    • 83668de fix: bump dill to >= 0.3.6, prevents tests hanging with python3.11 (#7918)
    • eadc308 [github actions] Fix enchant's install in the spelling job
    • 391323e Avoid hanging forever after a parallel job was killed (#7834) (#7930)
    • 4655b92 Prevent used-before-assignment in pattern matching with a guard (#7922) (#7...
    • 1f84ed9 Bump pylint to 2.15.8, update changelog (#7899)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • Update contributor guide to include language from the template

    Update contributor guide to include language from the template

    Signed-off-by: Brian Warner [email protected]


    name: Update the contributor guide
    about: Merges in language from the new standard template


    What does this PR do?

    Fidelity is updating and standardizing our GitHub presence. This PR adds various guidelines from the default contributor guide.

    Checklist

    N/A for this PR (governance related)

    • [ ] Did you adhere to PEP-8 standards?
    • [ ] Did you run black and isort prior to submitting your PR?
    • [ ] Does your PR pass all existing unit tests?
    • [ ] Did you add associated unit tests for any additional functionality?
    • [ ] Did you provide code documentation (Google Docstring format) whenever possible, even for simple functions or classes?
    • [ ] Did you add necessary documentation to the website?

    Review

    Request will go to reviewers to approve for merge.

    opened by brianwarner 0
  • pip(deps): bump mpi4py from 3.0.3 to 3.1.4

    pip(deps): bump mpi4py from 3.0.3 to 3.1.4

    Bumps mpi4py from 3.0.3 to 3.1.4.

    Release notes

    Sourced from mpi4py's releases.

    3.1.4

    WARNING: This is the last release supporting Python 2.

    • Rebuild C sources with Cython 0.29.32 to support Python 3.11.

    • Fix contiguity check for DLPack and CAI buffers.

    • Workaround build failures with setuptools v60.

    3.1.3

    WARNING: This is the last release supporting Python 2.

    • Add missing support for MPI.BOTTOM to generalized all-to-all collectives.

    3.1.2

    WARNING: This is the last release supporting Python 2.

    • mpi4py.futures: Add _max_workers property to MPIPoolExecutor.

    • mpi4py.util.dtlib: Fix computation of alignment for predefined datatypes.

    • mpi4py.util.pkl5: Fix deadlock when using ssend() + mprobe().

    • mpi4py.util.pkl5: Add environment variable MPI4PY_PICKLE_THRESHOLD.

    • mpi4py.rc: Interpret "y" and "n" strings as boolean values.

    • Fix/add typemap/typestr for MPI.WCHAR/MPI.COUNT datatypes.

    • Minor fixes and additions to documentation.

    • Minor fixes to typing support.

    • Support for local version identifier (PEP-440).

    3.1.1

    WARNING: This is the last release supporting Python 2.

    • Fix typo in Requires-Python package metadata.

    • Regenerate C sources with Cython 0.29.24.

    3.1.0

    WARNING: This is the last release supporting Python 2.

    • New features:

      • mpi4py.util: New package collecting miscellaneous utilities.
    • Enhancements:

    ... (truncated)

    Changelog

    Sourced from mpi4py's changelog.

    Release 3.1.4 [2022-11-02]

    .. warning:: This is the last release supporting Python 2.

    • Rebuild C sources with Cython 0.29.32 to support Python 3.11.

    • Fix contiguity check for DLPack and CAI buffers.

    • Workaround build failures with setuptools v60.

    Release 3.1.3 [2021-11-25]

    .. warning:: This is the last release supporting Python 2.

    • Add missing support for MPI.BOTTOM to generalized all-to-all collectives.

    Release 3.1.2 [2021-11-04]

    .. warning:: This is the last release supporting Python 2.

    • mpi4py.futures: Add _max_workers property to MPIPoolExecutor.

    • mpi4py.util.dtlib: Fix computation of alignment for predefined datatypes.

    • mpi4py.util.pkl5: Fix deadlock when using ssend() + mprobe().

    • mpi4py.util.pkl5: Add environment variable MPI4PY_PICKLE_THRESHOLD.

    • mpi4py.rc: Interpret "y" and "n" strings as boolean values.

    • Fix/add typemap/typestr for MPI.WCHAR/MPI.COUNT datatypes.

    • Minor fixes and additions to documentation.

    • Minor fixes to typing support.

    • Support for local version identifier (PEP-440).

    Release 3.1.1 [2021-08-14]

    .. warning:: This is the last release supporting Python 2.

    • Fix typo in Requires-Python package metadata.

    ... (truncated)

    Commits
    • a7610e5 Bump version number to 3.1.4
    • ab2b897 Update release notes
    • e67bb07 mpi4py.rc: Honor environment variables for initialize/finalize
    • d699ec6 allow IPv6 sockets in test_dynproc.py testJoin
    • e3e4e02 test: NumPy 1.22+ DLPack support raises TypeError if the array is readonly
    • 9ccf55f fix: Fix contiguity check for CAI/DLPack buffers with shape[i] <= 1
    • d53c159 setup: Workaround CC='xcrun ...' in macOS
    • 069bfde test: Disable test failing with NumPy 1.22.0
    • c252c9c test: Disable test failing with NumPy 1.22.0
    • 471e12e lint: Fix mypy warnings
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • pip(deps): bump pytest-cov from 2.12.1 to 4.0.0

    pip(deps): bump pytest-cov from 2.12.1 to 4.0.0

    Bumps pytest-cov from 2.12.1 to 4.0.0.

    Changelog

    Sourced from pytest-cov's changelog.

    4.0.0 (2022-09-28)

    Note that this release drops support for multiprocessing.

    • --cov-fail-under no longer causes pytest --collect-only to fail Contributed by Zac Hatfield-Dodds in [#511](https://github.com/pytest-dev/pytest-cov/issues/511) <https://github.com/pytest-dev/pytest-cov/pull/511>_.

    • Dropped support for multiprocessing (mostly because issue 82408 <https://github.com/python/cpython/issues/82408>_). This feature was mostly working but very broken in certain scenarios and made the test suite very flaky and slow.

      There is builtin multiprocessing support in coverage and you can migrate to that. All you need is this in your .coveragerc::

      [run]
      concurrency = multiprocessing
      parallel = true
      sigterm = true

    • Fixed deprecation in setup.py by trying to import setuptools before distutils. Contributed by Ben Greiner in [#545](https://github.com/pytest-dev/pytest-cov/issues/545) <https://github.com/pytest-dev/pytest-cov/pull/545>_.

    • Removed undesirable new lines that were displayed while reporting was disabled. Contributed by Delgan in [#540](https://github.com/pytest-dev/pytest-cov/issues/540) <https://github.com/pytest-dev/pytest-cov/pull/540>_.

    • Documentation fixes. Contributed by Andre Brisco in [#543](https://github.com/pytest-dev/pytest-cov/issues/543) <https://github.com/pytest-dev/pytest-cov/pull/543>_ and Colin O'Dell in [#525](https://github.com/pytest-dev/pytest-cov/issues/525) <https://github.com/pytest-dev/pytest-cov/pull/525>_.

    • Added support for LCOV output format via --cov-report=lcov. Only works with coverage 6.3+. Contributed by Christian Fetzer in [#536](https://github.com/pytest-dev/pytest-cov/issues/536) <https://github.com/pytest-dev/pytest-cov/issues/536>_.

    • Modernized pytest hook implementation. Contributed by Bruno Oliveira in [#549](https://github.com/pytest-dev/pytest-cov/issues/549) <https://github.com/pytest-dev/pytest-cov/pull/549>_ and Ronny Pfannschmidt in [#550](https://github.com/pytest-dev/pytest-cov/issues/550) <https://github.com/pytest-dev/pytest-cov/pull/550>_.

    3.0.0 (2021-10-04)

    Note that this release drops support for Python 2.7 and Python 3.5.

    • Added support for Python 3.10 and updated various test dependencies. Contributed by Hugo van Kemenade in [#500](https://github.com/pytest-dev/pytest-cov/issues/500) <https://github.com/pytest-dev/pytest-cov/pull/500>_.
    • Switched from Travis CI to GitHub Actions. Contributed by Hugo van Kemenade in [#494](https://github.com/pytest-dev/pytest-cov/issues/494) <https://github.com/pytest-dev/pytest-cov/pull/494>_ and [#495](https://github.com/pytest-dev/pytest-cov/issues/495) <https://github.com/pytest-dev/pytest-cov/pull/495>_.
    • Add a --cov-reset CLI option. Contributed by Danilo Šegan in [#459](https://github.com/pytest-dev/pytest-cov/issues/459) <https://github.com/pytest-dev/pytest-cov/pull/459>_.
    • Improved validation of --cov-fail-under CLI option. Contributed by ... Ronny Pfannschmidt's desire for skark in [#480](https://github.com/pytest-dev/pytest-cov/issues/480) <https://github.com/pytest-dev/pytest-cov/pull/480>_.
    • Dropped Python 2.7 support.

    ... (truncated)

    Commits
    • 28db055 Bump version: 3.0.0 → 4.0.0
    • 57e9354 Really update the changelog.
    • 56b810b Update chagelog.
    • f7fced5 Add support for LCOV output
    • 1211d31 Fix flake8 error
    • b077753 Use modern approach to specify hook options
    • 00713b3 removed incorrect docs on data_file.
    • b3dda36 Improve workflow with a collecting status check. (#548)
    • 218419f Prevent undesirable new lines to be displayed when report is disabled
    • 60b73ec migrate build command from distutils to setuptools
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • pip(deps): bump coveralls from 3.2.0 to 3.3.1

    pip(deps): bump coveralls from 3.2.0 to 3.3.1

    Bumps coveralls from 3.2.0 to 3.3.1.

    Release notes

    Sourced from coveralls's releases.

    3.3.1 (2021-11-11)

    Bug Fixes

    Internal

    • exclude a few incompatible coverage versions (#337)

    coverage versions v6.0.0 through v6.1.1 exhibited some incompatibilies with coveralls; we've updated our version compatibility ranges to exclude those versions.

    3.3.0 (2021-11-04)

    Features

    Note this implicitly improves support for Python 3.10, as coverage v6.x includes some fixes for v3.10 of Python.

    Bug Fixes

    This solves some edge cases around duplicated / unmerged coverage results in parallel runs.

    Changelog

    Sourced from coveralls's changelog.

    3.3.1 (2021-11-11)

    Bug Fixes

    Internal

    • exclude a few incompatible coverage versions (#337)

    coverage versions v6.0.0 through v6.1.1 exhibited some incompatibilies with coveralls; we've updated our version compatibility ranges to exclude those versions.

    3.3.0 (2021-11-04)

    Features

    Note this implicitly improves support for Python 3.10, as coverage v6.x includes some fixes for v3.10 of Python.

    Bug Fixes

    This solves some edge cases around duplicated / unmerged coverage results in parallel runs.

    Commits
    • c35bf51 chore(release): bump version
    • 48f0ac0 chore: fix lint issues
    • 2610885 fix: correctly support parallel execution on CircleCI (#336)
    • 495ddd4 chore: exclude incompatible coverage versions (#337)
    • 17f52d2 test: remove test inter-dependencies
    • e03a2de chore(release): bump version
    • 95ac8a6 tests: fix tests & linting
    • 372443d feat(deps): add support for coverage v6.x (#330)
    • 1a0fd9b fix(env): fixup handling of default env service values (#314)
    • f5ebce6 docs(config): avoid over-exposing GITHUB_TOKEN (#332)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0