FFCV: Fast Forward Computer Vision (and other ML workloads!)

Overview

Fast Forward Computer Vision: train models at a fraction of the cost with accelerated data loading!

[install] [quickstart] [features] [docs] [support slack] [homepage]
Maintainers: Guillaume Leclerc, Andrew Ilyas and Logan Engstrom

ffcv is a drop-in data loading system that dramatically increases data throughput in model training:

Keep your training algorithm the same; just replace the data loader! See the benchmark tables below for the resulting speedups.

ffcv also comes prepackaged with fast, simple code for standard vision benchmarks; see the Prepackaged Computer Vision Benchmarks section below.

Installation

conda create -y -n ffcv python=3.9 cupy pkg-config compilers libjpeg-turbo opencv pytorch torchvision cudatoolkit=11.3 numba -c pytorch -c conda-forge
conda activate ffcv
pip install ffcv

Troubleshooting note: if the above commands result in a package conflict error, try running conda config --env --set channel_priority flexible in the environment and rerunning the installation command.

Citation

If you use FFCV, please cite it as:

@misc{leclerc2022ffcv,
    author = {Guillaume Leclerc and Andrew Ilyas and Logan Engstrom and Sung Min Park and Hadi Salman and Aleksander Madry},
    title = {ffcv},
    year = {2022},
    howpublished = {\url{https://github.com/libffcv/ffcv/}},
    note = {commit xxxxxxx}
}

(Make sure to replace xxxxxxx above with the hash of the commit used!)

Quickstart

Accelerate any learning system with ffcv. First, convert your dataset into ffcv format (ffcv converts both indexed PyTorch datasets and WebDatasets):

from ffcv.writer import DatasetWriter
from ffcv.fields import RGBImageField, IntField

# Your dataset (`torch.utils.data.Dataset`) of (image, label) pairs
my_dataset = make_my_dataset()
write_path = '/output/path/for/converted/ds.beton'

# Pass a type for each data field
writer = DatasetWriter(write_path, {
    # Tune options to optimize dataset size, throughput at train-time
    'image': RGBImageField(max_resolution=256, jpeg_quality=90),
    'label': IntField()
})

# Write dataset
writer.from_indexed_dataset(my_dataset)
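
For reference, make_my_dataset() above just stands in for any map-style dataset returning (image, label) pairs. Here is a minimal sketch, using torchvision's CIFAR-10 purely as an illustrative stand-in:

import torchvision

def make_my_dataset():
    # Any indexed (map-style) dataset works; with no transform, CIFAR-10
    # yields (PIL image, int label) pairs, which is exactly what
    # RGBImageField / IntField expect.
    return torchvision.datasets.CIFAR10('/tmp/cifar', train=True, download=True)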

Then replace your old loader with the ffcv loader at train time (in PyTorch, no other changes required!):

from ffcv.loader import Loader, OrderOption
from ffcv.transforms import ToTensor, ToDevice, ToTorchImage, Cutout
from ffcv.fields.decoders import IntDecoder, RandomResizedCropRGBImageDecoder

# Random resized crop
decoder = RandomResizedCropRGBImageDecoder((224, 224))

# Data decoding and augmentation
image_pipeline = [decoder, Cutout(), ToTensor(), ToTorchImage(), ToDevice(0)]
label_pipeline = [IntDecoder(), ToTensor(), ToDevice(0)]

# Pipeline for each data field
pipelines = {
    'image': image_pipeline,
    'label': label_pipeline
}

# Replaces the PyTorch data loader (`torch.utils.data.DataLoader`); bs and
# num_workers are placeholders: pick values that suit your hardware (e.g., 512 and 8)
loader = Loader(write_path, batch_size=bs, num_workers=num_workers,
                order=OrderOption.RANDOM, pipelines=pipelines)

# rest of training / validation proceeds identically
for epoch in range(epochs):
    ...
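
For concreteness, here is one way the body of that loop might look; the model, optimizer, and loss function are placeholders (not part of ffcv), and in practice you would also normalize the images:

import torch

model = build_model().cuda()                       # placeholder: any torch.nn.Module
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(epochs):
    for images, labels in loader:
        # Batches arrive on the GPU already, thanks to ToDevice(0) in the pipelines;
        # images come as uint8, labels with shape (batch, 1).
        opt.zero_grad(set_to_none=True)
        loss = loss_fn(model(images.float()), labels.squeeze(-1))
        loss.backward()
        opt.step()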

See here for a more detailed guide to deploying ffcv for your dataset.

Prepackaged Computer Vision Benchmarks

From grid searches to benchmarking to fast research iteration, there are many reasons to want faster model training. Below we present premade codebases for training on ImageNet and CIFAR, including both (a) extensible codebases and (b) numerous premade training configurations.

ImageNet

We provide a self-contained script for training ImageNet fast. The table below reports the training-time versus accuracy frontier for 1-GPU ResNet-18 and 8-GPU ResNet-50 configurations.

Link to Config top_1 top_5 # Epochs Time (mins) Architecture Setup
Link 0.784 0.941 88 77.2 ResNet-50 8 x A100
Link 0.780 0.937 56 49.4 ResNet-50 8 x A100
Link 0.772 0.932 40 35.6 ResNet-50 8 x A100
Link 0.766 0.927 32 28.7 ResNet-50 8 x A100
Link 0.756 0.921 24 21.7 ResNet-50 8 x A100
Link 0.738 0.908 16 14.9 ResNet-50 8 x A100
Link 0.724 0.903 88 187.3 ResNet-18 1 x A100
Link 0.713 0.899 56 119.4 ResNet-18 1 x A100
Link 0.706 0.894 40 85.5 ResNet-18 1 x A100
Link 0.700 0.889 32 68.9 ResNet-18 1 x A100
Link 0.688 0.881 24 51.6 ResNet-18 1 x A100
Link 0.669 0.868 16 35.0 ResNet-18 1 x A100

Train your own ImageNet models! You can use our training script and premade configurations to train any model seen on the above graphs.

CIFAR-10

We also include premade code for efficient training on CIFAR-10 in the examples/ directory, reaching 93% top-1 accuracy in 36 seconds on a single A100 GPU (without optimizations such as MixUp, Ghost BatchNorm, etc., which could raise the accuracy even further). You can find the training script here.

Features

Computer vision or not, FFCV can help make training faster in a variety of resource-constrained settings! Our performance guide has a more detailed account of the ways in which FFCV can adapt to different performance bottlenecks.

  • Plug-and-play with any existing training code: Rather than changing aspects of model training itself, FFCV focuses on removing data bottlenecks, which turn out to be a problem everywhere from neural network training to linear regression. This means that:

    • FFCV can be introduced into any existing training code in just a few lines of code (e.g., just swapping out the data loader and optionally the augmentation pipeline);
    • You don't have to change the model itself to make it faster (e.g., feel free to analyze models without CutMix, Dropout, momentum scheduling, etc.);
    • FFCV can speed up much more than just neural network training: in fact, the more data-bottlenecked the application (e.g., linear regression, bulk inference, etc.), the faster FFCV will make it (see the sketch at the end of this item)!

    See our Getting started guide, Example walkthroughs, and Code examples to see how easy it is to get started!
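
    As a concrete (if simplified) example of such a data-bottlenecked workload, here is a sketch of bulk inference driven by the same Loader built in the Quickstart; the model is a placeholder and OrderOption.SEQUENTIAL is just the natural ordering choice here:

    import torch

    model = build_model().cuda().eval()      # placeholder pretrained model
    preds = []
    with torch.no_grad():
        for images, _ in loader:             # same ffcv Loader as above
            preds.append(model(images.float()).argmax(dim=1).cpu())
    preds = torch.cat(preds)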

  • Fast data processing without the pain: FFCV automatically handles data reading, pre-fetching, caching, and transfer between devices in an extremely efficient way, so that users don't have to think about it.

  • Automatically fused-and-compiled data processing: By either using pre-written FFCV transformations or easily writing custom ones, users can take advantage of FFCV's compilation and pipelining abilities, which will automatically fuse and compile simple Python augmentations to machine code using Numba, and schedule them asynchronously to avoid loading delays.
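
    To give a flavor of what a custom transform looks like, below is a rough sketch following the Operation interface described in the FFCV docs; the RandomInvert class itself is hypothetical, and the state/allocation details may need adjusting for real use:

    from dataclasses import replace
    from typing import Callable, Optional, Tuple

    import numpy as np
    from ffcv.pipeline.operation import Operation
    from ffcv.pipeline.state import State
    from ffcv.pipeline.allocation_query import AllocationQuery
    from ffcv.pipeline.compiler import Compiler

    class RandomInvert(Operation):
        """Invert each image's pixel values with probability p (illustrative only)."""
        def __init__(self, p: float = 0.5):
            super().__init__()
            self.p = p

        def generate_code(self) -> Callable:
            p = self.p
            parallel_range = Compiler.get_iterator()

            def invert(images, dst):
                # Plain Python/NumPy loop; FFCV compiles it with Numba and
                # parallelizes it over the batch dimension.
                for i in parallel_range(images.shape[0]):
                    if np.random.rand() < p:
                        dst[i] = 255 - images[i]
                    else:
                        dst[i] = images[i]
                return dst

            invert.is_parallel = True
            return invert

        def declare_state_and_memory(
            self, previous_state: State
        ) -> Tuple[State, Optional[AllocationQuery]]:
            # Output keeps the input shape/dtype; request a scratch buffer to write into.
            return (replace(previous_state, jit_mode=True),
                    AllocationQuery(previous_state.shape, previous_state.dtype))

    Such a transform would then drop into the image pipeline just like the built-in ones, e.g. [decoder, RandomInvert(0.25), ToTensor(), ...].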

  • Load data fast from RAM, SSD, or networked disk: FFCV exposes user-friendly options that can be adjusted based on the resources available. For example, if a dataset fits into memory, FFCV can cache it at the OS level and ensure that multiple concurrent processes all get fast data access. Otherwise, FFCV can use fast process-level caching and will optimize data loading to minimize the underlying number of disk reads. See The Bottleneck Doctor guide for more information.
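
    As a sketch of the two ends of that spectrum (os_cache and OrderOption.QUASI_RANDOM are the documented knobs; batch size and worker counts are arbitrary, and write_path / pipelines are reused from the Quickstart):

    from ffcv.loader import Loader, OrderOption

    # Dataset fits in RAM: let the OS page cache hold it so that multiple
    # concurrent processes share one cached copy.
    in_memory_loader = Loader(write_path, batch_size=512, num_workers=8,
                              os_cache=True, order=OrderOption.RANDOM,
                              pipelines=pipelines)

    # Dataset larger than RAM on SSD / networked storage: process-level caching
    # plus quasi-random ordering to minimize the number of underlying disk reads.
    on_disk_loader = Loader(write_path, batch_size=512, num_workers=8,
                            os_cache=False, order=OrderOption.QUASI_RANDOM,
                            pipelines=pipelines)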

  • Training multiple models per GPU: Thanks to fully asynchronous thread-based data loading, you can now interleave training multiple models on the same GPU efficiently, without any data-loading overhead. See this guide for more info.

  • Dedicated tools for image handling: All of the features above apply equally to all sorts of machine learning models, but FFCV also offers some vision-specific features, such as fast JPEG encoding and decoding, storing datasets as mixtures of raw and compressed images to trade off I/O overhead and compute overhead, etc. See the Working with images guide for more information.
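
    For example, here is a sketch of writing a dataset as a raw/JPEG mixture via RGBImageField's write_mode and compress_probability options (the 50/50 split and quality value are just illustrative; write_path and my_dataset are from the Quickstart):

    from ffcv.writer import DatasetWriter
    from ffcv.fields import RGBImageField, IntField

    writer = DatasetWriter(write_path, {
        # Store roughly half the images as raw pixels (cheap to decode) and half
        # as JPEG (cheap to store), trading I/O overhead against compute overhead.
        'image': RGBImageField(write_mode='proportion',
                               compress_probability=0.5,
                               max_resolution=256,
                               jpeg_quality=90),
        'label': IntField(),
    })
    writer.from_indexed_dataset(my_dataset)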

Contributors

  • Guillaume Leclerc
  • Logan Engstrom
  • Andrew Ilyas
  • Sam Park
  • Hadi Salman
Comments
  • Two or more crops for a single image

    Hey! Thank you for the great work.

    Is it possible to apply the same pipeline multiple times to the same image? From what I checked, this is currently not possible, as the images seem to be loaded and cropped within a single operation. Is there any way around this, loading the image only once and applying the augmentation pipeline multiple times?

    enhancement 
    opened by vturrisi 30
  • Cifar10 example: very slow training speed on P100 GPU

    Hello, as the title suggests, I ran the CIFAR example code on a Tesla P100 GPU but got a very slow training time (784.29 s total) compared to the officially stated 36 s. Is anything wrong with my setup? I don't think a P100 is 20x slower than an A100.

    Below is the command-line output (copied directly; the repeated per-epoch progress bars are condensed):

    (ffcv) ➜ ffcv python train_cifar10.py --config-file default_config.yaml
    ┌ Arguments defined────────┬───────────────────────────┐
    │ Parameter                │ Value                     │
    ├──────────────────────────┼───────────────────────────┤
    │ training.lr              │ 0.5                       │
    │ training.epochs          │ 24                        │
    │ training.lr_peak_epoch   │ 5                         │
    │ training.batch_size      │ 512                       │
    │ training.momentum        │ 0.9                       │
    │ training.weight_decay    │ 0.0005                    │
    │ training.label_smoothing │ 0.1                       │
    │ training.num_workers     │ 4                         │
    │ training.lr_tta          │ True                      │
    │ data.train_dataset       │ ../data/cifar_train.beton │
    │ data.val_dataset         │ ../data/cifar_test.beton  │
    └──────────────────────────┴───────────────────────────┘
    100%|██████████| 97/97 [01:04<00:00, 1.51it/s]   (first epoch)
    100%|██████████| 97/97 [00:30<00:00, 3.16it/s]   (each of the remaining epochs, 24 total)
    Total time: 784.29492
    100%|██████████| 97/97 [00:11<00:00, 8.59it/s]
    train accuracy: 98.0%
    100%|██████████| 20/20 [00:04<00:00, 4.07it/s]
    test accuracy: 92.1%

    opened by Vandermode 16
  • Git tag for v0.0.3

    I see that there's a release tagged v0.0.3rc1, but no tag for release v0.0.3, which is currently the latest release on PyPI. Would it be possible to get a git tag for v0.0.3?

    Having git tags for releases makes it a lot easier for others to package ffcv, e.g. in Nixpkgs.

    opened by samuela 14
  • torch.nn.Module classes cannot be used in Pipeline

    I tried to add a color-jittering augmentation to the ImageNet training by inserting the line torchvision.transforms.ColorJitter(.4, .4, .4) right after RandomHorizontalFlip, but hit this error:

    numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
    Failed in nopython mode pipeline (step: nopython frontend)
    Untyped global name 'self': Cannot determine Numba type of <class 'ffcv.transforms.module.ModuleWrapper'>
    
    File "../ffcv/ffcv/transforms/module.py", line 25:
            def apply_module(inp, _):
                res = self.module(inp)
                ^
    
    During: resolving callee type: type(CPUDispatcher(<function ModuleWrapper.generate_code.<locals>.apply_module at 0x7f921d4c98b0>))
    During: typing of call at  (2)
    
    During: resolving callee type: type(CPUDispatcher(<function ModuleWrapper.generate_code.<locals>.apply_module at 0x7f921d4c98b0>))
    During: typing of call at  (2)
    
    
    File "/home/chengxuz/ffcv-imagenet", line 2:
    <source missing, REPL/exec in use?>
    
    
    

    Any idea what's happening here and how to fix it?

    opened by chengxuz 14
  • Tried allocating 144000000 but page size is {self.page_size}

    Hi,

    I am trying to convert my dataset into ffcv format. There are 3 inputs: img, ldx, and ldy. img has size (2, 600, 600), ldx has size (100, 1, 600, 600), and ldy has the same size as ldx. The output has size (100, 600, 600), and all inputs and outputs are float32. Constructing the writer works fine, but writer.from_indexed_dataset(train_dataset) throws the error:

    "Tried allocating 144000000 but page size is {self.page_size}".

    import numpy as np
    from ffcv.writer import DatasetWriter
    from ffcv.fields import NDArrayField
    from dataset import dataset
    
    num=20
    train_dataset = dataset(num, is_train=True)
    
    write_path = '/ffcv_test/d.beton'
    
    for img, ldx, ldy, output in train_dataset:
           writer = DatasetWriter(write_path, {
            'img': NDArrayField(shape=(2, 600, 600), dtype=np.dtype('float32')),
            'ldx': NDArrayField(shape=(100,1, 600, 600), dtype=np.dtype('float32')),
            'ldy': NDArrayField(shape=(100, 1, 600, 600), dtype=np.dtype('float32')),
            'output': NDArrayField(shape=(100,600,600), dtype=np.dtype('float32')),
            }, num_workers=16)    
    
    writer.from_indexed_dataset(train_dataset)
    
    fixed_in_next_release 
    opened by bolandih 13
  • How to perform identical transformation on images and targets in dense prediction tasks?

    In dense prediction (or Pix2Pix) tasks such as segmentation, depth estimation, etc., we often need to perform the same transformation on images and targets: e.g., either both the image and the corresponding target are horizontally flipped, or neither is.

    How do I achieve that with FFCV?

    opened by shariqfarooq123 12
  • conflicts among packages when installing a conda environment

    Hi! Congrats on the amazing work, and thank you very much for sharing it publicly. I encountered an issue while creating a conda environment to install ffcv.

    I executed the command from the repo on a Linux machine with conda 4.10.3 and got the error below (see the conda output). Would you have any recommendation on how to resolve these conflicts?

    Thanks!

    Command to create a conda environment:

    conda create -y -n ffcv python=3.9 cupy pkg-config compilers libjpeg-turbo opencv pytorch torchvision cudatoolkit=11.3 numba -c pytorch -c conda-forge
    
    Conda output (condensed):
    UnsatisfiableError: The following specifications were found to be incompatible with each other:

    Output in format: Requested package -> Available versions

    Package libuuid conflicts for:
    python=3.9 -> libuuid[version='>=2.32.1,<3.0a0']
    cupy -> python[version='>=3.10,<3.11.0a0'] -> libuuid[version='>=1.0.3,<2.0a0|>=2.32.1,<3.0a0']
    numba -> python[version='>=3.9,<3.10.0a0'] -> libuuid[version='>=1.0.3,<2.0a0|>=2.32.1,<3.0a0']
    pytorch -> python[version='>=3.9,<3.10.0a0'] -> libuuid[version='>=2.32.1,<3.0a0']
    torchvision -> python[version='>=3.9,<3.10.0a0'] -> libuuid[version='>=1.0.3,<2.0a0|>=2.32.1,<3.0a0']

    (Analogous conflict listings follow for libgfortran-ng, python_abi, libgcc-ng, libprotobuf, libstdcxx-ng, zlib, libgfortran5, freetype, cudnn, jpeg, _libgcc_mutex, six, cudatoolkit, libwebp, ffmpeg, pypy3.7, expat, llvm-openmp, liblapacke, python, certifi, libgfortran4, x264, numpy-base, and related packages; the full solver output runs to several hundred lines.)
    |py36hde5b4d6_1|py36hde5b4d6_0|py37hde5b4d6_0|py27h2f8d375_1|py37h2f8d375_1|py36h2f8d375_1|py37hde5b4d6_1|py27hde5b4d6_1|py37h2f8d375_0|py36hde5b4d6_0|py37h2f8d375_0|py36h2f8d375_0|py36hde5b4d6_0|py37hde5b4d6_0|py37h2f8d375_0|py36hde5b4d6
    _0|py38h2f8d375_0|py27hde5b4d6_0|py39h76555f2_1|py38h41b4c56_3|py37h41b4c56_3|py37hdc34a94_3|py37hde5b4d6_0|py37hfa32c7d_0|py38hfa32c7d_0|py36h75fe3a5_0|py36h75fe3a5_0|py38hfa32c7d_0|py39h2ae0177_0|py37h7d8b39e_0|py38h7d8b39e_0|py38he2ba2
    47_0|py37h74d4b33_0|py39h74d4b33_0|py38h39b7dee_0|py39h39b7dee_0|py37h79a1101_0|py38h2b8c604_0|py310h79a1101_0|py310h2b8c604_0|py39h2b8c604_0|py37h2b8c604_0|py39h79a1101_0|py38h79a1101_0|py37h39b7dee_0|py38h74d4b33_0|py39hfae3a4d_0|py37he
    2ba247_0|py37hfae3a4d_0|py39he2ba247_0|py38hfae3a4d_0|py39h7d8b39e_0|py38h34387ca_0|py39h34387ca_0|py37h34387ca_0|py39h0f7b65f_0|py36hfa32c7d_0|py37h75fe3a5_0|py38h75fe3a5_0|py37hfa32c7d_0|py37h75fe3a5_0|py36hfa32c7d_0|py38h75fe3a5_0|py36
    hde5b4d6_0|py37h2f8d375_0|py36h2f8d375_0|py39hdc34a94_3|py38hdc34a94_3|py36hdc34a94_3|py36h41b4c56_3|py39h41b4c56_3|py39hfb011de_1|py27h2f8d375_0|py38hde5b4d6_0|py37hde5b4d6_0|py36h2f8d375_0|py37h2f8d375_0|py36hde5b4d6_0|py37hde5b4d6_0|py
    27hde5b4d6_0|py27h2f8d375_0|py36h2f8d375_0|py36hde5b4d6_0|py27hde5b4d6_0|py37h2f8d375_0|py36h2f8d375_0|py27h2f8d375_0|py27hde5b4d6_0|py37hde5b4d6_0|py27h2f8d375_0|py27hde5b4d6_0|py37hde5b4d6_0|py27h2f8d375_0|py36h2f8d375_0|py36hde5b4d6_1|
    py27hde5b4d6_0|py36h2f8d375_0|py27h2f8d375_0|py37h2f8d375_0|py27hde5b4d6_0|py36h2f8d375_0|py37hde5b4d6_0|py37h2f8d375_0|py27h2f8d375_0|py37hde5b4d6_0|py27hde5b4d6_0|py27h81de0dd_0|py37h81de0dd_0|py27h2f8d375_0|py36h2f8d375_0|py37h81de0dd_
    0|py36h81de0dd_0|py27h81de0dd_0|py37h2f8d375_0|py27h2f8d375_0|py27h81de0dd_1|py36h2f8d375_1|py36h81de0dd_0|py36h2f8d375_0|py35h2f8d375_0|py35h81de0dd_0|py27h2f8d375_0|py37h2f8d375_0|py37h2f8d375_0|py36h2f8d375_0|py27h2f8d375_0|py35h2f8d37
    5_0|py35h81de0dd_0|py37h81de0dd_0|py27h81de0dd_0|py27h74e8950_0|py35h74e8950_0|py35h3dfced4_0|py36h3dfced4_0|py27h3dfced4_0|py27h7cdd4dd_0|py37hde5b4d6_5|py27hde5b4d6_5|py27h2f8d375_5|py37h2f8d375_5|py36h2f8d375_5|py38hde5b4d6_4|py36h81de
    0dd_4|py35h81de0dd_4|py37h2f8d375_4|py35h2b20989_4|py35hdbf6ddf_4|py37h2b20989_4|py27h2b20989_3|py37h2b20989_3|py36h2b20989_3|py27hdbf6ddf_2|py37h2b20989_2|py27h2b20989_2|py37hdbf6ddf_1|py36h2b20989_1|py36hdbf6ddf_0|py36h2b20989_0|py35hdb
    f6ddf_0|py36hdbf6ddf_0|py35h2b20989_0|py36h2b20989_0|py27h2b20989_0|py35h0ea5e3f_1|py36h0ea5e3f_1|py37hde5b4d6_12|py27h2f8d375_12|py27hde5b4d6_11|py37hde5b4d6_11|py36h2f8d375_11|py37h2f8d375_11|py35h2f8d375_10|py27h2f8d375_10|py36h2f8d375
    _10|py35h81de0dd_10|py36h81de0dd_10|py37h81de0dd_10|py36h74e8950_10|py37h74e8950_10|py36h74e8950_9|py37h74e8950_9|py35h74e8950_9|py37h81de0dd_9|py36h81de0dd_9|py27h3dfced4_9|py36h3dfced4_9|py37h3dfced4_9|py37h7cdd4dd_9|py35h7cdd4dd_9|py35
    h2b20989_8|py35hdbf6ddf_8|py27hdbf6ddf_8|py27h2b20989_8|py37hdbf6ddf_8|py36hdbf6ddf_8|py36hdbf6ddf_7|py27h2b20989_7|py37h2b20989_7|py35hdbf6ddf_7|py36h2b20989_7|py36hdbf6ddf_7|py27hdbf6ddf_6|py37h2b20989_6|py36h2b20989_6|py27h2b20989_6']
    
    Package pytorch conflicts for:
    torchvision -> pytorch-gpu -> pytorch[version='1.6.0|1.7.1|1.8.0|1.9.0|1.9.1|1.10.0|...', build='...']   (long list of cpu_* and cuda* build strings elided for readability)
    pytorch
    torchvision -> pytorch[version='*|*|1.10.0|1.10.1|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0|1.5.1|1.5.0|1.4.0|1.3.1|1.3.0|1.2.0|1.2.0+cu92|>=1.1.0|>=1.0.0|>=0.4|>=0.3|>=1.8.0|>=1.8.0|1.7.1.*|1.3.1.*|1.2.0.*|1.1.*',build='cuda*|cuda*|cpu*|
    cpu*']
    
    Package libtiff conflicts for:
    opencv -> libopencv==4.5.5=py38hd60e7aa_0 -> libtiff[version='>=4.1.0,<5.0a0|>=4.2.0,<5.0a0|>=4.3.0,<5.0a0']
    opencv -> libtiff[version='4.0.*|>=4.0.10,<5.0a0|>=4.0.9,<5.0a0|>=4.0.8,<4.0.10|>=4.0.3,<4.0.8']
    
    Package gmp conflicts for:
    pytorch -> libgcc -> gmp[version='>=4.2']
    opencv -> ffmpeg=4.1 -> gmp[version='>=6.1.2|>=6.1.2,<7.0a0|>=6.2.1,<7.0a0|>=6.2.0,<7.0a0']
    torchvision -> ffmpeg[version='>=4.2'] -> gmp[version='>=6.1.2,<7.0a0|>=6.1.2|>=6.2.1,<7.0a0|>=6.2.0,<7.0a0']
    
    Package pypy3.6 conflicts for:
    cupy -> python[version='>=3.6,<3.7.0a0'] -> pypy3.6[version='7.3.*|7.3.0.*|7.3.1.*|7.3.2.*|7.3.3.*']
    cupy -> pypy3.6[version='>=7.3.1|>=7.3.2|>=7.3.3']

    The following specifications were found to be incompatible with your system:
    
      - feature:/linux-64::__glibc==2.17=0
      - feature:|@/linux-64::__glibc==2.17=0
      - cudatoolkit=11.3 -> __glibc[version='>=2.17,<3.0.a0']
      - cudatoolkit=11.3 -> libgcc-ng[version='>=9.3.0'] -> __glibc[version='>=2.17']
      - libjpeg-turbo -> libgcc-ng[version='>=9.3.0'] -> __glibc[version='>=2.17']
      - numba -> libgcc-ng[version='>=9.3.0'] -> __glibc[version='>=2.17']
      - opencv -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
      - pkg-config -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
      - python=3.9 -> libgcc-ng[version='>=9.3.0'] -> __glibc[version='>=2.17']
      - pytorch -> __glibc[version='>=2.17|>=2.17,<3.0.a0']
      - torchvision -> __glibc[version='>=2.17|>=2.17,<3.0.a0']
    
    Your installed version is: 2.17
    
    Note that strict channel priority may have removed packages required for satisfiability.
    
    opened by mbsariyildiz 11
  • FFCV Imagenet Image Quality vs. Native Imagenet Image Quality

    Hello! I see that FFCV offers a lot of options for the quality of the dataset, e.g. in the ImageNet example:

    # - 50% JPEG encoded, 50% raw pixel values
    # - quality=90 JPEGs
    ./write_dataset.sh 500 0.50 90
    

    One thing I'm curious about is the effect of these quality options on training results, as I'm interested in reproducing ImageNet results, just faster, with FFCV. What would the recommended settings be if you would like to reproduce native ImageNet quality precisely (barring the crop size)?
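    For reference, if the goal is to preserve native image quality as closely as possible, one option (a sketch, not an official recommendation; `my_dataset` and the output path are placeholders) is to skip JPEG re-encoding entirely by writing the dataset with write_mode='raw' on RGBImageField, at the cost of a much larger .beton file:

    from ffcv.writer import DatasetWriter
    from ffcv.fields import RGBImageField, IntField

    # Sketch: store every image as raw pixel values (no JPEG re-encoding).
    # `my_dataset` is a placeholder for an indexed (image, label) dataset.
    writer = DatasetWriter('/output/path/imagenet_raw.beton', {
        'image': RGBImageField(write_mode='raw', max_resolution=500),
        'label': IntField(),
    })
    writer.from_indexed_dataset(my_dataset)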

    opened by 1234gary 11
  • Is albumentations support useful?

    Hi, ffcv team! Thanks for your great work. ffcv is really exciting to me.

    Do you plan to improve its interoperability with other libraries? In particular, I want to use more complicated augmentations in the ffcv pipeline. So I tried to add a new Operation that wraps albumentations transforms. The code is below.

    https://github.com/ar90n/ffcv/commit/f53d49dcdb7084f9a05dd99d49a4925814a6172b

    It seems to work correctly except for integration with JIT-compiled code. If you are interested in my work, I would like to contribute it to ffcv. What do you think?
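    A minimal sketch of such a wrapper, assuming ffcv's Operation interface (declare_state_and_memory / generate_code, as used by the built-in transforms) and a shape-preserving albumentations transform, might look like this; it runs outside the JIT, so it is flexible but slower than a compiled Operation:

    from dataclasses import replace
    from typing import Callable, Optional, Tuple

    from ffcv.pipeline.operation import Operation
    from ffcv.pipeline.state import State
    from ffcv.pipeline.allocation_query import AllocationQuery


    class AlbumentationsWrapper(Operation):
        """Hypothetical wrapper applying an albumentations transform per sample."""

        def __init__(self, transform):
            super().__init__()
            self.transform = transform  # e.g. albumentations.Compose([...])

        def generate_code(self) -> Callable:
            transform = self.transform

            def apply_transform(images, _):
                # Assumes the transform preserves the (H, W, C) shape and dtype.
                for i in range(images.shape[0]):
                    images[i] = transform(image=images[i])['image']
                return images

            return apply_transform

        def declare_state_and_memory(
            self, previous_state: State
        ) -> Tuple[State, Optional[AllocationQuery]]:
            # Shape and dtype are unchanged; disable JIT since albumentations is plain Python.
            return replace(previous_state, jit_mode=False), None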

    opened by ar90n 10
  • can't pip install ffcv

    Hi, I have an existing conda environment that I'm trying to add ffcv to. Oddly enough, if I create a new conda environment with ffcv in it (via the conda create line given in the docs), I can import ffcv just fine. However, if I want to add the ffcv dependencies to an existing conda environment, ffcv won't install. Here's my workflow.

    (Note: I used pip instead of conda install because I got some conflicts with conda; see the full conda error further below.)

    • have an existing conda environment w/ python 3.8, cuda 10.2, torch 1.9, torchvision 0.10, numba
    • pip install open-cv, cupy, pkgconfig, compiler, pyturbojpeg (no libjpeg-turbo for pip available)
    • pip install ffcv

    Then I get this exception:

        ERROR: Command errored out with exit status 1:
         command: /home/miniconda3/envs/hippo-ffcv2/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-q89j7v53/ffcv/setup.py'"'"'; __file__='"'"'/tmp/pip-install-q89j7v53/ffcv/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-4qu3jg5l
             cwd: /tmp/pip-install-q89j7v53/ffcv/
        Complete output (7 lines):
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-install-q89j7v53/ffcv/setup.py", line 30, in <module>
            extension_kwargs = pkgconfig('opencv4', extension_kwargs)
          File "/tmp/pip-install-q89j7v53/ffcv/setup.py", line 18, in pkgconfig
            raise Exception()
        Exception
        ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    

    Note: I used pip because when I tried to use conda for (cupy, pkg-config, compilers, open-cv, etc), I get this error:

    Collecting package metadata (current_repodata.json): done
    Solving environment: | 
    The environment is inconsistent, please check the package plan carefully
    The following packages are causing the inconsistency:
    
      - pytorch/linux-64::torchvision==0.10.0=py38_cu111
      - pytorch/linux-64::torchaudio==0.9.0=py38
    

    After a long output of conflicts, it states this:

    Your installed version is: 2.27

    To get around this, I used pip and didn't have these conflicts. But again, I can't install ffcv. Does anybody have any idea what the issue might be?

    Update:

    I've also tried installing opencv from source via https://docs.opencv.org/4.x/d7/d9f/tutorial_linux_install.html#tutorial_linux_install_quick_build_core, but I still got the same exception above when trying to pip install ffcv.

    Update2:

    I cloned the ffcv repo, ran python setup.py, inspected the extension_kwargs variable, and got this:

    {'sources': ['./libffcv/libffcv.cpp'], 'include_dirs': ['/home/miniconda3/envs/ffcv_hippo2/include/opencv4', '/home/miniconda3/envs/ffcv_hippo2/include'], 'library_dirs': ['/home/miniconda3/envs/ffcv_hippo2/lib', '/home/miniconda3/envs/ffcv_hippo2/lib'], 'libraries': ['opencv_gapi', 'opencv_stitching', 'opencv_alphamat', 'opencv_aruco', 'opencv_barcode', 'opencv_bgsegm', 'opencv_bioinspired', 'opencv_ccalib', 'opencv_cvv', 'opencv_dnn_objdetect', 'opencv_dnn_superres', 'opencv_dpm', 'opencv_face', 'opencv_freetype', 'opencv_fuzzy', 'opencv_hdf', 'opencv_hfs', 'opencv_img_hash', 'opencv_intensity_transform', 'opencv_line_descriptor', 'opencv_mcc', 'opencv_quality', 'opencv_rapid', 'opencv_reg', 'opencv_rgbd', 'opencv_saliency', 'opencv_stereo', 'opencv_structured_light', 'opencv_phase_unwrapping', 'opencv_superres', 'opencv_optflow', 'opencv_surface_matching', 'opencv_tracking', 'opencv_highgui', 'opencv_datasets', 'opencv_text', 'opencv_plot', 'opencv_videostab', 'opencv_videoio', 'opencv_wechat_qrcode', 'opencv_xfeatures2d', 'opencv_shape', 'opencv_ml', 'opencv_ximgproc', 'opencv_video', 'opencv_dnn', 'opencv_xobjdetect', 'opencv_objdetect', 'opencv_calib3d', 'opencv_imgcodecs', 'opencv_features2d', 'opencv_flann', 'opencv_xphoto', 'opencv_photo', 'opencv_imgproc', 'opencv_core', 'turbojpeg', 'pthread']}
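
    Since ffcv's setup.py locates OpenCV through pkg-config (as the traceback above shows), one quick sanity check, a sketch rather than an official diagnostic, is to confirm that pkg-config can resolve the opencv4 package inside the target environment before running pip install ffcv:

    # Hypothetical sanity check: verify that `pkg-config` can resolve 'opencv4'
    # the way ffcv's setup.py expects.
    import subprocess

    for flag in ("--modversion", "--cflags", "--libs"):
        result = subprocess.run(["pkg-config", flag, "opencv4"],
                                capture_output=True, text=True)
        if result.returncode != 0:
            raise SystemExit(f"pkg-config {flag} opencv4 failed: {result.stderr.strip()}")
        print(flag, result.stdout.strip())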
    
    opened by exnx 9
  • pip install ffcv error

    (ffcv) $ pip install ffcv
    Collecting ffcv
      Using cached ffcv-0.0.2.tar.gz (53 kB)
      Preparing metadata (setup.py) ... error
      ERROR: Command errored out with exit status 1:
       command: /XXX/ffcv/bin/python3.9 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-j9vpaf6i/ffcv_c2eb5c5207c34d81ac29a3ccddbb59fb/setup.py'"'"'; __file__='"'"'/tmp/pip-install-j9vpaf6i/ffcv_c2eb5c5207c34d81ac29a3ccddbb59fb/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-y1669tep
           cwd: /tmp/pip-install-j9vpaf6i/ffcv_c2eb5c5207c34d81ac29a3ccddbb59fb/
      Complete output (5 lines):
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/tmp/pip-install-j9vpaf6i/ffcv_c2eb5c5207c34d81ac29a3ccddbb59fb/setup.py", line 36, in <module>
          libffcv = Extension('ffcv._libffcv',
      TypeError: keywords must be strings

    WARNING: Discarding https://files.pythonhosted.org/packages/4f/55/9b06a72c29710110387c3af33eb1a9d6e5d9d5781d5b2850a3ad202942d2/ffcv-0.0.1.tar.gz#sha256=5246dbbbc0a3bcf788783d413292d8bede455a08ad0fa9c29736eca85b577b26 (from https://pypi.org/simple/ffcv/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    ERROR: Could not find a version that satisfies the requirement ffcv (from versions: 0.0.1, 0.0.2)
    ERROR: No matching distribution found for ffcv

    opened by geekchen007 9
  • Error when installing ffcv using the guidance command

    Hi, I am trying to install ffcv on my server. Here are the server settings.

    Platform: amd-linux
    GPU: 1x RTX 2080 Ti
    

    When I used the default command to create and set up the environment, I could not move data to the GPU and an error occurred. Here is the error:

    Torch not compiled with CUDA enabled
    

    I have installed the GPU driver on my server. Could you please tell me why this happens?
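    For what it's worth, "Torch not compiled with CUDA enabled" usually means the resolver picked a CPU-only PyTorch build; a minimal check (hypothetical, not from the ffcv docs) is:

    # Check whether the installed PyTorch build ships CUDA support at all.
    import torch

    print(torch.__version__)          # a CPU-only pip build often ends in '+cpu'
    print(torch.version.cuda)         # None for a CPU-only build
    print(torch.cuda.is_available())  # False if CUDA support or a visible GPU is missing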

    opened by HaoKang-Timmy 0
  • Unexpected pipeline behavior with NDArrayField

    I don't think this is a bug, but as a new user it surprised me; perhaps it's documented, but if not, it probably should be. I define a dataset like so:

    fshape = (1,31,32)
    writer = DatasetWriter(write_path, {
        'data': NDArrayField(shape=fshape, dtype=np.dtype('float64')),
        'target': NDArrayField(shape=fshape, dtype=np.dtype('float64')),
        'vol': NDArrayField(shape=(1,fshape[-1]), dtype=np.dtype('float64')),
        'temp': NDArrayField(shape=(1,), dtype=np.dtype('float64')),
    
    }, num_workers=64) 
    

    After writing the .beton file, I at first tried creating a loader using the same pipeline object for every field:

    float_pipeline = [NDArrayDecoder(), ToTensor()]
    
    # Pipeline for each data field
    pipelines = {
        'data': float_pipeline,
        'target': float_pipeline,
        'vol': float_pipeline,
        'temp': float_pipeline
    }       
    
    loader = Loader(ffcv_file, batch_size=64, num_workers=8,
                    order=OrderOption.RANDOM, pipelines=pipelines)
    data,target,vol,temp = next(iter(loader))
    

    However, all of the variables come out with the shape of the smallest array, in this case temp (i.e. data has shape (Nbatch, 1) where it should be (Nbatch, 1, 31, 32)).

    When I create a separate pipeline for each variable that has a different size, things come out correctly:

    float_pipeline = [NDArrayDecoder(), ToTensor()]
    vol_pipeline = [NDArrayDecoder(), ToTensor()]
    T_pipeline = [NDArrayDecoder(), ToTensor()]
    
    # Pipeline for each data field
    pipelines = {
        'data': float_pipeline,
        'target': float_pipeline,
        'vol': vol_pipeline,
        'temp': T_pipeline
    }      
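
    A likely explanation (an assumption on my part, not confirmed documentation) is that reusing one pipeline list shares the same Operation instances across fields, so their compiled output shape is taken from only one field. Building a fresh decoder/operation list per field avoids this, for example:

    from ffcv.fields.decoders import NDArrayDecoder
    from ffcv.transforms import ToTensor

    # Build a fresh pipeline (new Operation instances) for every field so that
    # no decoder is shared between fields of different shapes.
    def make_float_pipeline():
        return [NDArrayDecoder(), ToTensor()]

    pipelines = {name: make_float_pipeline()
                 for name in ('data', 'target', 'vol', 'temp')}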
    
    opened by rmchurch 0
  • Getting errors on FFCV import on macOS with an M1 chip

    Traceback (most recent call last):
      File "/Users/levon_/projects/course_work/run_benchmarks.py", line 2, in <module>
        from runners.run import run_config
      File "/Users/levon_/projects/course_work/runners/run.py", line 3, in <module>
        from runners.run_ffcv import run_ffcv
      File "/Users/levon_/projects/course_work/runners/run_ffcv.py", line 7, in <module>
        from ingestor.ffcv_ingestor import FFCVIngestor
      File "/Users/levon_/projects/course_work/ingestor/ffcv_ingestor.py", line 6, in <module>
        from ffcv.fields import BytesField, IntField, FloatField, RGBImageField
      File "/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/site-packages/ffcv/__init__.py", line 1, in <module>
        from .loader import Loader
      File "/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/site-packages/ffcv/loader/__init__.py", line 1, in <module>
        from .loader import Loader, OrderOption
      File "/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/site-packages/ffcv/loader/loader.py", line 12, in <module>
        from ffcv.fields.base import Field
      File "/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/site-packages/ffcv/fields/__init__.py", line 3, in <module>
        from .rgb_image import RGBImageField
      File "/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/site-packages/ffcv/fields/rgb_image.py", line 15, in <module>
        from ..libffcv import imdecode, memcpy, resize_crop
      File "/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/site-packages/ffcv/libffcv.py", line 8, in <module>
        libc = cdll.LoadLibrary('libc.so.6')
      File "/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/ctypes/__init__.py", line 460, in LoadLibrary
        return self._dlltype(name)
      File "/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/ctypes/__init__.py", line 382, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: dlopen(libc.so.6, 0x0006): tried: '/libc.so.6' (no such file), '/Users/levon_/Downloads/boost_1_78_0/stage/lib/libc.so.6' (no such file), '/Users/levon_/opt/anaconda3/envs/ffcv/lib/python3.9/lib-dynload/../../libc.so.6' (no such file), '/Users/levon_/opt/anaconda3/envs/ffcv/bin/../lib/libc.so.6' (no such file), 'libc.so.6' (no such file), '/usr/local/lib/libc.so.6' (no such file), '/usr/lib/libc.so.6' (no such file), '/libc.so.6' (no such file), '/Users/levon_/Downloads/boost_1_78_0/stage/lib/libc.so.6' (no such file), '/Users/levon_/projects/course_work/libc.so.6' (no such file), '/usr/local/lib/libc.so.6' (no such file), '/usr/lib/libc.so.6' (no such file)
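
    The failure comes from ffcv/libffcv.py loading the Linux C library by its hard-coded name libc.so.6, which does not exist on macOS. A platform-aware lookup would look roughly like the sketch below; note this alone does not mean the rest of ffcv's native extension works on Apple Silicon:

    # Sketch: locate the C library in a platform-aware way instead of
    # hard-coding the Linux-only name 'libc.so.6'.
    import ctypes
    from ctypes.util import find_library

    libc_name = find_library('c')   # 'libc.so.6' on Linux, '/usr/lib/libc.dylib' on macOS
    libc = ctypes.CDLL(libc_name)
    print(libc_name, libc)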
    
    opened by levongh 0
  • Unable to reproduce the results using the official training script

    Hello,

    Unfortunately, I have been unable to reproduce the reported results despite a lot of effort.

    Installation

    conda create -y -n ffcv python=3.9 cupy pkg-config compilers libjpeg-turbo opencv pytorch torchvision cudatoolkit=11.3 numba -c pytorch -c conda-forge
    

    As PyTorch 1.13 has been released and only supports CUDA 11.6 and 11.7, the above installed the CPU version of PyTorch 1.13. Thus I had to reinstall PyTorch 1.12.1 for compatibility with CUDA 11.3:

    conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
    

    And installation of dependencies:

    conda activate ffcv
    pip install torchmetrics
    

    Create dataset

    git clone https://github.com/libffcv/ffcv-imagenet.git
    cd ffcv-imagenet
    export IMAGENET_DIR=$HOME/data/imagenet
    export WRITE_DIR=$HOME/data/imagenet_ffcv/jpg50
    bash write_imagenet.sh 500 0.50 90
    

    Results:

    (ffcv) bash write_imagenet.sh 500 0.50 90
    Writing ImageNet train dataset to /home/data/imagenet_ffcv/jpg50/train_500_0.50_90.ffcv
    ┌ Arguments defined────────┬─────────────────────────────────────────────────────────────────────────────────┐
    │ Parameter                │ Value                                                                           │
    ├──────────────────────────┼─────────────────────────────────────────────────────────────────────────────────┤
    │ cfg.dataset              │ imagenet                                                                        │
    │ cfg.split                │ train                                                                           │
    │ cfg.data_dir             │ /home/data/imagenet/train                                                       │
    │ cfg.write_path           │ /home/data/imagenet_ffcv/jpg50/train_500_0.50_90.ffcv                           │
    │ cfg.write_mode           │ jpg                                                                             │
    │ cfg.max_resolution       │ 500                                                                             │
    │ cfg.num_workers          │ 64                                                                              │
    │ cfg.chunk_size           │ 100                                                                             │
    │ cfg.jpeg_quality         │ 90.0                                                                            │
    │ cfg.subset               │ -1                                                                              │
    │ cfg.compress_probability │ 0.5                                                                             │
    └──────────────────────────┴─────────────────────────────────────────────────────────────────────────────────┘
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1281167/1281167 [3:38:32<00:00, 97.70it/s]
    Writing ImageNet val dataset to /home/data/imagenet_ffcv/jpg50/val_500_0.50_90.ffcv
    ┌ Arguments defined────────┬───────────────────────────────────────────────────────────────────────────────┐
    │ Parameter                │ Value                                                                         │
    ├──────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤
    │ cfg.dataset              │ imagenet                                                                      │
    │ cfg.split                │ val                                                                           │
    │ cfg.data_dir             │ /home/data/imagenet/val                                                       │
    │ cfg.write_path           │ /home/data/imagenet_ffcv/jpg50/val_500_0.50_90.ffcv                           │
    │ cfg.write_mode           │ jpg                                                                           │
    │ cfg.max_resolution       │ 500                                                                           │
    │ cfg.num_workers          │ 64                                                                            │
    │ cfg.chunk_size           │ 100                                                                           │
    │ cfg.jpeg_quality         │ 90.0                                                                          │
    │ cfg.subset               │ -1                                                                            │
    │ cfg.compress_probability │ 0.5                                                                           │
    └──────────────────────────┴───────────────────────────────────────────────────────────────────────────────┘
    100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50000/50000 [09:28<00:00, 87.95it/s]
    

    Training

    I tried ResNet-50 for 16 epochs and expected training to complete in about 15 minutes, as reported.

    (ffcv) python train_imagenet.py --config-file rn50_configs/rn50_16_epochs.yaml \                                                                                                                 
    >     --data.train_dataset=$HOME/data/imagenet_ffcv/jpg50/train_500_0.50_90.ffcv \                                                                                                               
    >     --data.val_dataset=$HOME/data/imagenet_ffcv/jpg50/val_500_0.50_90.ffcv \                                                                                                                   
    >     --data.num_workers=8 --data.in_memory=1 \                                                                                                                                                  
    >     --logging.folder=$HOME/experiments/ffcv                                                                                                                                                    
    ┌ Arguments defined────────┬─────────────────────────────────────────────────────────────────────────────────┐                                                                                   
    │ Parameter                │ Value                                                                           │                                                                                   
    ├──────────────────────────┼─────────────────────────────────────────────────────────────────────────────────┤                                                                                   
    │ model.arch               │ resnet50                                                                        │                                                                                   
    │ model.pretrained         │ 0                                                                               │                                                                                   
    │ resolution.min_res       │ 160                                                                             │                                                                                   
    │ resolution.max_res       │ 192                                                                             │                                                                                   
    │ resolution.end_ramp      │ 13                                                                              │                                                                                   
    │ resolution.start_ramp    │ 11                                                                              │                                                                                   
    │ data.train_dataset       │ /home/data/imagenet_ffcv/jpg50/train_500_0.50_90.ffcv                           │                                                                                   
    │ data.val_dataset         │ /home/data/imagenet_ffcv/jpg50/val_500_0.50_90.ffcv                             │                                                                                   
    │ data.num_workers         │ 8                                                                               │                                                                                   
    │ data.in_memory           │ 1                                                                               │                                                                                   
    │ lr.step_ratio            │ 0.1                                                                             │                                                                                   
    │ lr.step_length           │ 30                                                                              │                                                                                   
    │ lr.lr_schedule_type      │ cyclic                                                                          │                                                                                   
    │ lr.lr                    │ 1.7                                                                             │                                                                                   
    │ lr.lr_peak_epoch         │ 2                                                                               │                                                                                   
    │ logging.folder           │ /home/experiments/ffcv                                                          │                                                                                   
    │ logging.log_level        │ 1                                                                               │                                                                                   
    │ validation.batch_size    │ 512                                                                             │                                                                                   
    │ validation.resolution    │ 256                                                                             │                                                                                   
    │ validation.lr_tta        │ 1                                                                               │                                                                                   
    │ training.eval_only       │ 0                                                                               │                                                                                   
    │ training.batch_size      │ 512                                                                             │                                                                                   
    │ training.optimizer       │ sgd                                                                             │                                                                                   
    │ training.momentum        │ 0.9                                                                             │                                                                                   
    │ training.weight_decay    │ 0.0001                                                                          │                                                                                   
    │ training.epochs          │ 16                                                                              │                                                                                   
    │ training.label_smoothing │ 0.1                                                                             │                                                                                   
    │ training.distributed     │ 1                                                                               │                                                                                   
    │ training.use_blurpool    │ 1                                                                               │                                                                                   
    │ dist.world_size          │ 8                                                                               │                                                                                   
    │ dist.address             │ localhost                                                                       │                                                                                   
    │ dist.port                │ 12355                                                                           │                                                                                   
    └──────────────────────────┴─────────────────────────────────────────────────────────────────────────────────┘                                                                                   
    Warning: no ordering seed was specified with distributed=True. Setting seed to 0 to match PyTorch distributed sampler.Warning: no ordering seed was specified with distributed=True. Setting seed to 0 to match PyTorch distributed sampler.Warning: no ordering seed was specified with distributed=True. Setting seed to 0 to match PyTorch distributed sampler.Warning: no ordering seed was specified with distributed=True. Setting seed to 0 to match PyTorch distributed sampler.Warning: no ordering seed was specified with distributed=True. Setting seed to 0 to match PyTorch distributed sampler.                                                                                                                                                                                      
    Warning: no ordering seed was specified with distributed=True. Setting seed to 0 to match PyTorch distributed sampler.  
    Warning: no ordering seed was specified with distributed=True. Setting seed to 0 to match PyTorch distributed sampler.
    Warning: no ordering seed was specified with distributed=True. Setting seed to 0 to match PyTorch distributed sampler.
    /home/.conda/envs/ffcv/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
    /home/.conda/envs/ffcv/lib/python3.9/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called 
    `full_state_update` that has
                    not been set for this class (MeanScalarMetric). The property determines if `update` by
                    default needs access to the full metric state. If this is not the case, significant speedups can be
                    achieved and we recommend setting this to `False`.
                    We provide an checking function
                    `from torchmetrics.utilities import check_forward_full_state_property`
                    that can be used to check if the `full_state_update=True` (old and potential slower behaviour,
                    default for now) or if `full_state_update=False` can be used safely.
    => Logging in /home/experiments/ffcv/69f9f7b3-1f39-48b6-be91-2242639ef094
    ep=0, iter=311, shape=(512, 3, 160, 160), lrs=['0.847', '0.847']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [28:08<00:00,  5.41s/it]
    ep=0, iter=311, shape=(512, 3, 160, 160), lrs=['0.847', '0.847']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [28:08<00:00,  5.41s/it]
    ep=0, iter=311, shape=(512, 3, 160, 160), lrs=['0.847', '0.847']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [28:08<00:00,  5.41s/it]
    ep=0, iter=311, shape=(512, 3, 160, 160), lrs=['0.847', '0.847']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [28:08<00:00,  5.41s/it]
    ep=0, iter=311, shape=(512, 3, 160, 160), lrs=['0.847', '0.847']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [28:08<00:00,  5.41s/it]
    ep=0, iter=311, shape=(512, 3, 160, 160), lrs=['0.847', '0.847']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [28:08<00:00,  4.53s/it]
    ep=0, iter=311, shape=(512, 3, 160, 160), lrs=['0.847', '0.847']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [28:08<00:00,  5.41s/it]
    ep=0, iter=311, shape=(512, 3, 160, 160), lrs=['0.847', '0.847']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [28:08<00:00,  5.41s/it]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:17<00:00,  1.31s/it]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:17<00:00,  1.31s/it]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:17<00:00,  1.26it/s]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:17<00:00,  1.31s/it]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:17<00:00,  1.31s/it]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:17<00:00,  1.31s/it]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:17<00:00,  1.31s/it]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:17<00:00,  1.31s/it]
    => Log: {'current_lr': 0.8473609134615385, 'top_1': 0.06341999769210815, 'top_5': 0.17452000081539154, 'val_time': 17.07068157196045, 'train_loss': None, 'epoch': 0}                            
    ep=1, iter=311, shape=(512, 3, 160, 160), lrs=['1.697', '1.697']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [06:18<00:00,  1.21s/it]
    ep=1, iter=311, shape=(512, 3, 160, 160), lrs=['1.697', '1.697']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [06:18<00:00,  1.21s/it]
    ep=1, iter=311, shape=(512, 3, 160, 160), lrs=['1.697', '1.697']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [06:18<00:00,  1.21s/it]
    ep=1, iter=311, shape=(512, 3, 160, 160), lrs=['1.697', '1.697']: 100%|███████████████████████████████████████████████████████████████████████████████████████▋| 311/312 [06:18<00:01,  1.50s/it]
    ep=1, iter=311, shape=(512, 3, 160, 160), lrs=['1.697', '1.697']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [06:18<00:00,  1.21s/it]
    ep=1, iter=311, shape=(512, 3, 160, 160), lrs=['1.697', '1.697']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [06:18<00:00,  1.21s/it]
    ep=1, iter=311, shape=(512, 3, 160, 160), lrs=['1.697', '1.697']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [06:18<00:00,  1.21s/it]
    ep=1, iter=311, shape=(512, 3, 160, 160), lrs=['1.697', '1.697']: 100%|████████████████████████████████████████████████████████████████████████████████████████| 312/312 [06:18<00:00,  1.21s/it]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:12<00:00,  1.03it/s]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:12<00:00,  1.03it/s]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:12<00:00,  1.02it/s]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:12<00:00,  1.02it/s]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:12<00:00,  1.02it/s]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:12<00:00,  1.02it/s]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:12<00:00,  1.02it/s]
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:12<00:00,  1.01it/s]
    => Log: {'current_lr': 1.6972759134615385, 'top_1': 0.14985999464988708, 'top_5': 0.35117998719215393, 'val_time': 12.889710664749146, 'train_loss': None, 'epoch': 1}
    

    As you can see, epoch 0 alone took 28 minutes to complete (subsequent epochs took a bit over 6 minutes each, which is still far too slow). I checked GPU utilization using nvidia-smi: all GPUs were in use (and clearly under-utilized):

    $ nvidia-smi
    Sun Nov 13 00:40:48 2022       
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 515.65.07    Driver Version: 515.65.07    CUDA Version: 11.7     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA A100-SXM...  On   | 00000000:07:00.0 Off |                    0 |
    | N/A   41C    P0   160W / 400W |  22562MiB / 81920MiB |    100%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
    |   1  NVIDIA A100-SXM...  On   | 00000000:0B:00.0 Off |                    0 |
    | N/A   43C    P0   160W / 400W |  16599MiB / 81920MiB |    100%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
    |   2  NVIDIA A100-SXM...  On   | 00000000:48:00.0 Off |                    0 |
    | N/A   37C    P0   346W / 400W |  16705MiB / 81920MiB |    100%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
    |   3  NVIDIA A100-SXM...  On   | 00000000:4C:00.0 Off |                    0 |
    | N/A   40C    P0    94W / 400W |  16599MiB / 81920MiB |     74%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
    |   4  NVIDIA A100-SXM...  On   | 00000000:88:00.0 Off |                    0 |
    | N/A   34C    P0    77W / 400W |  16599MiB / 81920MiB |    100%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
    |   5  NVIDIA A100-SXM...  On   | 00000000:8B:00.0 Off |                    0 |
    | N/A   32C    P0    67W / 400W |  16599MiB / 81920MiB |     47%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
    |   6  NVIDIA A100-SXM...  On   | 00000000:C8:00.0 Off |                    0 |
    | N/A   33C    P0    83W / 400W |  16599MiB / 81920MiB |     23%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
    |   7  NVIDIA A100-SXM...  On   | 00000000:CB:00.0 Off |                    0 |
    | N/A   32C    P0    79W / 400W |  16455MiB / 81920MiB |      1%      Default |
    |                               |                      |             Disabled |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |    0   N/A  N/A    806252      C   ...onda/envs/ffcv/bin/python    16449MiB |
    |    0   N/A  N/A    806253      C   ...onda/envs/ffcv/bin/python      867MiB |
    |    0   N/A  N/A    806254      C   ...onda/envs/ffcv/bin/python      867MiB |
    |    0   N/A  N/A    806255      C   ...onda/envs/ffcv/bin/python      867MiB |
    |    0   N/A  N/A    806256      C   ...onda/envs/ffcv/bin/python      867MiB |
    |    0   N/A  N/A    806257      C   ...onda/envs/ffcv/bin/python      867MiB |
    |    0   N/A  N/A    806258      C   ...onda/envs/ffcv/bin/python      867MiB |
    |    0   N/A  N/A    806259      C   ...onda/envs/ffcv/bin/python      867MiB |
    |    1   N/A  N/A    806253      C   ...onda/envs/ffcv/bin/python    16593MiB |
    |    2   N/A  N/A     18785      G   /usr/libexec/Xorg                  63MiB |
    |    2   N/A  N/A     18848      G   /usr/bin/gnome-shell               41MiB |
    |    2   N/A  N/A    806254      C   ...onda/envs/ffcv/bin/python    16593MiB |
    |    3   N/A  N/A    806255      C   ...onda/envs/ffcv/bin/python    16593MiB |
    |    4   N/A  N/A    806256      C   ...onda/envs/ffcv/bin/python    16593MiB |
    |    5   N/A  N/A    806257      C   ...onda/envs/ffcv/bin/python    16593MiB |
    |    6   N/A  N/A    806258      C   ...onda/envs/ffcv/bin/python    16593MiB |
    |    7   N/A  N/A    806259      C   ...onda/envs/ffcv/bin/python    16449MiB |
    +-----------------------------------------------------------------------------+
    

    My server has 8 NVIDIA A100 GPUs (SXM4, 80 GB) and 512 GB of RAM, which is comparable to what was used in your experiments.

    What should I check to see what went wrong? Could you please try reproducing the above on your side? I guess this would take only a few minutes (the write_imagenet step takes a lot of time, but you already have the files, so it can be skipped).

    Thank you very much in advance for your response!
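
    One way to narrow this down (a hypothetical benchmark, not from the repo) is to time the ffcv Loader on its own, without the model, to see whether data loading or the GPUs are the bottleneck; something along these lines, assuming the same .beton path:

    import time
    from ffcv.loader import Loader, OrderOption
    from ffcv.fields.decoders import IntDecoder, RandomResizedCropRGBImageDecoder
    from ffcv.transforms import ToTensor

    pipelines = {
        'image': [RandomResizedCropRGBImageDecoder((176, 176)), ToTensor()],
        'label': [IntDecoder(), ToTensor()],
    }

    # os_cache=True keeps the .beton pages in RAM, mirroring --data.in_memory=1.
    loader = Loader('/home/data/imagenet_ffcv/jpg50/train_500_0.50_90.ffcv',
                    batch_size=512, num_workers=8, os_cache=True,
                    order=OrderOption.RANDOM, pipelines=pipelines)

    start, n = time.time(), 0
    for images, labels in loader:
        n += images.shape[0]
    print(f'{n / (time.time() - start):.0f} images/s (data loading only)')

    If the loader alone falls far short of the expected throughput, raising num_workers (the run above used only 8) would be an obvious first knob to try.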

    opened by netw0rkf10w 1
  • Merging two FFCV .beton datasets

    Hi

    Let's say we have two datasets written to two FFCV dataset files, say dataset1.beton and dataset2.beton. Assume both datasets have the exact same image and label format and were written using the same method. Is it possible to take these two .beton dataset files and merge them into a single merged-dataset.beton FFCV dataset file?

    This would be very useful when we want to steadily increase the size of the training dataset through continuous data collection.

    Thanks!
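
    As far as I can tell, ffcv does not document a tool for merging .beton files directly; one workaround (a sketch under that assumption; make_dataset1/make_dataset2 and the field options are placeholders) is to re-serialize the concatenation of the two original indexed datasets into a new .beton:

    from torch.utils.data import ConcatDataset
    from ffcv.writer import DatasetWriter
    from ffcv.fields import RGBImageField, IntField

    # Placeholders: the two original indexed (image, label) datasets that were
    # used to write dataset1.beton and dataset2.beton.
    merged = ConcatDataset([make_dataset1(), make_dataset2()])

    writer = DatasetWriter('/output/path/merged-dataset.beton', {
        'image': RGBImageField(max_resolution=256),
        'label': IntField(),
    })
    writer.from_indexed_dataset(merged)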

    opened by manideep2510 0
Releases: v0.0.3