OpenMMLab Image Classification Toolbox and Benchmark

Overview


Introduction

English | 简体中文

MMClassification is an open source image classification toolbox based on PyTorch. It is a part of the OpenMMLab project.

Documentation: https://mmclassification.readthedocs.io/en/latest/


Major features

  • Various backbones and pretrained models
  • Bag of training tricks
  • Large-scale training configs
  • High efficiency and extensibility

License

This project is released under the Apache 2.0 license.

Changelog

v0.17.0 was released on 29/10/2021.

Highlights of the new version:

  • Support Tokens-to-Token ViT backbone and Res2Net backbone. Welcome to use!
  • Support ImageNet21k dataset.
  • Add a pipeline visualization tool. Try it with the tutorials!

Please refer to changelog.md for more details and other release history.

Benchmark and model zoo

Results and models are available in the model zoo.

Supported backbones:

  • VGG
  • ResNet
  • ResNeXt
  • SE-ResNet
  • SE-ResNeXt
  • RegNet
  • ShuffleNetV1
  • ShuffleNetV2
  • MobileNetV2
  • MobileNetV3
  • Swin-Transformer
  • RepVGG
  • Vision-Transformer
  • Transformer-in-Transformer
  • Res2Net

Installation

Please refer to install.md for installation and dataset preparation.

Getting Started

Please see getting_started.md for the basic usage of MMClassification. There are also tutorials:

Colab tutorials are also provided. To learn about the MMClassification Python API, you may preview the notebook here or directly run it on Colab. To learn about the MMClassification shell tools, you may preview the notebook here or directly run it on Colab.

Citation

If you find this project useful in your research, please consider citing:

@misc{2020mmclassification,
    title={OpenMMLab's Image Classification Toolbox and Benchmark},
    author={MMClassification Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmclassification}},
    year={2020}
}

Contributing

We appreciate all contributions to improve MMClassification. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

MMClassification is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new classifiers.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM Installs OpenMMLab Packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMOCR: OpenMMLab toolbox for text detection, recognition and understanding.
  • MMGeneration: OpenMMLab toolkit for generative models.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
Comments
  • Display acc and val_acc on the Plot Curves

    Describe the feature

    How can I plot val_acc on the graph with tools/analysis_tools/analyze_logs.py?

    Motivation

    Hi, I am studying MMClassification. When I want to plot the curves and compare acc and val_acc, I cannot find anything like val_acc to display.

    Related resources

    The tutorial only shows the top-1 and top-5 accuracy; so far I have not found out how to show val_acc: https://mmclassification.readthedocs.io/en/latest/tools/analysis.html

    Additional context

    Usually, with TensorFlow or similar libraries, we can plot a graph comparing the two results.
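
    Until analyze_logs.py exposes a validation-accuracy key directly, one workaround is to parse the JSON log yourself. Below is a minimal sketch, assuming the default JSON log written during training (each line is a JSON dict with a "mode" field, where validation lines carry an "accuracy_top-1" key); the log path is a placeholder.

    # Minimal sketch: plot validation top-1 accuracy from a .log.json file.
    # Assumes the default log format; the path below is a placeholder.
    import json
    import matplotlib.pyplot as plt

    epochs, val_acc = [], []
    with open('work_dirs/my_exp/20211029_000000.log.json') as f:
        for line in f:
            entry = json.loads(line)
            if entry.get('mode') == 'val' and 'accuracy_top-1' in entry:
                epochs.append(entry['epoch'])
                val_acc.append(entry['accuracy_top-1'])

    plt.plot(epochs, val_acc, label='val accuracy (top-1)')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()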

    opened by TranTriDat 21
  • Inference results don't match test results

    Checklist

    • I have searched related issues but cannot get the expected help.
    • I have read related documents and don't know what to do.

    Describe the question you meet

    Hello, I trained ResNet-101 on my custom dataset, then tested the model with the official test tool on my test set and got an accuracy of 91.44%. However, when I try to run inference on the test-set images one by one, the model cannot classify them correctly: almost all images are misclassified. My dataset has 3 labels, '3h', '5h' and '8h', and the model classifies everything as '8h'. But when I reload the model, it might predict everything as '5h'.

    As a control, I tried different models on different datasets and the issue persisted. Which result should I believe? Thank you!

    Post related information

    1. The output of pip list | grep "mmcv\|mmcls\|^torch"

    mmcls 0.18.0
    mmcv-full 1.4.0
    torch 1.8.1
    torchaudio 0.8.0a0+e4e171a
    torchvision 0.9.1

    2. Your config file if you modified it or created a new one.
    model = dict(
        type='ImageClassifier',
        backbone=dict(
            type='ResNet',
            depth=101,
            num_stages=4,
            out_indices=(3, ),
            style='pytorch'),
        neck=dict(type='GlobalAveragePooling'),
        head=dict(
            type='LinearClsHead',
            num_classes=3,
            in_channels=2048,
            loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
            topk=(1, 5)))
    dataset_type = 'ImageNet'
    img_norm_cfg = dict(
        mean=[420.57, 683.84, 1104.41], std=[100.3, 50.3, 150.6], to_rgb=True)
    train_pipeline = [
        dict(type='LoadImageFromFile', color_type='unchanged', to_float32=True),
        dict(type='RandomResizedCrop', size=224),
        dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
        dict(
            type='Normalize',
            mean=[420.57, 683.84, 1104.41],
            std=[100.3, 50.3, 150.6],
            to_rgb=True),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='ToTensor', keys=['gt_label']),
        dict(type='Collect', keys=['img', 'gt_label'])
    ]
    test_pipeline = [
        dict(type='LoadImageFromFile', color_type='unchanged', to_float32=True),
        dict(type='Resize', size=(256, -1)),
        dict(type='CenterCrop', crop_size=224),
        dict(
            type='Normalize',
            mean=[420.57, 683.84, 1104.41],
            std=[100.3, 50.3, 150.6],
            to_rgb=True),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='Collect', keys=['img'])
    ]
    data = dict(
        samples_per_gpu=32,
        workers_per_gpu=4,
        train=dict(
            type='ImageNet',
            data_prefix='/mnt/data/processed/microglia/ImageNet_merge/train',
            pipeline=[
                dict(
                    type='LoadImageFromFile',
                    color_type='unchanged',
                    to_float32=True),
                dict(type='RandomResizedCrop', size=224),
                dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
                dict(
                    type='Normalize',
                    mean=[420.57, 683.84, 1104.41],
                    std=[100.3, 50.3, 150.6],
                    to_rgb=True),
                dict(type='ImageToTensor', keys=['img']),
                dict(type='ToTensor', keys=['gt_label']),
                dict(type='Collect', keys=['img', 'gt_label'])
            ],
            classes='/mnt/data/processed/microglia/ImageNet_merge/classes.txt'),
        val=dict(
            type='ImageNet',
            data_prefix='/mnt/data/processed/microglia/ImageNet_merge/test',
            ann_file='/mnt/data/processed/microglia/ImageNet_merge/test/test.txt',
            pipeline=[
                dict(
                    type='LoadImageFromFile',
                    color_type='unchanged',
                    to_float32=True),
                dict(type='Resize', size=(256, -1)),
                dict(type='CenterCrop', crop_size=224),
                dict(
                    type='Normalize',
                    mean=[420.57, 683.84, 1104.41],
                    std=[100.3, 50.3, 150.6],
                    to_rgb=True),
                dict(type='ImageToTensor', keys=['img']),
                dict(type='Collect', keys=['img'])
            ],
            classes='/mnt/data/processed/microglia/ImageNet_merge/classes.txt'),
        test=dict(
            type='ImageNet',
            data_prefix='/mnt/data/processed/microglia/ImageNet_merge/test',
            ann_file='/mnt/data/processed/microglia/ImageNet_merge/test/test.txt',
            pipeline=[
                dict(
                    type='LoadImageFromFile',
                    color_type='unchanged',
                    to_float32=True),
                dict(type='Resize', size=(256, -1)),
                dict(type='CenterCrop', crop_size=224),
                dict(
                    type='Normalize',
                    mean=[420.57, 683.84, 1104.41],
                    std=[100.3, 50.3, 150.6],
                    to_rgb=True),
                dict(type='ImageToTensor', keys=['img']),
                dict(type='Collect', keys=['img'])
            ],
            classes='/mnt/data/processed/microglia/ImageNet_merge/classes.txt'))
    evaluation = dict(interval=10, metric='accuracy', metric_options=dict(topk=1))
    optimizer = dict(type='Adam', lr=0.0001, weight_decay=0.001, amsgrad=False)
    optimizer_config = dict(grad_clip=None)
    lr_config = dict(policy='step', step=[20, 50, 70], gamma=0.1)
    runner = dict(type='EpochBasedRunner', max_epochs=100)
    checkpoint_config = dict(interval=10)
    log_config = dict(interval=100, hooks=[dict(type='TextLoggerHook')])
    dist_params = dict(backend='nccl')
    log_level = 'INFO'
    load_from = None
    resume_from = None
    workflow = [('train', 1)]
    work_dir = 'merge_resnet_100e_128'
    gpu_ids = range(0, 1)
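
    One thing worth checking is whether the custom inference script reproduces this test pipeline. A minimal sketch using MMClassification's own inference API (assuming mmcls 0.x; the paths are placeholders), which applies the same config and test pipeline as tools/test.py:

    # Sketch: run single-image inference through the same config/test pipeline
    # as tools/test.py. Config and checkpoint paths are placeholders.
    from mmcls.apis import inference_model, init_model

    model = init_model('my_config.py', 'work_dir/latest.pth', device='cuda:0')
    result = inference_model(model, 'test_image.png')
    print(result)  # e.g. {'pred_label': ..., 'pred_score': ..., 'pred_class': ...}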
    
    help wanted 
    opened by gabrielpan147 15
  • Why did the accuracy drop?

    The English template General question is recommended, so that your question can help more people.

    First confirm the following

    • I have searched related issues, but did not find the help I needed.
    • I have read the related documentation, but still do not know how to solve this.

    Describe the problem you meet

    With the same data and model, and the same hyperparameters and optimizer, why is the model accuracy lower than what I get with TensorFlow? It dropped from 96% to 92%.

    help wanted 
    opened by zj837578609 14
  • Question about K-Fold config

    Hi, dear authors! Thank you for your amazing job!

    I have a question about the K-Fold config. In K-Fold, can the val and test sets be the same? Like the following:

        train=dict(
            type='KFoldDataset',
            num_splits=5,
            dataset=dict(
                type='RepeatDataset',
                times=15,
                dataset=dict(
                    type=dataset_type,
                    data_prefix='data/my_data/train',
                    pipeline=train_pipeline))),
        val=dict(
            type=dataset_type,
            data_prefix='data/my_data/val',
            ann_file='data/my_data/meta/val.txt',
            pipeline=test_pipeline),
        test=dict(
            # replace `data/val` with `data/test` for standard test
            type=dataset_type,
            data_prefix='data/my_data/val',
            ann_file='data/my_data/meta/val.txt',
            pipeline=test_pipeline))
    

    Thank you!

    help wanted 
    opened by zhongqiu1245 13
  • [Feature] Dedicated MMClsWandbHook for MMClassification (Weights and Biases Integration)

    I have raised a PR in MMDetection contributing a dedicated MMDetWandbHook for MMDetection. I was playing around with MMClassification and realized that MMClassification can use a similar dedicated hook (with a few minor modifications).

    Motivation

    The goal of this PR is to contribute a dedicated Weights and Biases hook for MMClassification.

    Modification

    The PR adds two new files:

    • wandblogger_hook.py where all the Weights and Biases related logic lives and,
    • eval_hooks.py was added so that the MMClsWandbHook can reuse the validation results.

    The feature can easily be used like this:

    log_config = dict(
                interval=10,
                hooks=[
                    dict(type='MMClsWandbHook',
                         wandb_init_kwargs={
                             'entity': WANDB_ENTITY,
                             'project': WANDB_PROJECT_NAME
                         },
                         log_checkpoint=True,
                         log_checkpoint_metadata=True,
                         num_eval_images=100)
                ])
    

    Use cases (Optional)

    Here are some of the use cases that this PR introduces and should be helpful to the community in general.

    Metrics

    • The WandbLogger will automatically log training and validation metrics.
    • It will log system (CPU/GPU) metrics.

    https://user-images.githubusercontent.com/31141479/161776566-57c59402-a16b-4940-a90a-76b64963a4d1.mov

    Checkpointing with Metadata

    If log_checkpoint is True, the checkpoint saved at every checkpoint interval will be saved as W&B Artifacts. On top of this, if log_checkpoint_metadata is True, every checkpoint artifact will have metadata associated with it as shown in the recording below.

    https://user-images.githubusercontent.com/31141479/161777072-cd00f1ee-aa24-42a4-b47f-9c915600a2f4.mov

    Log Model Prediction 🎉

    If num_eval_images > 0, at every evaluation interval, the MMClsWandbHook logs the model predictions as interactive W&B Tables. To know more about W&B Tables, please refer to the docs here. The MMClsWandbHook logs the predicted class labels along with the ground-truth labels.

    https://user-images.githubusercontent.com/31141479/161784564-51943f43-53c8-41cf-a920-4d70e67bdc9b.mov

    Checklist

    Before PR:

    • [x] Pre-commit or other linting tools are used to fix the potential lint issues.
    • [ ] Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
    • [ ] The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • [ ] The documentation has been modified accordingly, like docstring or example tutorials.

    After PR:

    • [x] If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
    • [x] CLA has been signed and all committers have signed the CLA in this PR.
    opened by ayulockin 13
  • [Feature] Support DeiT3

    Motivation

    DeiT III: Revenge of the ViT github

    Checklist

    Before PR:

    • [x] Pre-commit or other linting tools are used to fix the potential lint issues.
    • [x] Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
    • [x] The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • [x] The documentation has been modified accordingly, like docstring or example tutorials.

    After PR:

    • [x] If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
    • [x] CLA has been signed and all committers have signed the CLA in this PR.
    opened by okotaku 12
  • [Feature] Support CUB dataset

    Motivation

    Support CUB dataset.

    The augmentation and optimizer settings are based on A Novel Plug-in Module for Fine-Grained Visual Classification.

    Checklist

    Before PR:

    • [x] Pre-commit or other linting tools are used to fix the potential lint issues.
    • [x] Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
    • [x] The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • [x] The documentation has been modified accordingly, like docstring or example tutorials.

    After PR:

    • [x] If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
    • [x] CLA has been signed and all committers have signed the CLA in this PR.
    opened by okotaku 12
  • Training time about ResNet101V1c

    The English template General question is recommended, so that your question can help more people.

    First confirm the following

    • I have searched related issues, but did not find the help I needed.
    • I have read the related documentation, but still do not know how to solve this.

    Describe the problem you meet

    Hello! I need to train a customized backbone based on ResNet101V1c on ImageNet-1K for a downstream segmentation task, i.e., adding a convolved residual output to the second convolution in each bottleneck. I have two questions:

    1. Our lab has 8 GTX 3090 GPUs. Could you tell us how long training would take with 8 GPUs?
    2. Should we train entirely from random weights? Or should we load pre-trained weights for the main branch, randomly initialize the newly added residual module in each block, and stabilize training with a zero-initialized norm layer at the end of the residual branch? If pre-trained weights are loaded, should the learning rate of the new residual modules be the same as for the main branch? Many thanks!

    Related information

    1. The output of pip list | grep "mmcv\|mmcls\|^torch": [Fill in here]
    2. If you modified or created a new config file, paste it here: [Fill in here]
    3. If the problem occurred during training, paste the full training log and error message: [Fill in here]
    4. If you made other related modifications to the code under the mmcls folder, describe them here: [Fill in here]
    help wanted 
    opened by xiaoachen98 12
  • 请问如何对两张输入的图像做相同的aug操作?

    Hello, authors! First of all, thank you very much for your work! How can I apply the same augmentation to two input images? Specifically, I concatenated two 3-channel images into one 6-channel image (as required by the higher-ups), and I then want to apply augmentations such as AutoAugment or Albu to this 6-channel image. However, those augmentations only support 3-channel images, not 6-channel ones. So my idea is to first split the 6-channel image into two 3-channel images, apply the same augmentation (mmcv.xxxx) to both, and then merge them back into a 6-channel image. But how can I apply exactly the same augmentation to the two 3-channel images? For example:

        def __call__(self, results):
            if np.random.rand() > self.prob:
                return results
            for key in results.get('img_fields', ['img']):
                img = results[key]
                img_contrasted = mmcv.auto_contrast(img)
                results[key] = img_contrasted.astype(img.dtype)
            return results
    

    Suppose results["img"] is the 6-channel image and I split it into results["img_1"] and results["img_2"]. How can I apply the same mmcv.auto_contrast operation to results["img_1"] and results["img_2"]? And if I use Albu, how can I apply the same Albu operation to both? (Some operations in AutoAugment and Albu are random, e.g., RandomBrightnessContrast, ChannelShuffle, RandomCrop, so results["img_1"] and results["img_2"] must receive exactly the same random operation.) If I write something like:

    result["img_1"] = Albu.RandomBrightnessContrast(result["img_1"])
    result["img_2"] = Albu.RandomBrightnessContrast(result["img_2"])
    

    then it is not guaranteed that results["img_1"] and results["img_2"] receive the same RandomBrightnessContrast, because the transform is random and the randomness applied to each image is inconsistent. By "exactly the same" I mean, for example, that if results["img_1"] is RandomCrop-ed to 224, then results["img_2"] must also be cropped to 224, at exactly the same position. Thank you! Looking forward to your reply! A sketch of one possible approach follows.
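
    One possible approach (an assumption, not an MMClassification feature): Albumentations can apply identical random transforms to several images via additional_targets, which reuses the sampled parameters for every declared target. A minimal sketch with placeholder inputs:

    # Sketch: apply the same randomly-sampled transform to two images using
    # Albumentations' additional_targets (dummy inputs for illustration).
    import albumentations as A
    import numpy as np

    img_1 = np.zeros((256, 256, 3), dtype=np.uint8)  # placeholder images
    img_2 = np.zeros((256, 256, 3), dtype=np.uint8)

    transform = A.Compose(
        [A.RandomBrightnessContrast(p=1.0), A.RandomCrop(224, 224)],
        additional_targets={'image2': 'image'},
    )

    out = transform(image=img_1, image2=img_2)
    aug_1, aug_2 = out['image'], out['image2']  # same random params for both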

    help wanted 
    opened by zhongqiu1245 12
  • Problem about inference

    When I run inference on my model, I meet this problem. Via the inference file of mmclassification, the label of the image is 1 and the pred score is 0.8889318704605103, but when I use my own inference file, the output of the fc layer is tensor([[ 0.5133, -0.5640]], device='cuda:0'). Do you have any idea where I have operated incorrectly?

    My own inference file is as below:

    from collections import OrderedDict

    import torch
    from torch.autograd import Variable
    from torchvision import transforms

    import resnet  # local ResNet definition (a torchvision-style resnet18)

    def forward_pytorch(weightfile, image):
        net = resnet.resnet18(num_classes=2)
        checkpoint = torch.load(weightfile)
        # Strip the 'backbone.'/'head.' prefixes that mmcls adds to
        # state-dict keys before loading into a plain torchvision model.
        new_state_dict = OrderedDict()
        for k, v in checkpoint.items():
            if k == "state_dict":
                for layer_name, weight in v.items():
                    layer_name1 = layer_name.replace("backbone.", "")
                    layer_name1 = layer_name1.replace("head.", "")
                    print(layer_name, layer_name1)
                    new_state_dict[layer_name1] = weight
        net.load_state_dict(new_state_dict)
        net.cuda()
        net.eval()
        # Note: transforms.ToTensor() rescales pixels to [0, 1], while these
        # mean/std values are in the 0-255 range (mmcls normalizes the raw
        # 0-255 image), and the mmcls test pipeline also resizes and
        # center-crops -- both differences can change the outputs.
        normalize = transforms.Normalize(
            mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375])
        transform1 = transforms.Compose([transforms.ToTensor(), normalize])
        image = transform1(image)
        image = Variable(image.cuda())
        image = Variable(torch.unsqueeze(image, dim=0).float(), requires_grad=False)
        with torch.no_grad():
            blobs = net.forward(image)
        return blobs
    
    help wanted 
    opened by reclusezz 12
  • [Bug]AttributeError: 'ConfigDict' object has no attribute 'model'

    Describe the bug

    I was going through Visualization.

    I tried to run

    python tools/visualizations/vis_cam.py \
        demo/bird.JPEG \
        configs/resnet/resnet50_8xb32_in1k.py \
        https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_batch256_imagenet_20200708-cfb998bf.pth \
        --method GradCAM
        # GradCAM++, XGradCAM, EigenCAM, EigenGradCAM, LayerCAM
    

    It raised the following error:

    Traceback (most recent call last):
      File "tools/visualizations/vis_cam.py", line 356, in <module>
        main()
      File "tools/visualizations/vis_cam.py", line 310, in main
        model = init_model(cfg, args.checkpoint, device=args.device)
      File "/public/liushuo/mmclassification-master/mmcls/apis/inference.py", line 34, in init_model
        config.model.pretrained = None
      File "/public/apps/anaconda3/envs/liushuo_UNet/lib/python3.8/site-packages/mmcv/utils/config.py", line 519, in __getattr__
        return getattr(self._cfg_dict, name)
      File "/public/apps/anaconda3/envs/liushuo_UNet/lib/python3.8/site-packages/mmcv/utils/config.py", line 50, in __getattr__
        raise ex
    AttributeError: 'ConfigDict' object has no attribute 'model'
    

    Post related information

    1. The output of pip list | grep "mmcv\|mmcls\|^torch"
    mmcls                          0.23.0                /public/liushuo/mmclassification-master
    mmcv-full                      1.5.1
    torch                          1.10.2
    torchaudio                     0.10.2
    torchsummary                   1.5.1
    torchvision                    0.11.3
    
    

    Please give me some advice to fix it. Thanks.

    opened by Crower-1 11
  • add scope to randaugment transform

    Motivation

    I am going to implement semi-supervised learning and reuse MultiBranch (https://mmdetection.readthedocs.io/en/3.x/user_guides/semi_det.html) from MMDetection. Without adding the scope prefix, it raises 'transform not found' exceptions.

    Also, it's necessary to add the scope since we may not use mmcls as our default scope.

    Modification

    Add the mmcls scope to the RandAugment transforms, as sketched below.
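
    In config terms, the change means transform types carry their registry scope (a sketch assuming MMEngine's scope-prefixed type strings; the policy names are illustrative):

    # Scoped transform types resolve even when the default scope is not mmcls
    # (e.g. when the config is consumed from an mmdet training run).
    rand_increasing_policies = [
        dict(type='mmcls.AutoContrast'),
        dict(type='mmcls.Equalize'),
        dict(type='mmcls.Rotate'),
    ]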

    Checklist

    Before PR:

    • [ ] Pre-commit or other linting tools are used to fix the potential lint issues.
    • [ ] Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
    • [ ] The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • [ ] The documentation has been modified accordingly, like docstring or example tutorials.

    After PR:

    • [ ] If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
    • [ ] CLA has been signed and all committers have signed the CLA in this PR.
    opened by twmht 2
  • Add deployment guide

    Motivation

    Add a deployment guide for MMClassification models with MMDeploy.

    Checklist

    Before PR:

    • [ ] Pre-commit or other linting tools are used to fix the potential lint issues.
    • [ ] Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
    • [ ] The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • [ ] The documentation has been modified accordingly, like docstring or example tutorials.

    After PR:

    • [ ] If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
    • [ ] CLA has been signed and all committers have signed the CLA in this PR.
    opened by lvhan028 1
  • add CSWin

    Motivation

    Add CSWin model.

    Modification

    Add CSWin model.

    Checklist

    Before PR:

    • [ ] Pre-commit or other linting tools are used to fix the potential lint issues.
    • [ ] Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
    • [ ] The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • [ ] The documentation has been modified accordingly, like docstring or example tutorials.

    After PR:

    • [ ] If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
    • [ ] CLA has been signed and all committers have signed the CLA in this PR.
    opened by ZhangYuanhan-AI 3
  • [Bug] Missing wandb hook in v1.x.x

    Branch

    master branch (0.24 or other 0.x version)

    Describe the bug

    After some issues with the v0.x.x versions, I was recommended to move to v1.x.x (!1259).

    After migrating all the configs, I'm now hitting a wall with wandb; I cannot figure out the equivalent of:

            dict(
                type="WandbLoggerHook",
                init_kwargs={
                    "project": "mmcls",
                    "name": "will-be-added-by-train-script",
                },
                interval=100,
                log_checkpoint=False,
                log_checkpoint_metadata=False,
                num_eval_images=100,
            ),
    

    in the new version. Has WandbLoggerHook been removed?

    The error I get is:

    KeyError: 'WandbLoggerHook is not in the hook registry. Please check whether the value of `WandbLoggerHook` is correct or it was registered as expected. More details can be found at https://mmengine.readthedocs.io/en/latest/tutorials/config.html#import-custom-python-modules'
    

    Any help would be much appreciated.
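
    In 1.x, logging backends moved behind MMEngine's visualizer. A minimal sketch of a possible equivalent (an assumption based on MMEngine's WandbVisBackend, not an official migration recipe; the visualizer type name may differ between rc versions):

    # Sketch: route logging through a W&B visualizer backend in 1.x configs.
    vis_backends = [
        dict(
            type='WandbVisBackend',
            init_kwargs={
                'project': 'mmcls',
                'name': 'will-be-added-by-train-script',
            }),
    ]
    visualizer = dict(type='ClsVisualizer', vis_backends=vis_backends)

    Note that the checkpoint-artifact and prediction-table features of the 0.x MMClsWandbHook are not covered by this backend alone.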

    Environment

    {'sys.platform': 'linux',
     'Python': '3.8.15 (default, Oct 12 2022, 19:15:16) [GCC 11.2.0]',
     'CUDA available': True,
     'numpy_random_seed': 2147483648,
     'GPU 0': 'NVIDIA GeForce RTX 3060 Laptop GPU',
     'CUDA_HOME': None,
     'GCC': 'x86_64-linux-gnu-gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0',
     'PyTorch': '1.11.0+cu113',
     'PyTorch compiling details': 'PyTorch built with:\n'
                                  '  - GCC 7.3\n'
                                  '  - C++ Version: 201402\n'
                                  '  - Intel(R) Math Kernel Library Version '
                                  '2020.0.0 Product Build 20191122 for Intel(R) 64 '
                                  'architecture applications\n'
                                  '  - Intel(R) MKL-DNN v2.5.2 (Git Hash '
                                  'a9302535553c73243c632ad3c4c80beec3d19a1e)\n'
                                  '  - OpenMP 201511 (a.k.a. OpenMP 4.5)\n'
                                  '  - LAPACK is enabled (usually provided by '
                                  'MKL)\n'
                                  '  - NNPACK is enabled\n'
                                  '  - CPU capability usage: AVX2\n'
                                  '  - CUDA Runtime 11.3\n'
                                  '  - NVCC architecture flags: '
                                  '-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n'
                                  '  - CuDNN 8.2\n'
                                  '  - Magma 2.5.2\n'
                                  '  - Build settings: BLAS_INFO=mkl, '
                                  'BUILD_TYPE=Release, CUDA_VERSION=11.3, '
                                  'CUDNN_VERSION=8.2.0, '
                                  'CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, '
                                  'CXX_FLAGS= -Wno-deprecated '
                                  '-fvisibility-inlines-hidden -DUSE_PTHREADPOOL '
                                  '-fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM '
                                  '-DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK '
                                  '-DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE '
                                  '-DEDGE_PROFILER_USE_KINETO -O2 -fPIC '
                                  '-Wno-narrowing -Wall -Wextra '
                                  '-Werror=return-type '
                                  '-Wno-missing-field-initializers '
                                  '-Wno-type-limits -Wno-array-bounds '
                                  '-Wno-unknown-pragmas -Wno-sign-compare '
                                  '-Wno-unused-parameter -Wno-unused-function '
                                  '-Wno-unused-result -Wno-unused-local-typedefs '
                                  '-Wno-strict-overflow -Wno-strict-aliasing '
                                  '-Wno-error=deprecated-declarations '
                                  '-Wno-stringop-overflow -Wno-psabi '
                                  '-Wno-error=pedantic -Wno-error=redundant-decls '
                                  '-Wno-error=old-style-cast '
                                  '-fdiagnostics-color=always -faligned-new '
                                  '-Wno-unused-but-set-variable '
                                  '-Wno-maybe-uninitialized -fno-math-errno '
                                  '-fno-trapping-math -Werror=format '
                                  '-Wno-stringop-overflow, LAPACK_INFO=mkl, '
                                  'PERF_WITH_AVX=1, PERF_WITH_AVX2=1, '
                                  'PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, '
                                  'USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, '
                                  'USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, '
                                  'USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, '
                                  'USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n',
     'TorchVision': '0.12.0+cu113',
     'OpenCV': '4.6.0',
     'MMEngine': '0.3.2',
     'MMClassification': '1.0.0rc4+c9d87cb'}
    
    

    Other information

    No response

    opened by robin-maillot 1
  • [Feature] Apply MixUp, CutMix, or ResizeMix on only some samples in each batch

    Branch

    master branch (0.24 or other 0.x version)

    Describe the feature

    In the following modules of MixUp, CutMix, and ResizeMix, the prob arg is not used to decide, per sample in the batch, whether to apply the augmentation. https://github.com/open-mmlab/mmclassification/blob/master/mmcls/models/utils/augment/mixup.py#L31 https://github.com/open-mmlab/mmclassification/blob/master/mmcls/models/utils/augment/cutmix.py#L40 https://github.com/open-mmlab/mmclassification/blob/master/mmcls/models/utils/augment/resizemix.py#L68

    This seems to mean that the prob arg cannot be used to apply the augmentation (MixUp, CutMix, or ResizeMix) to only some samples in each batch while keeping the other samples as-is. If the augmentation is applied to either all or none of the samples in each batch, the characteristics of each batch will be inconsistent across iterations, possibly making model training unstable.

    I'd like to suggest that mmclassification support a feature to apply the mixing operation to only some samples in each batch, as sketched below.
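
    A minimal sketch of the idea (an assumption, not MMClassification's implementation): draw a per-sample mask with probability prob and mix only the masked samples.

    # Sketch: batch-level MixUp that mixes only a random subset of samples.
    import torch

    def partial_mixup(imgs, one_hot_labels, alpha=0.2, prob=0.5):
        """Mix each sample with probability `prob`; others pass through."""
        lam = float(torch.distributions.Beta(alpha, alpha).sample())
        index = torch.randperm(imgs.size(0), device=imgs.device)
        mixed_imgs = lam * imgs + (1 - lam) * imgs[index]
        mixed_labels = lam * one_hot_labels + (1 - lam) * one_hot_labels[index]
        # Per-sample decision: 1.0 means this sample gets mixed.
        apply = (torch.rand(imgs.size(0), device=imgs.device) < prob).float()
        img_mask = apply.view(-1, 1, 1, 1)
        label_mask = apply.view(-1, 1)
        imgs = img_mask * mixed_imgs + (1 - img_mask) * imgs
        labels = label_mask * mixed_labels + (1 - label_mask) * one_hot_labels
        return imgs, labels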

    Will you implement it?

    • [ ] I would like to implement this feature and create a PR!
    opened by Minyus 1
  • [Bug] Chinese tutorial document: evaluation keyword "save_best"

    Branch

    master branch (0.24 or other 0.x version)

    Describe the bug

    In the Chinese tutorial document, it says evaluation = dict(interval=1, save_best=True, metric='accuracy', metric_options={'topk': (1, )}); however, the keyword save_best should be a str instead of a bool, like 'auto' or 'accuracy_top-1'. See the sketch below.
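
    For reference, a corrected config line might look like the following (a sketch based on the issue text; 'auto' selects the first evaluation metric):

    # save_best expects a metric name (or 'auto'), not a bool
    evaluation = dict(
        interval=1,
        save_best='accuracy_top-1',  # or save_best='auto'
        metric='accuracy',
        metric_options={'topk': (1, )})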

    Environment

    python=3.6

    Other information

    Thanks for your tutorial documentation! However, I was confused when I hit the keyword error. It would be appreciated if you could update the tutorial documentation.

    Bug:P1 
    opened by Tamiku 2
Releases(v1.0.0rc5)
  • v1.0.0rc5(Dec 30, 2022)

    Highlights

    • Support EVA, RevViT, EfficientNetV2, CLIP, TinyViT and MixMIM backbones.
    • Reproduce the training accuracy of ConvNeXt and RepVGG.
    • Support multi-task training and testing.
    • Support Test-time Augmentation.

    New Features

    • [Feature] Add EfficientNetV2 Backbone. (#1253)
    • [Feature] Support TTA and add --tta in tools/test.py. (#1161)
    • [Feature] Support Multi-task. (#1229)
    • [Feature] Add clip backbone. (#1258)
    • [Feature] Add mixmim backbone with checkpoints. (#1224)
    • [Feature] Add TinyViT for dev-1.x. (#1042)
    • [Feature] Add some scripts for development. (#1257)
    • [Feature] Support EVA. (#1239)
    • [Feature] Implementation of RevViT. (#1127)

    Improvements

    • [Reproduce] Reproduce RepVGG Training Accuracy. (#1264)
    • [Enhance] Support ConvNeXt More Weights. (#1240)
    • [Reproduce] Update ConvNeXt config files. (#1256)
    • [CI] Update CI to test PyTorch 1.13.0. (#1260)
    • [Project] Add ACCV workshop 1st Solution. (#1245)
    • [Project] Add Example project. (#1254)

    Bug Fixes

    • [Fix] Fix imports in transforms. (#1255)
    • [Fix] Fix CAM visualization. (#1248)
    • [Fix] Fix the requirements and lazy register mmcls models. (#1275)

    Contributors

    A total of 11 developers contributed to this release.

    @piercus @Ezra-Yu @mzr1996 @bobo0810 @suibe-qingtian @Scarecrow0 @tonysy @WINDSKY45 @wangbo-zhao @Francis777 @okotaku

  • v1.0.0rc4(Dec 6, 2022)

    Highlights

    • New API to get pre-defined models of MMClassification. See #1236 for more details.
    • Refactor BEiT backbone and support v1/v2 inference. See #1144.

    New Features

    • Support getting models from the name defined in the model-index file. (#1236)

    Improvements

    • Support evaluation on both EMA and non-EMA models. (#1204)
    • Refactor BEiT backbone and support v1/v2 inference. (#1144)

    Bug Fixes

    • Fix reparameterize_model.py not saving meta info. (#1221)
    • Fix dict update in BEiT. (#1234)

    Docs Update

    • Update install tutorial. (#1223)
    • Update MobileNetV2 & MobileNetV3 readme. (#1222)
    • Add version selection in the banner. (#1217)

    Contributors

    A total of 4 developers contributed to this release.

    @techmonsterwang @mzr1996 @fangyixiao18 @kitecats

  • v0.25.0(Dec 6, 2022)

    Highlights

    • Support MLU backend.
    • Add dist_train_arm.sh for ARM device.

    New Features

    • Support MLU backend. (#1159)
    • Support Activation Checkpointing for ConvNeXt. (#1152)

    Improvements

    • Add dist_train_arm.sh for ARM device and update NPU results. (#1218)

    Bug Fixes

    • Fix a bug that caused MMClsWandbHook to get stuck. (#1242)
    • Fix the redundant device_ids in tools/test.py. (#1215)

    Docs Update

    • Add version banner and version warning in master docs. (#1216)
    • Update NPU support doc. (#1198)
    • Fixed typo in pytorch2torchscript.md. (#1173)
    • Fix typo in miscellaneous.md. (#1137)
    • Add further detail to the doc for ClassBalancedDataset. (#901)

    Contributors

    A total of 7 developers contributed to this release.

    @nijkah @xiaoyuan0203 @mzr1996 @Qiza-lyhm @ganghe74 @unseenme @wangjiangben-hw

  • v1.0.0rc3(Nov 21, 2022)

    Highlights

    • Add Switch Recipe Hook. Now we can modify the training pipeline, mixup and loss settings during training; see #1101.
    • Add TIMM and HuggingFace wrappers. Now you can train/use models in TIMM/HuggingFace directly, see #1102.
    • Support retrieval tasks, see #1055.
    • Reproduce MobileOne training accuracy. See #1191.

    New Features

    • Add checkpoints from EfficientNets NoisyStudent & L2. (#1122)
    • Migrate CSRA head to 1.x. (#1177)
    • Support RepLKnet backbone. (#1129)
    • Add Switch Recipe Hook. (#1101)
    • Add adan optimizer. (#1180)
    • Support DaViT. (#1105)
    • Support Activation Checkpointing for ConvNeXt. (#1153)
    • Add TIMM and HuggingFace wrappers to build classifiers from them directly. (#1102)
    • Add reduction for neck (#978)
    • Support HorNet Backbone for dev1.x. (#1094)
    • Add arcface head. (#926)
    • Add Base Retriever and Image2Image Retriever for retrieval tasks. (#1055)
    • Support MobileViT backbone. (#1068)

    Improvements

    • [Enhance] Enhance ArcFaceClsHead. (#1181)
    • [Refactor] Refactor to use new fileio API in MMEngine. (#1176)
    • [Enhance] Reproduce mobileone training accuracy. (#1191)
    • [Enhance] add deleting params info in swinv2. (#1142)
    • [Enhance] Add more mobilenetv3 pretrains. (#1154)
    • [Enhancement] RepVGG for YOLOX-PAI for dev-1.x. (#1126)
    • [Improve] Speed up data preprocessor. (#1064)

    Bug Fixes

    • Fix the torchserve. (#1143)
    • Fix configs due to api refactor of num_classes. (#1184)
    • Update mmcls2torchserve. (#1189)
    • Fix inference_model not getting class information from the checkpoint. (#1093)

    Docs Update

    • Add not-found page extension. (#1207)
    • Update visualization doc. (#1160)
    • Support sorting and searching the Model Summary table. (#1100)
    • Improve the ResNet model page. (#1118)
    • Update the README of ConvNeXt. (#1156)
    • Fix the installation docs link in README. (#1164)
    • Improve ViT and MobileViT model pages. (#1155)
    • Improve Swin doc and add Tabs extension. (#1145)
    • Add MMEval projects link in README. (#1162)
    • Add runtime configuration docs. (#1128)
    • Add custom evaluation docs (#1130)
    • Add custom pipeline docs. (#1124)
    • Add MMYOLO projects link in MMCLS1.x. (#1117)

    Contributors

    A total of 14 developers contributed to this release.

    @austinmw @Ezra-Yu @nijkah @yingfhu @techmonsterwang @mzr1996 @sanbuphy @tonysy @XingyuXie @gaoyang07 @kitecats @marouaneamz @okotaku @zzc98

  • v0.24.1(Nov 1, 2022)

    New Features

    • [Feature] Support mmcls with NPU backend. (#1072)

    Bug Fixes

    • [Fix] Fix performance issue in convnext DDP train. (#1098)

    Contributors

    A total of 3 developers contributed to this release.

    @wangjiangben-hw @790475019 @mzr1996

  • v1.0.0rc2(Oct 12, 2022)

    New Features

    Improvements

    • Update analyze_results.py for dev-1.x. (#1071)
    • Get scores from inference api. (#1070)

    Bug Fixes

    • Update requirements. (#1083)

    Docs Update

    • Add 1x docs schedule. (#1015)

    Contributors

    A total of 3 developers contributed to this release.

    @mzr1996 @okotaku @yingfhu

  • v1.0.0rc1(Sep 30, 2022)

    Highlights

    • Support MViT, EdgeNeXt, Swin-Transformer V2, EfficientFormer and MobileOne.
    • Support BEiT type transformer layer.

    New Features

    • Support MViT for MMCLS 1.x (#1023)
    • Add ViT huge architecture. (#1049)
    • Support EdgeNeXt for dev-1.x. (#1037)
    • Support Swin Transformer V2 for MMCLS 1.x. (#1029)
    • Add efficientformer Backbone for MMCls 1.x. (#1031)
    • Add MobileOne Backbone For MMCls 1.x. (#1030)
    • Support BEiT Transformer layer. (#919)

    Improvements

    • [Refactor] Fix visualization tools. (#1045)
    • [Improve] Update benchmark scripts (#1028)
    • [Improve] Update tools to enable pin_memory and persistent_workers by default. (#1024)
    • [CI] Update circle-ci and github workflow. (#1018)

    Bug Fixes

    • Fix verify dataset tool in 1.x. (#1062)
    • Fix loss_weight in LabelSmoothLoss. (#1058)
    • Fix the output position of Swin-Transformer. (#947)

    Docs Update

    • Fix typo in migration document. (#1063)
    • Auto generate model summary table. (#1010)
    • Refactor new modules tutorial. (#998)

    Contributors

    A total of 8 developers contributed to this release.

    @Ezra-Yu @yingfhu @mzr1996 @tonysy @fangyixiao18 @YuanLiuuuuuu @HIT-cwh @techmonsterwang

  • v0.24.0(Sep 30, 2022)

    Highlights

    • Support HorNet, EfficientFormer, Swin Transformer V2, and MViT backbones.
    • Support Stanford Cars dataset.

    New Features

    • Support HorNet Backbone. (#1013)
    • Support EfficientFormer. (#954)
    • Support Stanford Cars dataset. (#893)
    • Support CSRA head. (#881)
    • Support Swin Transform V2. (#799)
    • Support MViT and add checkpoints. (#924)

    Improvements

    • [Improve] replace loop of progressbar in api/test. (#878)
    • [Enhance] RepVGG for YOLOX-PAI. (#1025)
    • [Enhancement] Update VAN. (#1017)
    • [Refactor] Re-write get_sinusoid_encoding from third-party implementation. (#965)
    • [Improve] Upgrade onnxsim to v0.4.0. (#915)
    • [Improve] Fixed typo in RepVGG. (#985)
    • [Improve] Use train_step instead of forward in PreciseBNHook. (#964)
    • [Improve] Use forward_dummy to calculate FLOPS. (#953)

    Bug Fixes

    • Fix warning with torch.meshgrid. (#860)
    • Add matplotlib minimum version requirements. (#909)
    • Val loader should not drop last by default. (#857)
    • Fix config.device bug in tutorial. (#1059)
    • Fix attention clamp max params. (#1034)
    • Fix device mismatch in Swin-v2. (#976)
    • Fix the output position of Swin-Transformer. (#947)

    Docs Update

    • Add version for torchvision to avoid error. (#903)
    • Fix typo for --out-dir option of analyze_results.py. (#898)
    • Refine the docstring of RegNet. (#935)

    Contributors

    A total of 19 developers contributed to this release.

    @a-mos @Ezra-Yu @Fei-Wang @nijkah @PeterH0323 @yingfhu @techmonsterwang @JiayuXu0 @jlim262 @hukkai @mzr1996 @liu-mengyang @twmht @pallgeuer @timothylimyl @daquexian @okotaku @tpoisonooo @zzc98

  • v1.0.0rc0(Aug 31, 2022)

    MMClassification 1.0.0rc0 is the first version of MMClassification 1.x, a part of the OpenMMLab 2.0 projects.

    Built upon the new training engine, MMClassification 1.x unifies the interfaces of dataset, models, evaluation, and visualization.

    And there are some BC-breaking changes. Please check the migration tutorial for more details.

  • v0.23.2(Jul 28, 2022)

    New Features

    • Support MPS device. (#894)

    Improvements

    • Add test mim CI. (#879)

    Bug Fixes

    • [Fix] Fix Albu crash bug. (#918)
    • [Fix] Add mim to extras_require in setup.py. (#872)

    Contributors

    A total of 2 developers contributed to this release.

    @mzr1996 @PeterH0323

  • v0.23.1(Jun 2, 2022)

    Highlights

    • New WandbHook to store your training log and visualize validation results!

    New Features

    • [Feature] Dedicated MMClsWandbHook for MMClassification (Weights and Biases Integration) (#764)

    Improvements

    • [Refactor] Use mdformat instead of markdownlint to format markdown. (#844)

    Bug Fixes

    • [Fix] Fix wrong --local_rank.

    Docs Update

    • [Docs] Update install tutorials. (#854)
    • [Docs] Fix wrong link in README. (#835)

    Contributors

    A total of 3 developers contributed to this release.

    @ayulockin @mzr1996 @timothylimyl

  • v0.23.0(May 1, 2022)

    New Features

    • Support DenseNet. (#750)
    • Support VAN. (#739)

    Improvements

    • Support training on IPU and add fine-tuning configs of ViT. (#723)

    Docs Update

    • New-style API reference that is easier to use! Welcome to view it. (#774)

    Contributors

    A total of 4 developers contributed to this release.

    @mzr1996 @okotaku @yingfhu @HuDi2018

  • v0.22.1(Apr 15, 2022)

    New Features

    • Support resize relative position embedding in SwinTransformer. (#749)
    • Add PoolFormer backbone and checkpoints. (#746)

    Improvements

    • Improve CPE performance by reducing memory copies. (#762)
    • Add extra dataloader settings in configs. (#752)

    Contributors

    A total of 4 developers contributed to this release.

    @mzr1996 @yuweihao @XiaobingSuper @YuanLiuuuuuu

  • v0.22.0(Mar 30, 2022)

    Considering that more and more codebases depend on new features of MMClassification, we will release a minor version in the middle of every month. 😉

    Highlights

    • Support a series of CSP Network, such as CSP-ResNet, CSP-ResNeXt and CSP-DarkNet.
    • A new CustomDataset class to help you build your own dataset!
    • Support ConvMixer, RepMLP, and a new dataset: the CUB dataset.

    New Features

    • Add CSPNet backbone and checkpoints. (#735)
    • Add CustomDataset. (#738)
    • Add diff seeds to diff ranks. (#744)
    • Support ConvMixer. (#716)
    • Our dist_train & dist_test tools support distributed training on multiple machines. (#734)
    • Add RepMLP backbone and checkpoints. (#709)
    • Support CUB dataset. (#703)
    • Support ResizeMix. (#676)

    Improvements

    • Use --a-b instead of --a_b in arguments. (#754)
    • Add get_cat_ids and get_gt_labels to KFoldDataset. (#721)
    • Set torch seed in worker_init_fn. (#733)

    Bug Fixes

    • Fix the discontiguous output feature map of ConvNeXt. (#743)

    Docs Update

    • Add brief installation steps in README for copy&paste. (#755)
    • Fix logo URL link from mmocr to mmcls. (#732)

    Contributors

    A total of 6 developers contributed to this release.

    @Ezra-Yu @yingfhu @Hydrion-Qlz @mzr1996 @huyu398 @okotaku

  • v0.21.0(Mar 4, 2022)

    Highlights

    • Support ResNetV1c and Wide-ResNet, and provide pre-trained models.
    • Support dynamic input shape for ViT-based algorithms. Now our ViT, DeiT, Swin-Transformer and T2T-ViT support forwarding with any input shape.
    • Reproduce training results of DeiT. Our DeiT-T and DeiT-S have higher accuracy compared with the official weights.

    New Features

    • Add ResNetV1c. (#692)
    • Support Wide-ResNet. (#715)
    • Support GeM pooling. (#677)

    Improvements

    • Reproduce training results of DeiT. (#711)
    • Add ConvNeXt pretrain models on ImageNet-1k. (#707)
    • Support dynamic input shape for ViT-based algorithms. (#706)
    • Add evaluate function for ConcatDataset. (#650)
    • Enhance vis-pipeline tool. (#604)
    • Return code 1 if a script fails. (#694)
    • Use PyTorch official one_hot to implement convert_to_one_hot. (#696)
    • Add a new pre-commit-hook to automatically add a copyright. (#710)
    • Add deprecation message for deploy tools. (#697)
    • Upgrade isort pre-commit hooks. (#687)
    • Use --gpu-id instead of --gpu-ids in non-distributed multi-gpu training/testing. (#688)
    • Remove deprecation. (#633)

    Bug Fixes

    • Fix Conformer forward with irregular input size. (#686)
    • Add dist.barrier to fix a bug in directory checking. (#666)

    Contributors

    A total of 8 developers contributed to this release.

    @Ezra-Yu @HumberMe @mzr1996 @twmht @RunningLeon @yasu0001 @okotaku @yingfhu

  • v0.20.1(Feb 7, 2022)

  • v0.20.0(Jan 31, 2022)

    Tomorrow is the Chinese New Year. Happy New Year!

    Highlights

    • Support K-fold cross-validation. The tutorial will be released later.
    • Support HRNet, ConvNeXt, Twins, and EfficientNet.
    • Support model conversion from PyTorch to Core-ML by a tool.

    New Features

    • Support K-fold cross-validation. (#563)
    • Support HRNet and add pre-trained models. (#660)
    • Support ConvNeXt and add pre-trained models. (#670)
    • Support Twins and add pre-trained models. (#642)
    • Support EfficientNet and add pre-trained models.(#649)
    • Support features_only option in TIMMBackbone. (#668)
    • Add conversion script from pytorch to Core-ML model. (#597)

    Improvements

    • New-style CPU training and inference. (#674)
    • Add setup multi-processing both in train and test. (#671)
    • Rewrite channel split operation in ShufflenetV2. (#632)
    • Deprecate the support for "python setup.py test". (#646)
    • Support single-label, softmax, and custom eps in asymmetric loss. (#609)
    • Save class names in best checkpoint created by evaluation hook. (#641)

    Bug Fixes

    • Fix potential unexpected behaviors if metric_options is not specified in multi-label evaluation. (#647)
    • Fix API changes in pytorch-grad-cam>=1.3.7. (#656)
    • Fix bug which breaks cal_train_time in analyze_logs.py. (#662)

    Docs Update

    • Update README in configs according to OpenMMLab standard. (#672)
    • Update installation guide and README. (#624)

    Contributors

    A total of 10 developers contributed to this release.

    @Ezra-Yu @mzr1996 @rlleshi @WINDSKY45 @shinya7y @Minyus @0x4f5da2 @imyhxy @dreamer121121 @xiefeifeihu

  • v0.19.0(Dec 31, 2021)

    Highlights

    • The feature extraction function has been enhanced. See #593 for more details.
    • Provide the high-acc ResNet-50 training settings from ResNet strikes back.
    • Reproduce the training accuracy of T2T-ViT & RegNetX, and provide self-trained checkpoints.
    • Support DeiT & Conformer backbone and checkpoints.
    • Provide a CAM visualization tool based on pytorch-grad-cam, and detailed user guide!

    New Features

    • Support Precise BN. (#401)
    • Add CAM visualization tool. (#577)
    • Repeated Aug and Sampler Registry. (#588)
    • Add DeiT backbone and checkpoints. (#576)
    • Support LAMB optimizer. (#591)
    • Implement the conformer backbone. (#494)
    • Add the frozen function for Swin Transformer model. (#574)
    • Support using checkpoint in Swin Transformer to save memory. (#557)

    Improvements

    • [Reproduction] Reproduce RegNetX training accuracy. (#587)
    • [Reproduction] Reproduce training results of T2T-ViT. (#610)
    • [Enhance] Provide high-acc training settings of ResNet. (#572)
    • [Enhance] Set a random seed when the user does not set a seed. (#554)
    • [Enhance] Added NumClassCheckHook and unit tests. (#559)
    • [Enhance] Enhance feature extraction function. (#593)
    • [Enhance] Improve efficiency of precision, recall, f1_score and support. (#595)
    • [Enhance] Improve accuracy calculation performance. (#592)
    • [Refactor] Refactor analysis_log.py. (#529)
    • [Refactor] Use new API of matplotlib to handle blocking input in visualization. (#568)
    • [CI] Cancel previous runs that are not completed. (#583)
    • [CI] Skip build CI if only configs or docs modification. (#575)

    Bug Fixes

    • Fix test sampler bug. (#611)
    • Try to create a symbolic link, otherwise copy. (#580)
    • Fix a bug for multiple output in swin transformer. (#571)

    Docs Update

    • Update mmcv, torch, cuda version in Dockerfile and docs. (#594)
    • Add analysis&misc docs. (#525)
    • Fix docs build dependency. (#584)

    Contributors

    A total of 6 developers contributed to this release.

    @elopezz @Ezra-Yu @mzr1996 @0x4f5da2 @fangxu622 @okotaku

  • v0.18.0(Nov 30, 2021)

    Highlights

    • Support MLP-Mixer backbone and provide pre-trained checkpoints.
    • Add a tool to visualize the learning rate curve of the training phase. Welcome to use with the tutorial!

    New Features

    • Add MLP Mixer Backbone. (#528, #539)
    • Support positive weights in BCE. (#516)
    • Add a tool to visualize learning rate in each iterations. (#498)

    Improvements

    • Use CircleCI to do unit tests. (#567)
    • Focal loss for single label tasks. (#548)
    • Remove useless import_modules_from_string. (#544)
    • Rename config files according to the config name standard. (#508)
    • Use reset_classifier to remove head of timm backbones. (#534)
    • Support passing arguments to loss from head. (#523)
    • Refactor Resize transform and add Pad transform. (#506)
    • Update mmcv dependency version. (#509)

    Bug Fixes

    • Fix bug when using ClassBalancedDataset. (#555)
    • Fix a bug when using iter-based runner with 'val' workflow. (#542)
    • Fix interpolation method checking in Resize. (#547)
    • Fix a bug when loading checkpoints in a multi-GPU environment. (#527)
    • Fix an error on indexing scalar metrics in analyze_result.py. (#518)
    • Fix wrong condition judgment in analyze_logs.py and prevent empty curve. (#510)

    Docs Update

    • Fix vit config and model broken links. (#564)
    • Add abstract and image for every paper. (#546)
    • Add mmflow and mim in banner and readme. (#543)
    • Add schedule and runtime tutorial docs. (#499)
    • Add the top-5 acc in ResNet-CIFAR README. (#531)
    • Fix TOC of visualization.md and add example images. (#513)
    • Use docs link of other projects and add MMCV docs. (#511)

    Contributors

    A total of 9 developers contributed to this release.

    @Ezra-Yu @LeoXing1996 @mzr1996 @0x4f5da2 @huoshuai-dot @imyhxy @juanjompz @okotaku @xcnick

    Source code(tar.gz)
    Source code(zip)
  • v0.17.0(Oct 29, 2021)

    Highlights

    • Support Tokens-to-Token ViT backbone and Res2Net backbone. Welcome to use!
    • Support ImageNet21k dataset.
    • Add a pipeline visualization tool. Try it with the tutorials!

    New Features

    • Add Tokens-to-Token ViT backbone and converted checkpoints. (#467)
    • Add Res2Net backbone and converted weights. (#465)
    • Support ImageNet21k dataset (see the config sketch after this list). (#461)
    • Support seesaw loss. (#500)
    • Add a pipeline visualization tool. (#406)
    • Add a tool to find broken files. (#482)
    • Add a tool to test TorchServe. (#468)
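
    A hypothetical dataset config using the new ImageNet21k dataset class; the data path is a placeholder and the pipeline is a minimal illustrative example:

    ```python
    # Hypothetical dataset config; paths and pipeline are placeholders.
    train_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='RandomResizedCrop', size=224),
        dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='ToTensor', keys=['gt_label']),
        dict(type='Collect', keys=['img', 'gt_label']),
    ]
    data = dict(
        samples_per_gpu=128,
        workers_per_gpu=4,
        train=dict(
            type='ImageNet21k',
            data_prefix='data/imagenet21k/train',  # assumed local path
            pipeline=train_pipeline,
        ),
    )
    ```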

    Improvements

    • Refactor Vision Transformer. (#395)
    • Use context manager to reuse matplotlib figures. (#432)

    Bug Fixes

    • Remove DistSamplerSeedHook when using IterBasedRunner. (#501)
    • Set the priority of EvalHook to "LOW" to avoid a bug when using IterBasedRunner. (#488)
    • Fix a wrong parameter of get_root_logger in apis/train.py. (#486)
    • Fix version check in dataset builder. (#474)

    Docs Update

    • Add English Colab tutorials and update Chinese Colab tutorials. (#483, #497)
    • Add a tutorial for config files. (#487)
    • Add model-pages in Model Zoo. (#480)
    • Add code-spell pre-commit hook and fix a large number of typos. (#470)

    Contributors

    A total of 6 developers contributed to this release.

    @mzr1996 @Ezra-Yu @tansor @youqingxiaozhua @0x4f5da2 @okotaku

    Source code(tar.gz)
    Source code(zip)
  • v0.16.0(Sep 30, 2021)

    Highlights

    • We have improved compatibility with downstream repositories like MMDetection and MMSegmentation. We will add some examples of how to use our backbones in MMDetection.
    • Add RepVGG backbone and checkpoints. Welcome to use it!
    • Add a timm backbones wrapper; now you can simply use backbones from pytorch-image-models in MMClassification!

    New Features

    • Add RepVGG backbone and checkpoints. (#414)
    • Add timm backbones wrapper (see the config sketch after this list). (#427)
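
    A hypothetical model config wrapping a pytorch-image-models backbone through the new timm wrapper (requires pip install timm); the model name and head widths are illustrative choices:

    ```python
    # Hypothetical model config; requires the timm package.
    model = dict(
        type='ImageClassifier',
        backbone=dict(
            type='TIMMBackbone',
            model_name='efficientnet_b0',  # any timm model name
            pretrained=True,
        ),
        neck=dict(type='GlobalAveragePooling'),
        head=dict(
            type='LinearClsHead',
            num_classes=1000,
            in_channels=1280,  # efficientnet_b0 output width
            loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        ),
    )
    ```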

    Improvements

    • Fix TnT compatibility and verbose warning. (#436)
    • Support setting --out-items in tools/test.py. (#437)
    • Add datetime info and save models in a torch<1.6-compatible format. (#439)
    • Improve downstream repositories compatibility. (#421)
    • Rename the option --options to --cfg-options in some tools. (#425)
    • Add PyTorch 1.9 and Python 3.9 build workflow, and remove some CI. (#422)

    Bug Fixes

    • Fix format error in test.py when metric returns np.ndarray. (#441)
    • Fix a publish_model bug when out_file has no parent directory. (#463)
    • Fix num_classes bug in pytorch2onnx.py. (#458)
    • Fix missing runtime requirement packaging. (#459)
    • Fix saving simplified model bug in ONNX export tool. (#438)

    Docs Update

    • Update getting_started.md and install.md. And rewrite finetune.md. (#466)
    • Use PyTorch style docs theme. (#457)
    • Update metafile and Readme. (#435)
    • Add CITATION.cff. (#428)

    Contributors

    A total of 8 developers contributed to this release. @Charlyo @Ezra-Yu @mzr1996 @amirassov @RangiLyu @zhaoxin111 @uniyushu @zhangrui-wolf

    Source code(tar.gz)
    Source code(zip)
  • v0.15.0(Aug 31, 2021)

    Highlights

    • Support hparams argument in AutoAugment and RandAugment to provide hyperparameters for sub-policies.
    • Support custom squeeze channels in SELayer.
    • Support classwise weight in losses.

    New Features

    • Add hparams argument in AutoAugment and RandAugment and some other improvements (see the config sketch after this list). (#398)
    • Support classwise weight in losses. (#388)
    • Enhance SELayer to support custom squeeze channels. (#417)
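
    A hypothetical pipeline sketch of the new hparams argument, which forwards shared hyperparameters (such as padding value and interpolation mode) to every sub-policy; the policy list and all numeric values are illustrative:

    ```python
    # Hypothetical pipeline fragment; policies and values are illustrative.
    train_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='RandomResizedCrop', size=224),
        dict(
            type='RandAugment',
            policies=[dict(type='AutoContrast'), dict(type='Equalize')],
            num_policies=2,
            total_level=10,
            magnitude_level=9,
            # New: shared hyperparameters forwarded to each sub-policy.
            hparams=dict(pad_val=[104, 116, 124], interpolation='bicubic'),
        ),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='ToTensor', keys=['gt_label']),
        dict(type='Collect', keys=['img', 'gt_label']),
    ]
    ```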

    Code Refactor

    • Better result visualization. (#419)
    • Use post_process function to handle pred result processing. (#390)
    • Update digit_version function. (#402)
    • Prevent albumentations from installing both opencv and opencv-headless. (#397)
    • Avoid unnecessary listdir when building ImageNet. (#396)
    • Use dynamic mmcv download link in TorchServe dockerfile. (#387)

    Docs Improvement

    • Add READMEs of some algorithms and update meta yml. (#418)
    • Add Copyright information. (#413)
    • Add PR template and modify issue template. (#380)

    Contributors

    A total of 5 developers contributed to this release. @azad96 @Ezra-Yu @mzr1996 @mmeendez8 @sovrasov

    Source code(tar.gz)
    Source code(zip)
  • v0.14.0(Aug 4, 2021)

    Highlights

    • Add the Transformer-in-Transformer backbone and pretrained checkpoints; refer to the paper for details.
    • Add a Chinese Colab tutorial.
    • Provide a Dockerfile to build the mmcls dev docker image.

    New Features

    • Add the Transformer-in-Transformer backbone and pretrained checkpoints (see the config sketch after this list). (#339)
    • Support MIM; welcome to use MIM to manage your mmcls project. (#376)
    • Add Dockerfile. (#365)
    • Add ResNeSt configs. (#332)
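
    A hypothetical model config using the new Transformer-in-Transformer backbone; the arch alias and the head width are assumptions based on the small variant:

    ```python
    # Hypothetical model config; arch alias and widths are assumptions.
    model = dict(
        type='ImageClassifier',
        backbone=dict(type='TNT', arch='s'),  # assumed small-variant alias
        neck=None,
        head=dict(
            type='LinearClsHead',
            num_classes=1000,
            in_channels=384,  # assumed TNT-small embedding width
            loss=dict(type='CrossEntropyLoss'),
        ),
    )
    ```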

    Improvements

    • Use the persistent_workers option, if available, to accelerate training. (#349)
    • Add Chinese ipynb tutorial. (#306)
    • Refactor unit tests. (#321)
    • Support testing mmdet inference with an mmcls backbone. (#343)
    • Use zero as default value of thrs in metrics. (#341)

    Bug Fixes

    • Fix ImageNet dataset annotation file parse bug. (#370)
    • Fix docstring typo and init bug in ShuffleNetV1. (#374)
    • Use local ATTENTION registry to avoid conflict with other repositories. (#376)
    • Fix Swin Transformer config bug. (#355)
    • Fix patch_cfg argument bug in SwinTransformer. (#368)
    • Fix duplicate init_weights call in ViT init function. (#373)
    • Fix broken _base_ link in a resnet config. (#361)
    • Fix missing VGG-19 model link. (#363)

    Contributors

    A total of 8 developers contributed to this release.

    @Ezra-Yu, @HIT-cwh, @Junjun2016, @LXXXXR, @mzr1996, @pvys, @wangruohui, @ZwwWayne

    Source code(tar.gz)
    Source code(zip)
  • v0.13.0(Jul 5, 2021)

    New Features

    • Support Swin-Transformer backbone and add training configs for Swin-Transformer on ImageNet. (#271)
    • Add pretrained model of RegNetX. (#269)
    • Support adding custom hooks in the config file (see the sketch after this list). (#305)
    • Improve and add Chinese translation of CONTRIBUTING.md and all tools tutorials. (#320)
    • Dump config before training. (#282)
    • Add TorchScript and TorchServe deployment tools. (#279, #284)
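
    A hypothetical example of the new custom-hook support, registering a hook straight from the config file; EMAHook (from MMCV) is used here purely as an example, and the values are illustrative:

    ```python
    # Hypothetical config fragment; EMAHook is just an example hook.
    custom_hooks = [
        dict(type='EMAHook', momentum=0.0002, priority='ABOVE_NORMAL'),
    ]
    ```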

    Improvements

    • Improve test tools and add some new tools. (#322)
    • Correct the MobileNetV3 backbone structure and add pretrained models. (#291)
    • Refactor PatchEmbed and HybridEmbed as independent components. (#330)
    • Refactor mixup and cutmix as Augments to support more functions. (#278)
    • Refactor weights initialization method. (#270, #318, #319)
    • Refactor LabelSmoothLoss to support multiple calculation formulas. (#285)

    Bug Fixes

    • Fix bug for CPU training. (#286)
    • Fix missing test data when num_imgs cannot be evenly divided by num_gpus. (#299)
    • Fix build compatibility with PyTorch v1.3-1.5. (#301)
    • Fix magnitude_std bug in RandAugment. (#309)
    • Fix bug when samples_per_gpu is 1. (#311)
    Source code(tar.gz)
    Source code(zip)
  • v0.12.0(Jun 3, 2021)

    New Features

    • Improve and add Chinese translation of data_pipeline.md and new_modules.md. (#265)
    • Build Chinese translation on readthedocs. (#267)
    • Add an argument efficientnet_style to RandomResizedCrop and CenterCrop (see the pipeline sketch after this list). (#268)
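
    A hypothetical test pipeline using the new flag, which switches cropping to the EfficientNet-style scheme; sizes are illustrative:

    ```python
    # Hypothetical test pipeline fragment; sizes are illustrative.
    test_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='CenterCrop', crop_size=224, efficientnet_style=True),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='Collect', keys=['img']),
    ]
    ```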

    Improvements

    • Only allow directory operations on rank 0 when testing. (#258)
    • Fix typo in base_head. (#274)
    • Update ResNeXt checkpoints. (#283)

    Bug Fixes

    • Add attribute data.test in MNIST configs. (#264)
    • Download CIFAR/MNIST dataset only on rank 0. (#273)
    • Fix MMCV version compatibility. (#276)
    • Fix CIFAR color channels bug and update checkpoints in model zoo. (#280)
    Source code(tar.gz)
    Source code(zip)
  • v0.11.1(May 21, 2021)

    New Features

    • Add dim argument for GlobalAveragePooling (see the config sketch after this list). (#236)
    • Add random noise to RandAugment magnitude. (#240)
    • Refine new_dataset.md and add Chinese translations of finetune.md and new_dataset.md. (#243)
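
    A hypothetical neck config using the new dim argument, which selects 1d/2d/3d pooling to match the backbone's output shape (2 remains the usual image case):

    ```python
    # Hypothetical neck config; dim=1 assumes (N, C, L)-shaped features.
    neck = dict(type='GlobalAveragePooling', dim=1)
    ```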

    Improvements

    • Refactor arguments passing for Heads. (#239)
    • Allow more flexible magnitude_range in RandAugment. (#249)
    • Inherit the MMCV registry so that, in the future, OpenMMLab repos like MMDet and MMSeg can directly use the backbones supported in MMCls. (#252)

    Bug Fixes

    • Fix typo in analyze_results.py. (#237)
    • Fix typo in unittests. (#238)
    • Check if specified tmpdir exists when testing to avoid deleting existing data. (#242; #258)
    • Add missing config files in MANIFEST.in. (#250; #255)
    • Use temporary directory under shared directory to collect results to avoid unavailability of temporary directory for multi-node testing. (#251)
    Source code(tar.gz)
    Source code(zip)
  • v0.11.0(May 1, 2021)

    New Features

    • Support the CutMix trick (see the config sketch after this list). (#198)
    • Add simplify option in pytorch2onnx.py. (#200)
    • Support random augmentation. (#201)
    • Add config and checkpoint for training ResNet on CIFAR-100. (#208)
    • Add tools/deployment/test.py as an ONNX runtime test tool. (#212)
    • Support ViT backbone and add training configs for ViT on ImageNet. (#214)
    • Add finetuning configs for ViT on ImageNet. (#217)
    • Add device option to support training on CPU. (#219)
    • Add Chinese README.md and some Chinese tutorials. (#221)
    • Add metafile.yml in configs to support interaction with Papers with Code (PWC) and MMCLI. (#225)
    • Upload configs and converted checkpoints for ViT fine-tuning on ImageNet. (#230)
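
    A rough, hypothetical sketch of enabling CutMix through the classifier's train_cfg as the interface looked in this era (the cutmix_prob key also appears in fix #220 below); all values are illustrative:

    ```python
    # Hypothetical classifier-config fragment; values are illustrative.
    model = dict(
        # ... backbone / neck / head as usual ...
        train_cfg=dict(
            cutmix=dict(alpha=1.0, num_classes=1000, cutmix_prob=1.0)),
    )
    ```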

    Improvements

    • Fix LabelSmoothLoss so that label smoothing and mixup could be enabled at the same time. (#203)
    • Add cal_acc option in ClsHead. (#206)
    • Check CLASSES in checkpoint to avoid unexpected key error. (#207)
    • Check mmcv version when importing mmcls to ensure compatibility. (#209)
    • Update CONTRIBUTING.md to align with that in MMCV. (#210)
    • Change tags to html comments in configs README.md. (#226)
    • Clean codes in ViT backbone. (#227)
    • Reformat pytorch2onnx.md tutorial. (#229)
    • Update setup.py to support MMCLI. (#232)

    Bug Fixes

    • Fix missing cutmix_prob in ViT configs. (#220)
    • Fix backend for resize in ResNeXt configs. (#222)
    Source code(tar.gz)
    Source code(zip)
  • v0.10.0(Apr 1, 2021)

    New Features

    • Add Rotate pipeline for data augmentation. (#167)
    • Add Invert pipeline for data augmentation. (#168)
    • Add Color pipeline for data augmentation. (#171)
    • Add Solarize and Posterize pipeline for data augmentation. (#172)
    • Support fp16 training (see the config sketch after this list). (#178)
    • Add tutorials for installation and basic usage of MMClassification. (#176)
    • Support AutoAugment, AutoContrast, Equalize, Contrast, Brightness and Sharpness pipelines for data augmentation. (#179)
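
    Mixed-precision training follows the usual OpenMMLab convention of a one-line config switch; the loss-scale value below is illustrative:

    ```python
    # Hypothetical config addition; the loss scale is illustrative.
    fp16 = dict(loss_scale=512.0)
    ```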

    Improvements

    • Support dynamic shape export to ONNX. (#175)
    • Release training configs and update the model zoo for fp16. (#184)
    • Use MMCV's EvalHook in MMClassification. (#182)

    Bug Fixes

    • Fix wrong naming in the VGG config. (#181)
    Source code(tar.gz)
    Source code(zip)
  • v0.9.0(Mar 1, 2021)

    New Features

    • Implement mixup and provide configs for training ResNet-50 with mixup. (#160)
    • Add Shear pipeline for data augmentation (see the pipeline sketch after this list). (#163)
    • Add Translate pipeline for data augmentation. (#165)
    • Add tools/onnx2tensorrt.py as a tool to create TensorRT engine from ONNX, run inference and verify outputs in Python. (#153)
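
    A hypothetical training pipeline using the new transforms; magnitudes, probabilities and directions are illustrative:

    ```python
    # Hypothetical pipeline fragment; values are illustrative.
    train_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='RandomResizedCrop', size=224),
        dict(type='Shear', magnitude=0.3, prob=0.5, direction='horizontal'),
        dict(type='Translate', magnitude=0.2, prob=0.5, direction='vertical'),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='ToTensor', keys=['gt_label']),
        dict(type='Collect', keys=['img', 'gt_label']),
    ]
    ```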

    Improvements

    • Add --eval-options in tools/test.py to support eval options override, matching the behavior of other open-mmlab projects. (#158)
    • Support showing and saving painted results in mmcls.apis.test and tools/test.py, matching the behavior of other open-mmlab projects. (#162)

    Bug Fixes

    • Fix configs for VGG, replace checkpoints converted from other repos with the ones trained by ourselves and upload the missing logs in the model zoo. (#161)
    Source code(tar.gz)
    Source code(zip)
  • v0.8.0(Feb 1, 2021)

    New Features

    • Add evaluation metrics: mAP, CP, CR, CF1, OP, OR, OF1 for multi-label task. (#123)
    • Add BCE loss for multi-label task. (#130)
    • Add focal loss for multi-label task. (#131)
    • Support PASCAL VOC 2007 dataset for multi-label task. (#134)
    • Add asymmetric loss for multi-label task. (#132)
    • Add analyze_results.py to select images for success/fail demonstration. (#142)
    • Support a new metric that calculates the total number of occurrences of each label. (#143)
    • Support class-wise evaluation results. (#143)
    • Add thresholds in eval_metrics. (#146)
    • Add heads and a baseline config for the multi-label task (see the config sketch after this list). (#145)
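
    A hypothetical head config combining the new multi-label pieces for PASCAL VOC 2007 (20 classes); the loss choice and all values are illustrative:

    ```python
    # Hypothetical multi-label head config; values are illustrative.
    head = dict(
        type='MultiLabelLinearClsHead',
        num_classes=20,
        in_channels=2048,
        loss=dict(type='AsymmetricLoss', gamma_pos=0.0, gamma_neg=4.0, clip=0.05),
    )
    ```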

    Improvements

    • Remove models with zero checkpoints and ignore duplicated papers when counting papers, to produce more accurate model statistics. (#135)
    • Add tags in README.md. (#137)
    • Fix optional issues in docstring. (#138)
    • Update stat.py to classify papers. (#139)
    • Fix mismatched columns in README.md. (#150)
    • Fix test.py to support more evaluation metrics. (#155)

    Bug Fixes

    • Fix bug in VGG weight_init. (#140)
    • Fix bug in 2 ResNet configs in which outdated heads were used. (#147)
    • Fix bug of misordered height and width in RandomCrop and RandomResizedCrop. (#151)
    • Fix missing meta_keys in Collect. (#149, #152)
    Source code(tar.gz)
    Source code(zip)