Datasets, Transforms and Models specific to Computer Vision


Installation

  • First install the nightly version of OneFlow
python3 -m pip install oneflow -f https://staging.oneflow.info/branch/master/cu102
  • Then install the latest stable release of flowvision
pip install flowvision==0.0.4
  • Or install the nightly release of flowvision
pip install -i https://test.pypi.org/simple/ flowvision==0.0.4
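  • Verify the installation (optional)

A quick sanity check after installing; this is a minimal sketch, assuming both packages expose a standard __version__ attribute.

# sanity check: import both packages and print their versions
import oneflow as flow
import flowvision

print("oneflow:", flow.__version__)
print("flowvision:", flowvision.__version__)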

Supported Models

All supported models can be found on our model summary page here.

Usage

Quick Start
  • List supported models
from flowvision import ModelCreator
ModelCreator.model_table()
  • Search supported models by wildcard
from flowvision import ModelCreator
ModelCreator.model_table("*vit*", pretrained=True)
ModelCreator.model_table("*vit*", pretrained=False)
ModelCreator.model_table('alexnet')
  • Create a model with ModelCreator
from flowvision import ModelCreator
model = ModelCreator.create_model('alexnet', pretrained=True)
ModelCreator
  • Create a model in a simple way
from flowvision.models import ModelCreator
model = ModelCreator.create_model('alexnet', pretrained=True)

The pretrained weights will be saved to ./checkpoints.
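A minimal end-to-end sketch of the call above: create the pretrained model, switch it to eval mode, and run a dummy forward pass. The 1x3x224x224 input shape is an assumption for alexnet.

import oneflow as flow
from flowvision.models import ModelCreator

model = ModelCreator.create_model('alexnet', pretrained=True)
model.eval()

x = flow.randn(1, 3, 224, 224)   # dummy batch with a single RGB image
with flow.no_grad():
    logits = model(x)            # (1, 1000) class scores for ImageNet models
print(logits.shape)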

  • Supported model table
from flowvision.models import ModelCreator
ModelCreator.model_table()
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ alexnet      │ true       │
│ vit_b_16_224 │ false      │
│ vit_b_16_384 │ true       │
│ vit_b_32_224 │ false      │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘

Show all supported models in table form.

  • List models with pretrained weights
from flowvision.models import ModelCreator
ModelCreator.model_table(pretrained=True)
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ alexnet      │ true       │
│ vit_b_16_384 │ true       │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
  • Search for models by wildcard
from flowvision.models import ModelCreator
ModelCreator.model_table('vit*')
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ vit_b_16_224 │ false      │
│ vit_b_16_384 │ true       │
│ vit_b_32_224 │ false      │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
  • Search for models with pretrained weights by wildcard
from flowvision.models import ModelCreator
ModelCreator.model_table('vit*', pretrained=True)
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ vit_b_16_384 │ true       │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
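Beyond model creation, flowvision also ships datasets and transforms. The sketch below assumes they mirror the torchvision-style API (transforms.Compose, datasets.CIFAR10, and oneflow's DataLoader are assumptions here), so treat it as a starting point rather than exact API.

import oneflow as flow
from flowvision import datasets, transforms
from flowvision.models import ModelCreator

# torchvision-style preprocessing pipeline (names assumed)
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# CIFAR-10 is used here only because it downloads automatically
dataset = datasets.CIFAR10(root='./data', train=False, download=True,
                           transform=transform)
loader = flow.utils.data.DataLoader(dataset, batch_size=8)

model = ModelCreator.create_model('alexnet', pretrained=True).eval()
images, labels = next(iter(loader))
with flow.no_grad():
    preds = model(images).argmax(dim=1)   # predicted class indices
print(preds.numpy(), labels.numpy())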

Model Zoo

We have conducted all tests under the same settings; please refer to the model page here for more details.

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Comments
  • Support Poolformer

    • [x] build poolformer model
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison (the OneFlow version is too slow; to be fixed)
    New Features Priority: 0 
    opened by thinksoso 16
  • delete flowvision.models._util

    1. Under flowvision.models there are both _utils.py and utils.py.
    2. The IntermediateLayerGetter method is duplicated in flowvision.models._utils.py and flowvision.models.segmentation.seg_utils.py.

    So delete flowvision.models._utils.py and, for now, reference flowvision.models.segmentation.seg_utils.py instead.

    Priority: 1 Improvements 
    opened by kaijieshi7 9
  • pickle module: EOFError: Ran out of input

    When I tried to use the vit_tiny_patch16_224 model from flowvision, it raised EOFError: Ran out of input. The environment is the OneFlow training platform with a 3090 GPU: oneflow-0.7.0 + torch-1.8.1 + cu11.2 + cudnn8.

    opened by WanShaw 8
  • Support UniFormer

    • [x] build uniformer model
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo small_plus
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison
    New Features 
    opened by thinksoso 6
  • add LeViT

    • [x] build model
    • [x] update __init__.py in models
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update readme
    • [x] update changelog
    • [x] pytorch speed comparison
    opened by kaijieshi7 5
  • Error when extracting the pretrained weight archive

    When using a model from models, e.g. model = vgg11(pretrained=True), the zip weight file downloads successfully but extraction fails partway through, leaving the parameter files incomplete. If I extract the downloaded zip manually, everything works fine. Multiple models show the same problem (a manual-extraction sketch follows the traceback below).

    Traceback (most recent call last):
      File "temp.py", line 77, in <module>
        model = vgg11(pretrained=True)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/vgg.py", line 182, in vgg11
        return _vgg("vgg11", "A", False, pretrained, progress, **kwargs)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/vgg.py", line 156, in _vgg
        state_dict = load_state_dict_from_url(model_urls[arch], progress=progress)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/utils.py", line 146, in load_state_dict_from_url
        return _legacy_zip_load(cached_file, model_dir, map_location, delete_file)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/utils.py", line 78, in _legacy_zip_load
        f.extractall(model_dir)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1636, in extractall
        self._extract_member(zipinfo, path, pwd)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1691, in _extract_member
        shutil.copyfileobj(source, target)
      File "/usr/local/miniconda3/lib/python3.7/shutil.py", line 79, in copyfileobj
        buf = fsrc.read(length)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 930, in read
        data = self._read1(n)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1006, in _read1
        data = self._decompressor.decompress(data, n)
    zlib.error: Error -2 while decompressing data: inconsistent stream state
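
    A hedged sketch of the manual workaround mentioned above: extract the downloaded zip yourself and load the weights from the extracted directory. The paths below are assumptions and will differ depending on where the archive was cached.

    # manual workaround: unzip the cached archive and load the extracted weights
    import zipfile
    import oneflow as flow
    from flowvision.models import vgg11

    zip_path = './checkpoints/vgg11.zip'   # wherever the download landed (assumed path)
    out_dir = './checkpoints/vgg11'

    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)             # same call the library makes, run manually

    model = vgg11(pretrained=False)
    # oneflow stores checkpoints as a directory; adjust the path to the extracted folder
    model.load_state_dict(flow.load(out_dir))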
    
    opened by Alive1024 5
  • module 'flowvision.models' has no attribute 'face_recognition'

    Hello, I need a way to create an iresnet model. I saw in the documentation that flowvision has an iresnet model, but when I import it and call resnest50 = flowvision.models.face_recognition.iresnest50(pretrained=False, progress=True), Python says module 'flowvision.models' has no attribute 'face_recognition'. What could be the problem?

    good first issue Bug Fixes 
    opened by PhilippShemetov 4
  • add model: regionvit

    • [x] build model (the F.unfold operator is not supported: https://github.com/Oneflow-Inc/oneflow/issues/3785)
    • [x] update __init__.py in models
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison
    New Features 
    opened by kaijieshi7 4
  • Add speed test script

    How to run the script:

    cd ci/check
    bash run_speed_test.sh
    

    The results are written to the result file in the current directory.

    Problems found so far with the speed test script:

    These models crash when run with import torch as flow:

    • vit
    • conv_mixer
    • crossformer
    • cswin
    • mlp_mixer
    • pvt
    • res_mlp
    • vgg

    These also raise errors on their own when the input is 224x224:

    • efficientnet
    • res2net
    Priority: 0 Improvements Bug Fixes 
    opened by Ldpe2G 4
  • add useful model utils

    TODO

    Model related

    • [x] freeze_bn
    • [ ] unfreeze_bn
    • [x] ActivationHook
    • [ ] freeze_unfreeze_fn

    Others

    • [x] random seed

    Test

    • [x] test freeze_bn
    • [ ] test activation_hook
    New Features Priority: 2 
    opened by rentainhe 4
  • bug: module 'oneflow.nn' has no attribute 'ReLU'

    oneflow/nn/__init__.py

    from oneflow.python.ops.math_ops import fused_scale_tril
    from oneflow.python.ops.math_ops import fused_scale_tril_softmax_dropout
    from oneflow.python.ops.math_ops import relu
    from oneflow.python.ops.math_ops import tril

    Should it be imported as ReLU? Or maybe my OneFlow install is wrong... flowvision-0.1.0, oneflow==0.7.0+cu102

    bug 
    opened by zhanggj821 3
  • The flow.div operator is not aligned with torch.div

    import oneflow as flow
    import torch
    import numpy as np
    
    a = np.random.randn(3,3).astype(np.float32)
    
    b = 2
    
    torch_a = torch.from_numpy(a)
    flow_a = flow.from_numpy(a)
    
    print(torch.div(torch_a,b,rounding_mode='floor'))
    print(flow.div(flow_a,b).floor())
    print(flow.div(flow_a,b,rounding_mode='floor'))
    
    opened by triple-Mu 0
  • ResNet-50 training

    Following the existing project under vision, reproduce ResNet-50 training and align its accuracy.

    References

    Main goals

    • [ ] 2022.05.11 - 2022.05.12: Get familiar with the classification training code under vision, configure the dataset, and get it running end to end.
    • [ ] 2022.05.12 - 2022.05.20: Reproduce the ResNet-50 training code against timm and PyTorch, align the training settings, and test multi-GPU training.
    • [ ] 2022.05.21 - 2022.05.27: Compare and tune the accuracy gaps until the accuracy is reproduced, then replace the ported weights with the OneFlow-trained version.

    Project owner: 林松; expected completion: 2022.05.27

    Related PRs

    The corresponding PRs are listed below; because one issue may map to multiple PRs, a table is used.

    | PR | Author | Reviewer | Date |
    | --- | --- | --- | --- |
    | Initial code upload | 林松 | zzzzzzz | 20220510 |

    opened by triple-Mu 0
  • Vision validity verification - complete the training projects under Vision

    Vision already has a reference project that ports the Swin-T training code for training models inside Vision, but the accuracy of most models in vision cannot yet be reproduced. This issue therefore opens a training project: reproduce the accuracy of the models implemented in vision, and gradually replace the ported weights with weights trained by OneFlow itself. This is a preliminary plan and will need 2-3 interns.

    Reference projects:

    • https://github.com/rwightman/pytorch-image-models
    • https://github.com/microsoft/Swin-Transformer

    Training tasks, and the first batch of models whose accuracy needs to be reproduced:

    • Complete this project under Vision: https://github.com/Oneflow-Inc/vision/tree/main/projects/classification, and get familiar with how it is used (basically the same as Swin-T).
    • Here are the models whose accuracy needs to be reproduced in the first stage, together with the related papers:

    | Model | Paper | Assignee | PR |
    |:----:|:----:|:----:|:----:|
    | ResNet50 | ResNet strikes back: An improved training procedure in timm | 林松 | |
    | DeiT | Training data-efficient image transformers & distillation through attention | | |
    | Swin-Transformer | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows | 林德铝 | |
    | DeiT III | DeiT III: Revenge of ViT | | |

    • Hardware required: an 8-GPU V100 machine; being able to fit a per-GPU batch size of 256 is enough.
    opened by rentainhe 0
Releases(v0.1.0)
  • v0.1.0(Feb 17, 2022)

    Flowvision V0.1.0 Stable Release

    New Features

    • Support trunc_normal_ in flowvision.layers.weight_init #92
    • Support DeiT model #115
    • Support PolyLRScheduler and TanhLRScheduler in flowvision.scheduler #85
    • Add resmlp_12_224_dino model and pretrained weight #128
    • Support ConvNeXt model #93
    • Add ReXNet weights #132

    Bug Fixes

    • Fix F.normalize usage in SSD #116
    • Fix bug in EfficientNet and Res2Net #122
    • Fix wrong pretrained weight usage in vit_small_patch32_384 and res2net50_48w_2s #128

    Improvements

    • Refactor trunc_normal_ and linspace usage in Swin-T, Cross-Former, PVT and CSWin models #100
    • Refactor Vision Transformer model #115
    • Refine flowvision.models.ModelCreator to support the ModelCreator.model_list func #123
    • Refactor README #124
    • Refine load_state_dict_from_url in flowvision.models.utils to support downloading pretrained weights to cache dir ~/.oneflow/flowvision_cache #127
    • Rebuild a cleaner model zoo and test all the models with pretrained weights released in flowvision #128

    Docs Update

    • Update Vision Transformer docs #115
    • Add Getting Started docs #124
    • Add resmlp_12_224_dino docs #128
    • Fix VGG docs bug #128
    • Add ConvNeXt docs #93

    Contributors

    A total of 5 developers contributed to this release. Thanks @rentainhe, @simonJJJ, @kaijieshi7, @lixiang007666, @Ldpe2G

Owner
OneFlow