Fast, modular reference implementation and easy training of Semantic Segmentation algorithms in PyTorch.

Overview

TorchSeg

This project aims to provide a fast, modular reference implementation for semantic segmentation models in PyTorch.

demo image

Highlights

  • Modular Design: easily construct customized semantic segmentation models by combining different components.
  • Distributed Training: we use the multi-process parallel method, which is >60% faster than the multi-thread parallel method (nn.DataParallel).
  • Multi-GPU training and inference: support different manners of inference.
  • Pre-trained models: provides pre-trained backbones and implementations of different semantic segmentation models.

Prerequisites

  • PyTorch 1.0
    • pip3 install torch torchvision
  • Easydict
    • pip3 install easydict
  • Apex
    • install from https://github.com/NVIDIA/apex
  • Ninja
    • sudo apt-get install ninja-build
  • tqdm
    • pip3 install tqdm

Updates

v0.1.1 (05/14/2019)

  • Release the pre-trained models and all trained models
  • Add PSANet for ADE20K
  • Add support for CamVid, PASCAL-Context datasets
  • Support only the distributed training manner from this release on

Model Zoo

Pretrained Model

Supported Model

Performance and Benchmarks

SS: single-scale evaluation. MSF: multi-scale + flip evaluation.
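
For reference, multi-scale + flip evaluation averages class probabilities over several rescaled and mirrored copies of each input. The following is a minimal sketch of the idea in PyTorch; the scale list and the msf_predict helper are illustrative assumptions, not TorchSeg's exact evaluator.

import torch
import torch.nn.functional as F

def msf_predict(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5, 1.75)):
    # accumulate softmax probabilities over rescaled and mirrored inputs
    _, _, h, w = image.shape
    prob = 0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=True)
        for flip in (False, True):
            inp = torch.flip(scaled, dims=[3]) if flip else scaled
            out = model(inp)                     # (B, num_classes, h', w') logits
            if flip:
                out = torch.flip(out, dims=[3])  # undo the mirror
            prob = prob + F.interpolate(out.softmax(dim=1), size=(h, w),
                                        mode='bilinear', align_corners=True)
    return prob.argmax(dim=1)                    # final per-pixel labels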

PASCAL VOC 2012

Methods | Backbone | TrainSet | EvalSet | Mean IoU (ss) | Mean IoU (msf) | Model
--- | --- | --- | --- | --- | --- | ---
FCN-32s | R101_v1c | train_aug | val | 71.26 | - | -
DFN (paper) | R101_v1c | train_aug | val | 79.67 | 80.6* | -
DFN (ours) | R101_v1c | train_aug | val | 79.40 | 81.40 | GoogleDrive

80.6*: this result reported in the paper was further fine-tuned on the train set.

Cityscapes

Non-real-time Methods

Methods | Backbone | OHEM | TrainSet | EvalSet | Mean IoU (ss) | Mean IoU (msf) | Model
--- | --- | --- | --- | --- | --- | --- | ---
DFN (paper) | R101_v1c | | train_fine | val | 78.5 | 79.3 | -
DFN (ours) | R101_v1c | | train_fine | val | 79.09 | 80.41 | GoogleDrive
DFN (ours) | R101_v1c | | train_fine | val | 79.16 | 80.53 | GoogleDrive
BiSeNet (paper) | R101_v1c | | train_fine | val | - | 80.3 | -
BiSeNet (ours) | R101_v1c | | train_fine | val | 79.09 | 80.39 | GoogleDrive
BiSeNet (paper) | R18 | | train_fine | val | 76.21 | 78.57 | -
BiSeNet (ours) | R18 | | train_fine | val | 76.28 | 78.00 | GoogleDrive
BiSeNet (paper) | X39 | | train_fine | val | 70.1 | 72 | -
BiSeNet (ours)* | X39 | | train_fine | val | 70.32 | 72.06 | GoogleDrive

Real-time Methods

Methods | Backbone | OHEM | TrainSet | EvalSet | Mean IoU | Model
--- | --- | --- | --- | --- | --- | ---
BiSeNet (paper) | R18 | | train_fine | val | 74.8 | -
BiSeNet (ours) | R18 | | train_fine | val | 74.83 | GoogleDrive
BiSeNet (paper) | X39 | | train_fine | val | 69 | -
BiSeNet (ours)* | X39 | | train_fine | val | 68.51 | GoogleDrive

BiSeNet(ours)*: because we did not pre-train the Xception39 model on ImageNet in PyTorch, we trained this experiment from scratch. We will release the pre-trained Xception39 model in PyTorch and the corresponding experiment.

ADE

Methods | Backbone | TrainSet | EvalSet | Mean IoU (ss) | Accuracy (ss) | Model
--- | --- | --- | --- | --- | --- | ---
PSPNet (paper) | R50_v1c | train | val | 41.68 | 80.04 | -
PSPNet (ours) | R50_v1c | train | val | 41.65 | 79.74 | GoogleDrive
PSPNet (paper) | R101_v1c | train | val | 41.96 | 80.64 | -
PSPNet (ours) | R101_v1c | train | val | 42.89 | 80.55 | GoogleDrive
PSANet (paper) | R50_v1c | train | val | 41.92 | 80.17 | -
PSANet (ours)* | R50_v1c | train | val | 41.67 | 80.09 | GoogleDrive
PSANet (paper) | R101_v1c | train | val | 42.75 | 80.71 | -
PSANet (ours) | R101_v1c | train | val | 43.04 | 80.56 | GoogleDrive

PSANet(ours)*: the original PSANet in the paper constructs an over-parameterized attention map, while we only predict an attention map with the same size as the feature map. The performance is almost the same as the original.
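
For intuition, a same-size attention map can be predicted with an ordinary 1x1 convolution and applied as a weighted aggregation over all positions. The module below is only a hedged sketch of that idea; the class name, shapes, and the 1x1-conv predictor are our assumptions, not PSANet's or this repository's actual code.

import torch
import torch.nn as nn

class SameSizeAttention(nn.Module):
    """Predict an (H*W)-way attention distribution per position and use it
    to aggregate features, instead of an over-parameterized attention map."""
    def __init__(self, in_channels, height, width):
        super().__init__()
        self.hw = height * width
        self.predict = nn.Conv2d(in_channels, self.hw, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape                # h * w must equal self.hw
        attn = self.predict(x)              # (B, H*W, H, W)
        attn = attn.view(b, self.hw, h * w) # (B, target H*W, source H*W)
        attn = attn.softmax(dim=-1)         # normalize over source pixels
        feats = x.view(b, c, h * w)         # (B, C, H*W)
        out = torch.einsum('bts,bcs->bct', attn, feats)
        return out.view(b, c, h, w)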

To Do

  • offer comprehensive documentation
  • support more semantic segmentation models
    • Deeplab v3 / Deeplab v3+
    • DenseASPP
    • EncNet
    • OCNet

Training

  1. create the dataset config files: train.txt, val.txt, test.txt
    file structure: one image path and one ground-truth path per line, separated by a tab (see the example below)
    path-of-the-image	path-of-the-groundtruth
    
  2. modify config.py according to your requirements
  3. train a network (see Distributed Training below)
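
For example, a train.txt could look like this (the image and label paths are hypothetical; the two columns are tab-separated):

images/0001.png	annotations/0001.png
images/0002.png	annotations/0002.png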

Distributed Training

We use the official torch.distributed.launch to launch multi-GPU training. This utility from PyTorch spawns as many Python processes as the number of GPUs we want to use, and each Python process uses only a single GPU.

For each experiment, you can just run this script:

export NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py
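
Under the hood, each spawned process receives a --local_rank argument from the launcher and binds itself to one GPU. The following is a minimal sketch of that startup contract (TorchSeg's Engine wraps similar logic; the exact code here is an assumption):

import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)  # injected by torch.distributed.launch
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)   # pin this process to a single GPU
dist.init_process_group(backend='nccl')  # join the process group for gradient all-reduce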

Inference

In the evaluator, we have implemented multi-GPU inference based on multi-processing. In the inference phase, the evaluator spawns as many Python processes as the number of GPUs to use, and each process handles a subset of the whole evaluation dataset on a single GPU; the sketch below illustrates the idea.
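
A minimal sketch of this pattern, assuming hypothetical names (eval_worker, the device list, and the 500-sample set are illustrative, not TorchSeg's actual evaluator):

import torch.multiprocessing as mp

def eval_worker(device_id, shard):
    # build the model on GPU `device_id` and run single-GPU inference
    # over `shard`, collecting per-image results
    pass

if __name__ == '__main__':
    mp.set_start_method('spawn')  # required before using CUDA in child processes
    devices = [0, 1, 2, 3]        # GPUs to use
    samples = list(range(500))    # indices into the evaluation set
    # deal samples round-robin so every GPU gets a near-equal share
    shards = [samples[i::len(devices)] for i in range(len(devices))]
    workers = [mp.Process(target=eval_worker, args=(d, s))
               for d, s in zip(devices, shards)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()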

  1. evaluate a trained network on the validation set:
    python3 eval.py
  2. input arguments:
    usage: -e epoch_idx -d device_idx [--verbose] [--show_image] [--save_path Pred_Save_Path]
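
For example, to evaluate the epoch-300 checkpoint on GPUs 0-3 and save the predictions (the epoch index, device range, and output directory here are illustrative):

python3 eval.py -e 300 -d 0-3 --save_path ./results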

Disclaimer

This project is under active development, so things that currently work might break in a future release. However, feel free to open an issue if you get stuck anywhere.

Citation

The following are BibTeX references. The BibTeX entries require the url LaTeX package.

Please consider citing this project in your publications if it helps your research.

@misc{torchseg2019,
  author =       {Yu, Changqian},
  title =        {TorchSeg},
  howpublished = {\url{https://github.com/ycszen/TorchSeg}},
  year =         {2019}
}

Please consider citing the DFN in your publications if it helps your research.

@inproceedings{yu2018dfn,
  title={Learning a Discriminative Feature Network for Semantic Segmentation},
  author={Yu, Changqian and Wang, Jingbo and Peng, Chao and Gao, Changxin and Yu, Gang and Sang, Nong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Please consider citing the BiSeNet in your publications if it helps your research.

@inproceedings{yu2018bisenet,
  title={Bisenet: Bilateral segmentation network for real-time semantic segmentation},
  author={Yu, Changqian and Wang, Jingbo and Peng, Chao and Gao, Changxin and Yu, Gang and Sang, Nong},
  booktitle={European Conference on Computer Vision},
  pages={334--349},
  year={2018},
  organization={Springer}
}

Why this name, Furnace?

Furnace refers to the alchemical furnace. We are all alchemists, so I hope everyone can have a good alchemical furnace to practice alchemy. I hope you can become an excellent alchemist.

Comments
  • The problem about FPS

    Hi @ycszen, the paper's abstract reports that BiSeNet runs at 105 FPS on a 2048x1024 input image.

    But I only get 2 FPS for BiSeNet (Xception39) and 9.5 FPS for BiSeNet (ResNet-18) on a Titan Xp.

    opened by MrLinNing 10
  • Training blocked

    When I train my network, the program hangs after the first epoch. I don't know why this happens.

    Epoch0/800 Iter20/20: lr=2.00e-02 loss=2.75: [00:44<00:00, 1.32it/s]
    Epoch0/800 Iter20/20: lr=2.00e-02 loss=2.75: [00:44<00:00, 1.31it/s]
    [00:00<?,?it/s]
    
    opened by charlesCXK 9
  • RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/

    hi, @ycszen

    Sorry to disturb you again. After some struggle with the code, I got stuck at the Criterion part. It gave: RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu:128

    I added CUDA_LAUNCH_BLOCKING=1 before running the script to get a more precise message:

    Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:99: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [11,0,0], thread: [766,0,0]
    Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:99: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [11,0,0], thread: [767,0,0]
    Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:99: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [11,0,0], thread: [800,0,0]
    THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu line=128 error=59 : device-side assert triggered

    Traceback (most recent call last):
        loss = model(imgs, gts, cgts)
      File "/home/chenp/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/chenp/workspace/git/TorchSeg/model/dfn/voc.dfn.R101_v1c/network.py", line 137, in forward
        loss0 = self.criterion(pred_out[0], label)
      File "/home/chenp/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/chenp/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 904, in forward
        ignore_index=self.ignore_index, reduction=self.reduction)
      File "/home/chenp/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/nn/functional.py", line 1970, in cross_entropy
        return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
      File "/home/chenp/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/nn/functional.py", line 1792, in nll_loss
        ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
    RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu:128

    Do you have any experience or advice on this?

    opened by blueardour 8
  • resnet50_v1c weight not match

    Thanks for your great work! I tried to run PSPNet according to your instructions. I downloaded 'resnet50_v1c' from gluon and converted it to a PyTorch model by running python gluon2pytorch.py -m resnet50_v1c. But when I tried to run PSPNet with python train.py -d 0-7, it showed that the weights of the checkpoint do not match those of the current model. The log is as follows:

    RuntimeError: Error(s) in loading state_dict for ResNet:
        size mismatch for conv1.0.weight: copying a param with shape torch.Size([32, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3]).
        size mismatch for conv1.1.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for conv1.1.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for conv1.1.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for conv1.1.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for conv1.3.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for conv1.4.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for conv1.4.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for conv1.4.running_mean: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for conv1.4.running_var: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for conv1.6.weight: copying a param with shape torch.Size([64, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
        size mismatch for bn1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for bn1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for bn1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for bn1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for layer1.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
        size mismatch for layer1.0.downsample.0.weight: copying a param with shape torch.Size([256, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1]).

    Could you help me find out where it goes wrong? Thanks very much!

    opened by wangziyukobe 6
  • WRN Missing key(s) in state_dict

    @ycszen When I run train.py from cityscapes.bisenet.R18.speed, the following warning appears:

    WRN Missing key(s) in state_dict: layer3.0.bn1.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer2.0.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer4.0.bn2.num_batches_tracked, bn1.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer2.1.bn1.num_batches_tracked
    

    How should I deal with this problem?

    opened by bjchen666 5
  • software dependence error

    hi, @ycszen

    Sorry to disturb you. This project is so attractive that I want to reproduce the results with it. However, when I tried to run train.py in TorchSeg/model/dfn/voc.dfn.R101_v1c, it gave several warnings and errors. They were software dependency issues, so I wonder if you could share the software versions of your environment.

    Mine is: CentOS 7.5 + Python 3.6.8 + PyTorch 1.0 + CUDA 9.0 + gcc 4.9.4.

    Error message:

    /home/cat/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/utils/cpp_extension.py:166: UserWarning:

                                 !! WARNING !!

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    Your compiler (c++) is not compatible with the compiler Pytorch was built with
    for this platform, which is g++ on linux. Please use g++ to compile your
    extension. Alternatively, you may compile PyTorch from source using c++, and
    then you can also use c++ to compile your extension.

    See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
    with compiling PyTorch from source.
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

      platform=sys.platform))

    Traceback (most recent call last):
      File "train.py", line 24, in <module>
        from apex.parallel import DistributedDataParallel, SyncBatchNorm
    ModuleNotFoundError: No module named 'apex'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "train.py", line 27, in <module>
        "Please install apex from https://www.github.com/nvidia/apex .")
    ImportError: Please install apex from https://www.github.com/nvidia/apex .

    When I tried to install apex with pip install apex, it gave:

    In file included from /home/chenp/.pyenv/versions/3.6.8/include/python3.6m/Python.h:39:0,
                     from cryptacular/bcrypt/_bcrypt.c:26:
    crypt_blowfish-1.2/crypt.h:17:23: fatal error: gnu-crypt.h: No such file or directory
     #include <gnu-crypt.h>
    compilation terminated.
    error: command 'gcc' failed with exit status 1

    opened by blueardour 5
  • How to train on a single gpu?

    I modified train.py as #11 said, but I found there is no code model = DataParallelModel(model, device_ids=engine.devices), and this error occurred:

    Traceback (most recent call last):
      File "train.py", line 33, in <module>
        with Engine(custom_parser=parser) as engine:
      File "/home/rose/projects/TorchSeg/furnace/engine/engine.py", line 69, in __init__
        self.devices = parse_devices(self.args.devices)
      File "/home/rose/projects/TorchSeg/furnace/utils/pyt_utils.py", line 99, in parse_devices
        device = int(d)
    ValueError: invalid literal for int() with base 10: ''

    Then I used the unrevised train.py and changed NGPUS=1. It raised the same error. Could someone tell me why?

    opened by DRosemei 4
  •  module 'torch.distributed' has no attribute 'ReduceOp'

    Hi everyone,

    I installed all the requirements, and when I run python eval.py I get this error: module 'torch.distributed' has no attribute 'ReduceOp'. My torch version is 1.1.0. Thanks.

    opened by sctrueew 4
  • Why do not use the predict value of Border network to help the segment result in DFN when in evaluation step?

    Hi, thanks for your open-source code. I have a question: why, in the evaluation step, don't you use the prediction of the Border network to help the segmentation result in DFN? And if I want to use the Border network's prediction to improve the segmentation result, what should I do? I can't figure out a good strategy for this. I'm looking forward to your reply. Thanks.

    opened by pdoublerainbow 3
  • very low miou using your uploaded model params

    I downloaded https://drive.google.com/file/d/1hFF-J9qoXlbVRRUr29aWeQpL4Lwn45mU/view, put it in the corresponding folder, and renamed it epoch-last.pth. The eval result is very bad, as below:

    08 11:40:17 using devices 0
    08 11:40:17 Load Model: /home/mengzhibin/learn/TorchSeg/log/cityscapes.bisenet.R18/snapshot/epoch-last.pth
    08 11:40:20 Load model, Time usage: IO: 2.575568914413452, initialize parameters: 0.02306199073791504
    08 11:40:20 GPU 0 handle 500 data.
    08 11:40:20 Load Model on Device 0: 0.00s
    100%|...| 500/500 [05:37<00:00, 1.47it/s]
     1 road          0.000%
     2 sidewalk      1.077%
     3 building      1.571%
     4 wall          0.472%
     5 fence         2.899%
     6 pole          0.482%
     7 traffic light 0.000%
     8 traffic sign  0.007%
     9 vegetation    0.244%
    10 terrain       0.913%
    11 sky           0.000%
    12 person        0.175%
    13 rider         0.050%
    14 car           0.279%
    15 truck         0.000%
    16 bus           0.060%
    17 train         0.000%
    18 motorcycle    0.052%
    19 bicycle       0.142%
    ----------------------------
    mean_IU          0.443%
    mean_IU_no_back  0.468%
    mean_pixel_ACC   0.855%
    08 11:45:58 Evaluation Elapsed Time: 337.67s

    Is there something wrong?

    opened by mengzhibin 3
  • FileNotFoundError: [Errno 2] No such file or directory: '/home/bixin/a_project/TorchSeg/log/cityscapes.bisenet.R18.speed/val_2019_06_18_21_54_58.log'

    I just want to test this code. When I run eval.py, how can I solve this problem? I have already created a /log directory, but it still doesn't work. Can someone help me, please?

    opened by EchoAmor 3
  • Training problem

    Hi @ycszen, during training I keep getting an error that the page file is too small and the GPU memory is exhausted. But I have a single card with 6 GB of memory, I am training the res18 network, and I have set batch_size to 1, yet the error still occurs. Where could the problem be? Thanks!

    OSError: [WinError 1455] The paging file is too small for this operation to complete.

    opened by miscedence12 0
  • ModuleNotFoundError: No module named 'utils.pyt_utils'

    Trying to run bash script.sh, I got the following error:

    Traceback (most recent call last):
      File "train.py", line 17, in <module>
        from dataloader import get_train_loader
      File "/media/D/users/Idan/Suha_Maryam/TorchSemiSeg-main/exp.voc/voc8.res50v3+.CPS/dataloader.py", line 8, in <module>
        from utils.img_utils import generate_random_crop_pos, random_crop_pad_to_shape
    ModuleNotFoundError: No module named 'utils.img_utils'
    Traceback (most recent call last):
      File "eval.py", line 13, in <module>
        from utils.pyt_utils import ensure_dir, link_file, load_model, parse_devices
    ModuleNotFoundError: No module named 'utils.pyt_utils'

    Adding PYTHONPATH="/TorchSemiSeg-main/furnace/" to script.sh (before importing utils) did not solve the problem.

    What could be the problem? Thanks, Moran

    opened by artzimy 0
  • Difference between realtime res18 and non-realtime res18 model

    Hi @ycszen, thank you for providing this great repo. I have a question about the difference between the realtime res18 and non-realtime res18 models. As shown in the readme, realtime res18 has 74.8 mIoU, while non-realtime res18 has 76.2 mIoU. I didn't see any difference between these two models, so why is one realtime and the other not?

    I searched the issues and found your answer to this one (https://github.com/ycszen/TorchSeg/issues/24), saying realtime res18 does whole-image evaluation while non-realtime res18 does sliding evaluation. This makes sense at first. But when I downloaded the two models, I found their sizes differ: one is 105M, the other is 108M. So the models are actually different; it is not just the evaluation strategy.

    So I'm confused here. Could you help to clarify the difference between these two models? Thank you very much.

    opened by bryanyzhu 1
  • Training Parameter - Large dataset

    I have one question. I trained a model for one class (0 & 1) with 2000 images of dimension 512x512, and the accuracy of the output is reasonably good. After that I trained a model with 13600 images, and its accuracy is very bad compared with the model trained on 2000 images. I have trained with different learning rates, steps, and epochs, and still couldn't figure out the problem.

    Any recommendations for setting training configuration?

    opened by sarathsrk 0
Releases (v0.1.1)
  • v0.1.1(May 15, 2019)

    Highlights

    • Release the pre-trained models and all trained models
    • Add PSANet for ADE20K
    • Add support for CamVid, PASCAL-Context datasets
    • Support only the distributed training manner and adjust the relevant settings
    • Fix bugs
    Source code(tar.gz)
    Source code(zip)
Owner
ycszen