YOLOX Win10 Project

Overview

Introduction

This is a project for training YOLOX on Windows. Compared with the official project, it adds the following adaptations and fixes:

1. Fixed Windows path problems such as import yolox failing and No such file or directory: 'xxx.xml' errors

2. Mitigated GPU memory problems such as CUDA out of memory

3. Added eval.txt, which reports the AP at each IoU threshold from 0.5 to 0.95 as well as mAP50 and mAP50:95

Benchmark

| Model | size | mAP val 0.5:0.95 | mAP test 0.5:0.95 | Speed V100 (ms) | Params (M) | FLOPs (G) | weights |
|-------|------|------------------|-------------------|-----------------|------------|-----------|---------|
| YOLOX-s | 640 | 40.5 | 40.5 | 9.8 | 9.0 | 26.8 | github |
| YOLOX-m | 640 | 46.9 | 47.2 | 12.3 | 25.3 | 73.8 | github |
| YOLOX-l | 640 | 49.7 | 50.1 | 14.5 | 54.2 | 155.6 | github |
| YOLOX-x | 640 | 51.1 | 51.5 | 17.3 | 99.1 | 281.9 | github |
| YOLOX-Darknet53 | 640 | 47.7 | 48.0 | 11.1 | 63.7 | 185.3 | github |

Training on custom data

1. Prepare the dataset

Take the VOC dataset as an example. The data directory is datasets/VOCdevkit/VOC2021/ (changing the year is not recommended; if you do change it, update the year in yolox_voc_s.py accordingly). This folder contains three subfolders: Annotations, JPEGImages, and ImageSets. Note in particular that a Main folder must be created under ImageSets. Running dataset_cls.py (switch to the datasets directory first; the train/test split ratio can be adjusted in the script) automatically generates the training files trainval.txt and test.txt.
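For reference, a minimal sketch of what such a split script might look like (this is not the actual dataset_cls.py; the 90/10 split ratio and the VOC2021 path are assumptions):

import os
import random

# Assumed layout: run from the datasets/ directory, annotations under VOC2021/Annotations
voc_root = os.path.join("VOCdevkit", "VOC2021")
ann_dir = os.path.join(voc_root, "Annotations")
main_dir = os.path.join(voc_root, "ImageSets", "Main")
os.makedirs(main_dir, exist_ok=True)

# Collect image IDs from the annotation files and shuffle them
ids = [f[:-4] for f in os.listdir(ann_dir) if f.endswith(".xml")]
random.shuffle(ids)

# Assumed 90/10 trainval/test split; adjust the ratio as needed
split = int(len(ids) * 0.9)
with open(os.path.join(main_dir, "trainval.txt"), "w") as f:
    f.write("\n".join(ids[:split]))
with open(os.path.join(main_dir, "test.txt"), "w") as f:
    f.write("\n".join(ids[split:]))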

2. Modify the configuration file

Edit exps/example/yolox_voc/yolox_voc_s.py: set self.num_classes and, optionally, any other configuration variables.

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.num_classes = 42         # change this to the number of classes in your dataset
        self.depth = 0.33
        self.width = 0.50
        self.warmup_epochs = 1

This Exp class inherits from MyExp and can override MyExp's attributes (so these values take precedence). Ctrl+click MyExp to jump to its definition:

class Exp(BaseExp):
    def __init__(self):
        super().__init__()

        # ---------------- model config ---------------- #
        self.num_classes = 80  # already overridden in yolox_voc_s.py, no need to change it here
        self.depth = 1.00
        self.width = 1.00
        self.act = 'silu'

        # ---------------- dataloader config ---------------- #
        # set to 1 here (upstream suggests 4 for shorter dataloader init time)
        self.data_num_workers = 1
        self.input_size = (640, 640)  # (height, width)
        # Actual multiscale ranges: [640-5*32, 640+5*32].
        # To disable multiscale training, set the
        # self.multiscale_range to 0.
        self.multiscale_range = 5  # number of 32-px multiscale steps on each side of the base size
        # You can uncomment this line to specify a multiscale range
        # self.random_size = (14, 26)
        self.data_dir = None
        self.train_ann = "instances_train2017.json"
        self.val_ann = "instances_val2017.json"

        # --------------- transform config ----------------- #
        self.mosaic_prob = 1.0   # augmentation probabilities; adjust as needed
        self.mixup_prob = 1.0
        self.hsv_prob = 1.0
        self.flip_prob = 0.5
        self.degrees = 10.0
        self.translate = 0.1
        self.mosaic_scale = (0.1, 2)
        self.mixup_scale = (0.5, 1.5)
        self.shear = 2.0
        self.enable_mixup = True

        # --------------  training config --------------------- #
        self.warmup_epochs = 5
        self.max_epoch = 100  # total number of training epochs
        self.warmup_lr = 0
        self.basic_lr_per_img = 0.01 / 64.0
        self.scheduler = "yoloxwarmcos"
        self.no_aug_epochs = 15 # number of final epochs trained without augmentation
        self.min_lr_ratio = 0.05
        self.ema = True

        self.weight_decay = 5e-4
        self.momentum = 0.9
        self.print_interval = 10 # print training info every 10 iterations
        self.eval_interval = 1 # evaluate (and save a checkpoint) every epoch
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

        # -----------------  testing config ------------------ #
        self.test_size = (640, 640)
        self.test_conf = 0.01
        self.nmsthre = 0.65

Any of the attributes above can be adjusted; the key ones are input_size, max_epoch, and eval_interval.
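For example, these can be overridden directly in the Exp subclass in yolox_voc_s.py rather than editing the base class (the values below are purely illustrative):

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.num_classes = 42
        self.depth = 0.33
        self.width = 0.50
        self.warmup_epochs = 1
        # illustrative overrides of the key variables mentioned above
        self.input_size = (416, 416)   # smaller input to reduce GPU memory use
        self.max_epoch = 50            # shorter training run
        self.eval_interval = 5         # evaluate every 5 epochs instead of every epoch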

3. Start training

Run the following command to start training; -c specifies the pretrained weights to load:

python tools/train.py  -c /path/to/yolox_s.pth

You can also adjust other arguments, for example:

python tools/train.py  -d 1 -b 8 --fp16 -c /path/to/yolox_s.pth

-d sets the number of GPUs, -b sets the batch size, --fp16 enables half-precision (mixed-precision) training, and -c loads pretrained weights. If GPU memory is limited, use the -o flag with caution, as it occupies noticeably more memory.
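Note that the batch size passed with -b also determines the learning rate: in YOLOX-style Exp configs the actual learning rate is derived from basic_lr_per_img, roughly as sketched below (a simplification assuming the standard YOLOX convention):

# lr scales linearly with batch size, so changing -b usually needs no manual lr retuning
basic_lr_per_img = 0.01 / 64.0   # value from the config shown above
batch_size = 8                   # the value passed with -b
lr = basic_lr_per_img * batch_size
print(f"initial learning rate: {lr:.6f}")   # 0.001250 for -b 8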

If training was interrupted and you want to resume from the last checkpoint, run:

python tools/train.py --resume

Evaluation

Run the following command to evaluate the best (highest-accuracy) checkpoint by default. After evaluation, YOLOX_outputs/yolox_voc_s/eval.txt lists the AP at each IoU threshold from 0.5 to 0.95, with mAP50 and mAP50:95 at the end of the file.

python tools/eval.py

To set other arguments, run the following; they have the same meaning as in training (the stacked names are alternative values for -n):

python tools/eval.py -n  yolox-s -c yolox_s.pth -b 8 -d 1 --conf 0.001 
                         yolox-m
                         yolox-l
                         yolox-x
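If you want to read the final metrics out of eval.txt programmatically, a minimal sketch is shown below (it assumes the file contains standard COCO-style summary lines; adjust the pattern if the actual format differs):

import re

# Hypothetical helper: print every "Average Precision ... = 0.xxx" line from eval.txt
with open("YOLOX_outputs/yolox_voc_s/eval.txt", encoding="utf-8") as f:
    text = f.read()

# COCO-style summary lines look like:
# Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.405
for line in re.findall(r"Average Precision.*?=\s*[\d.]+", text):
    print(line)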

Reference

https://github.com/Megvii-BaseDetection/YOLOX
