PyTorch YOLOX Pruning [with code]
2022-07-06 22:25:00 【Meat loving Peng】
Contents
Fine-tuning after pruning
Conv and BN fusion for faster inference
Environment
PyTorch 1.7
loguru 0.5.3
NVIDIA GTX 1650 (4 GB)
Intel i5 (9th gen)
torch-pruning 0.2.7
Install the package:
pip install torch_pruning
Note: this project integrates the code of Bilibili uploader Bubbliiiing with the original official YOLOX code.
1. Added feature map visualization
2. EMA can be enabled during training
3. Network pruning (supports s, m, l, x)
3.1 Pruning of a single convolution
3.2 Pruning of a network layer
4. Fine-tuning after pruning
5. Conv and BN fusion for faster inference
6. Log saving
Dataset format: VOC
Feature map visualization
The visualization code is implemented in tools/Net_Vision.py. Import the NV function to visualize the channels of a feature map. For example:
features = [out_features[f] for f in self.in_features]
[x2, x1, x0] = features # shape is (batch_size,channels,w,h)
NV(x2)
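NV itself is this project's helper, so the exact implementation lives in tools/Net_Vision.py; as a rough idea of what such a channel visualization can look like, here is a minimal sketch (the grid layout, the matplotlib backend, and the name show_channels are my assumptions, not the project's code):

import math
import matplotlib.pyplot as plt

def show_channels(feature, max_channels=16):
    # feature: tensor of shape (batch_size, channels, h, w); visualize the first image in the batch
    fmap = feature[0].detach().cpu()
    n = min(fmap.shape[0], max_channels)
    cols = 4
    rows = math.ceil(n / cols)
    for i in range(n):
        plt.subplot(rows, cols, i + 1)
        plt.imshow(fmap[i].numpy(), cmap='viridis')  # one heatmap per channel
        plt.axis('off')
    plt.show()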

Network pruning
Reference paper: Pruning Filters for Efficient ConvNets
Import the pruning tool:
import torch_pruning as tp
For YOLOv4 pruning, see the author's CSDN post: YOLOv4 Pruning [with code].
Channel pruning is used rather than weight pruning.
Before pruning, use the save_whole_model(weights_path, num_classes) function in tools/prunmodel.py to save both the weights and the structure of the model.
weights_path: path to the weights
num_classes: number of your own classes
model = YOLOX(num_classes, 's')  # adjust num_classes to your own number of classes; 's' refers to yolox-s
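A minimal sketch of what such a save_whole_model function can look like, the key point being that torch.save receives the whole model object so the structure is stored alongside the weights (the import path of YOLOX and the output file name whole_model.pth are assumptions; adjust them to this repository):

import torch
from nets.yolo import YOLOX  # assumed import path; use wherever YOLOX is defined in this repo

def save_whole_model(weights_path, num_classes):
    model = YOLOX(num_classes, 's')  # 's' refers to yolox-s
    model.load_state_dict(torch.load(weights_path, map_location='cpu'))
    # save structure + weights together so the pruned model can be rebuilt later
    torch.save(model, 'whole_model.pth')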
Pruning a single convolution is supported: call Conv_pruning(whole_model_weights):
pruning_idxs = strategy(v, amount=0.4)  # 0.4 is the pruning ratio; adjust as needed, larger values prune more channels
For single-convolution pruning, two places need to be modified. The convolution layer name must be obtained by printing the model, not guessed:
if k == 'backbone.backbone.dark2.0.conv.weight':
    pruning_plan = DG.get_pruning_plan((model.backbone.backbone.dark2)[0].conv, tp.prune_conv, idxs=pruning_idxs)
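Putting the pieces above together, a single-convolution pruning pass with torch-pruning 0.2.x roughly follows the pattern below. This is only a sketch of the flow, not the repository's Conv_pruning function: the 640x640 example input, the output file name, and the loop over state_dict() are assumptions.

import torch
import torch_pruning as tp

def conv_pruning_sketch(whole_model_weights):
    model = torch.load(whole_model_weights, map_location='cpu')  # model saved with its structure
    strategy = tp.strategy.L1Strategy()                          # select channels by L1 norm
    DG = tp.DependencyGraph()
    DG.build_dependency(model, example_inputs=torch.randn(1, 3, 640, 640))
    for k, v in model.state_dict().items():
        if k == 'backbone.backbone.dark2.0.conv.weight':         # the conv to prune (find it by printing the model)
            pruning_idxs = strategy(v, amount=0.4)               # 0.4 = pruning ratio
            pruning_plan = DG.get_pruning_plan(
                model.backbone.backbone.dark2[0].conv, tp.prune_conv, idxs=pruning_idxs)
            pruning_plan.exec()                                  # apply the pruning and its dependencies
    torch.save(model, 'pruned_conv_model.pth')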
Pruning a network layer is supported: call layer_pruning(whole_model_weights) (a sketch of the flow follows below):
included_layers = list(model.backbone.backbone.dark2.modules())  # the layer to prune
Note: after pruning succeeds, the change in the model's parameter count is printed. If nothing is printed, the pruning went wrong; check your settings.
After pruning, a log file is saved in the logs directory.
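For layer pruning, the dependency graph is built once and every Conv2d inside the selected sub-module is pruned; a rough sketch under the same assumptions as above (the parameter-count printout mirrors the note about the parameter variation):

import torch
import torch.nn as nn
import torch_pruning as tp

def layer_pruning_sketch(whole_model_weights):
    model = torch.load(whole_model_weights, map_location='cpu')
    strategy = tp.strategy.L1Strategy()
    DG = tp.DependencyGraph()
    DG.build_dependency(model, example_inputs=torch.randn(1, 3, 640, 640))

    params_before = sum(p.numel() for p in model.parameters())
    included_layers = list(model.backbone.backbone.dark2.modules())  # the layer to prune
    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m in included_layers:
            plan = DG.get_pruning_plan(m, tp.prune_conv,
                                       idxs=strategy(m.weight, amount=0.4))
            plan.exec()
    params_after = sum(p.numel() for p in model.parameters())
    print('Params: %d -> %d' % (params_before, params_after))       # parameter variation
    torch.save(model, 'pruned_layer_model.pth')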
Fine-tuning after pruning
Set pruned_train in train.py to True (False means normal training), then adjust batch_size as needed.
Remember to modify model_path and classes_path, otherwise an error will be raised!
The network input size used before pruning must match the size used for fine-tuning and inference!
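Since the pruned checkpoint stores the network structure together with the weights, my understanding is that fine-tuning reloads the whole object instead of building a fresh YOLOX and loading a state_dict; roughly (the file name is the one assumed in the sketches above):

import torch

# pruned_train = True: load the full pruned model (structure + weights)
model = torch.load('pruned_layer_model.pth', map_location='cpu')
# pruned_train = False: normal training, i.e. build the network and load a state_dict instead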
Training on your own dataset
If you have used Bubbliiiing's code before, you will get started quickly. The dataset uses the VOC format:
VOCdevkit/
`-- VOC2007
    |-- Annotations   (xml label files)
    |-- ImageSets
    |   `-- Main
    `-- JPEGImages    (images)
Create a new file new_classes.txt in model_data and write your own classes in it. Run voc_annotation.py; it generates 2007_train.txt and 2007_val.txt in the current directory (check that they were generated successfully).
In train.py, set classes_path to model_data/new_classes.txt (for prediction later, the same change is needed in yolo.py).
Then adjust the other hyperparameters as needed and train. Training weights are saved in the logs directory (by default only the weights are saved, without the network structure).
Prediction

Parameter description: the following command-line arguments are optional.
--predict: run prediction
--pruned: enable pruned-model prediction or training
--image: image detection
--video: video detection
--video_path: video path
--camid: camera id, default 0
--fps: FPS test
--dir_predict: predict all images in a folder
--phi: model size, choose from s, m, l, x, etc.
--input_shape: network input size, default 640
--confidence: confidence threshold
--nms_iou: IoU threshold for NMS
--num_classes: number of classes, default 80
--fuse: fuse the Conv and BN layers for faster inference, default False
Enter in the terminal:
# image prediction
python demo.py --predict --image
# video prediction
python demo.py --predict --video --camid 0
# FPS test
python demo.py --predict --fps
The default network for prediction is yolox_s. To use another network, enter the following (note that the weight path must be modified in yolo.py, and if you use your own dataset, classes_path must be modified as well):
# predict with yolox_l
python demo.py --predict --image --phi l
Conv and BN fusion for faster inference
Other options can be combined with this one, for example running inference with the Conv and BN layers fused:
python demo.py --predict --image --fuse
Testing shows an FPS improvement of roughly 3 frames/s (my GPU is a GTX 1650).
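Conv-BN fusion folds the BatchNorm scale and shift into the convolution's weights and bias, so a single layer does the work of two at inference time. A minimal sketch of the standard folding formula (not necessarily the project's exact implementation):

import torch
import torch.nn as nn

def fuse_conv_and_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # build a conv that already contains the BN transform
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      kernel_size=conv.kernel_size, stride=conv.stride,
                      padding=conv.padding, dilation=conv.dilation,
                      groups=conv.groups, bias=True)
    # W_fused = W * gamma / sqrt(running_var + eps)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    # b_fused = (b - running_mean) * scale + beta
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused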
Saving log files
This project uses loguru to capture logs. Log records produced during detection and training are written automatically to the logs directory. I set the maximum size of a single log file to 1 MB; once it is exceeded, a new .log file is generated automatically. You can change this value yourself, or change how long logs are kept (so that too many logs do not accumulate). If you do not want this feature, find the corresponding code and comment it out.
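In loguru, the size limit and retention described above map to the rotation and retention arguments of logger.add; a minimal sketch (the log file name and the 10-day retention value are assumptions):

from loguru import logger

# start a new file once the current one exceeds 1 MB; drop logs older than 10 days
logger.add('logs/runtime.log', rotation='1 MB', retention='10 days')
logger.info('pruning finished')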
The goal here is just to help you build the wheels: the functionality is implemented with code that is as simple as possible, so you do not have to wade through complex engineering code. The final results still need patient tuning, so take your time with the "alchemy"!
Weights
Link: Baidu Netdisk https://pan.baidu.com/s/1Jbq8dCv893rZ7RkaANUZgQ (extraction code: yypn)
Code (if it helps, please give it a star~):
https://github.com/YINYIPENG-EN/Pruning_for_YOLOX