
Micronet practice: image classification using micronet

2022-06-10 19:42:00 AI Hao

Abstract

In this paper, we get a feel for the MicroNet model through a practical example: plant-seedling classification. The model code comes from the official repository; I wrote the train and test parts myself. Judging from the score, this model performs very well. I chose the MicroNet-M3 model: it is only about 6 MB in size, yet its accuracy is around 95%. The results are impressive!


This article walks you through training and testing from a hands-on perspective. From it you can learn:

  1. How to use data augmentation, including the transforms-based augmentations, CutOut, MixUp, CutMix, and others.
  2. How to configure the MicroNet model for training.
  3. How to use PyTorch's built-in mixed precision.
  4. How to use gradient clipping to prevent gradient explosion.
  5. How to use DP (DataParallel) for multi-GPU training.
  6. How to plot loss and acc curves.
  7. How to generate an evaluation report on the val set.
  8. How to write a test script.
  9. How to use a cosine annealing schedule to adjust the learning rate.
  10. How to use an AverageMeter class to track ACC, loss, and other custom variables.
  11. How to understand and compute ACC1 and ACC5.
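Two of the utilities in this list (the AverageMeter class and top-k accuracy) are small enough to sketch up front. The following is a generic pure-Python sketch of the pattern, not the exact code used later in train.py:

```python
class AverageMeter:
    """Tracks the current value and running average of a metric (loss, acc, ...)."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0.0
        self.sum = 0.0
        self.count = 0
        self.avg = 0.0

    def update(self, val, n=1):
        # n is the batch size, so the average is weighted correctly
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count


def topk_correct(scores, target, k=1):
    """True if `target` is among the k highest-scoring class indices.
    ACC1 uses k=1 (top prediction correct); ACC5 uses k=5 (target in top five)."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return target in ranked[:k]
```

Calling `update(batch_loss, batch_size)` once per batch and reading `avg` at the end of the epoch is the usual way these meters are used.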

Installation package

1. Install timm

Just use pip; the command is:

pip install timm

2. Install yacs

pip install yacs

The author of yacs is the famous Ross Girshick, author of Faster R-CNN. GitHub address: https://github.com/rbgirshick/yacs
yacs is a lightweight open-source library for defining and managing system configurations, commonly used for parameter configuration in scientific-experiment software, such as the hyperparameter settings used when training machine-learning and deep-learning models (depth of a convolutional neural network, initial learning rate, and so on). Reproducibility is very important in scientific experiments, so it is necessary to record the parameter settings used during an experiment in order to reproduce it later. yacs uses a simple, readable YAML format.

Data augmentation: Cutout and Mixup

To improve my score, I added the two augmentations Cutout and Mixup. Both require torchtoolbox. Installation command:

pip install torchtoolbox

Cutout is implemented inside the transforms pipeline:

from torchtoolbox.transform import Cutout
#  Data preprocessing 
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    Cutout(),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

])

The required imports are:

from timm.data.mixup import Mixup
from timm.loss import SoftTargetCrossEntropy

Define Mixup and SoftTargetCrossEntropy:

mixup_fn = Mixup(
    mixup_alpha=0.8, cutmix_alpha=1.0, cutmix_minmax=None,
    prob=0.1, switch_prob=0.5, mode='batch',
    label_smoothing=0.1, num_classes=12)
criterion_train = SoftTargetCrossEntropy()

Parameters:

mixup_alpha (float): mixup alpha value; mixup is active if > 0.

cutmix_alpha (float): cutmix alpha value; cutmix is active if > 0.

cutmix_minmax (List[float]): cutmix min/max image ratio; if not None, cutmix is active and uses this instead of alpha.

If cutmix_minmax is set, cutmix_alpha defaults to 1.0.

prob (float): probability of applying mixup or cutmix, per batch or per element.

switch_prob (float): probability of switching to cutmix instead of mixup when both are active.

mode (str): how to apply the mixup/cutmix parameters ('batch' = per batch, 'pair' = per element pair, 'elem' = per element).

correct_lam (bool): apply lambda correction when the cutmix bbox is clipped by the image border.

label_smoothing (float): apply label smoothing to the mixed target tensor.

num_classes (int): number of target classes.
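The interaction of prob and switch_prob is easy to miss: each batch first rolls prob to decide whether any mixing happens at all, then switch_prob chooses between cutmix and mixup, and the mixing ratio λ is drawn from a Beta(α, α) distribution. A simplified pure-Python sketch of this decision logic (not timm's actual implementation):

```python
import random


def choose_mix(prob=0.1, switch_prob=0.5, mixup_alpha=0.8, cutmix_alpha=1.0,
               rng=None):
    """Mimics the per-batch decision: returns ('none'|'mixup'|'cutmix', lam)."""
    rng = rng or random.Random()
    if rng.random() >= prob:
        # With probability 1 - prob, the batch is left unmixed (lam = 1.0).
        return "none", 1.0
    use_cutmix = rng.random() < switch_prob
    alpha = cutmix_alpha if use_cutmix else mixup_alpha
    lam = rng.betavariate(alpha, alpha)  # mixing ratio in (0, 1)
    return ("cutmix" if use_cutmix else "mixup"), lam
```

With the settings used above (prob=0.1, switch_prob=0.5), roughly 5% of batches get mixup, 5% get cutmix, and 90% pass through unchanged.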

Project structure

MicroNet_demo
├─data
│  ├─Black-grass
│  ├─Charlock
│  ├─Cleavers
│  ├─Common Chickweed
│  ├─Common wheat
│  ├─Fat Hen
│  ├─Loose Silky-bent
│  ├─Maize
│  ├─Scentless Mayweed
│  ├─Shepherds Purse
│  ├─Small-flowered Cranesbill
│  └─Sugar beet
├─models
│  ├─__init__.py
│  ├─micronet.py
│  ├─activation.py
│  └─microconfig.py
├─utils
│  ├─__init__.py
│  └─defaults.py
├─checkpoint.pth
├─mean_std.py
├─makedata.py
├─train.py
└─test.py

mean_std.py: computes the mean and std values.

makedata.py: generates the dataset.

micronet.py, activation.py, and microconfig.py under the models folder come from the official PyTorch version of the code.
- micronet.py: the network file.
- activation.py: the activation-function file; defines the DYShiftMax activation function.
- microconfig.py: the network configuration parameters; defines the parameters for networks m0 to m3.
defaults.py under the utils folder defines the cfg parameters; these are the setting parameters for networks m0 to m3.
The detailed parameter settings are as follows:
M0:

MODEL.MICRONETS.BLOCK="DYMicroBlock"                                            
MODEL.MICRONETS.NET_CONFIG="msnx_dy6_exp4_4M_221" 
MODEL.MICRONETS.STEM_CH=4 
MODEL.MICRONETS.STEM_GROUPS=[2,2] 
MODEL.MICRONETS.STEM_DILATION=1 
MODEL.MICRONETS.STEM_MODE="spatialsepsf" 
MODEL.MICRONETS.OUT_CH=640 
MODEL.MICRONETS.DEPTHSEP=True 
MODEL.MICRONETS.POINTWISE='group' 
MODEL.MICRONETS.DROPOUT=0.05 
MODEL.ACTIVATION.MODULE="DYShiftMax" 
MODEL.ACTIVATION.ACT_MAX=2.0 
MODEL.ACTIVATION.LINEARSE_BIAS=False 
MODEL.ACTIVATION.INIT_A_BLOCK3=[1.0,0.0] 
MODEL.ACTIVATION.INIT_A=[1.0,1.0] 
MODEL.ACTIVATION.INIT_B=[0.0,0.0] 
MODEL.ACTIVATION.REDUCTION=8 
MODEL.MICRONETS.SHUFFLE=True 

M1:

MODEL.MICRONETS.BLOCK="DYMicroBlock"                                            
MODEL.MICRONETS.NET_CONFIG="msnx_dy6_exp6_6M_221" 
MODEL.MICRONETS.STEM_CH=6 
MODEL.MICRONETS.STEM_GROUPS=[3,2] 
MODEL.MICRONETS.STEM_DILATION=1 
MODEL.MICRONETS.STEM_MODE="spatialsepsf" 
MODEL.MICRONETS.OUT_CH=960
MODEL.MICRONETS.DEPTHSEP=True 
MODEL.MICRONETS.POINTWISE='group' 
MODEL.MICRONETS.DROPOUT=0.05 
MODEL.ACTIVATION.MODULE="DYShiftMax" 
MODEL.ACTIVATION.ACT_MAX=2.0 
MODEL.ACTIVATION.LINEARSE_BIAS=False 
MODEL.ACTIVATION.INIT_A_BLOCK3=[1.0,0.0] 
MODEL.ACTIVATION.INIT_A=[1.0,1.0] 
MODEL.ACTIVATION.INIT_B=[0.0,0.0] 
MODEL.ACTIVATION.REDUCTION=8 
MODEL.MICRONETS.SHUFFLE=True 

M2:

MODEL.MICRONETS.BLOCK="DYMicroBlock"                                            
MODEL.MICRONETS.NET_CONFIG="msnx_dy9_exp6_12M_221" 
MODEL.MICRONETS.STEM_CH=8 
MODEL.MICRONETS.STEM_GROUPS=[4,2] 
MODEL.MICRONETS.STEM_DILATION=1 
MODEL.MICRONETS.STEM_MODE="spatialsepsf" 
MODEL.MICRONETS.OUT_CH=1024
MODEL.MICRONETS.DEPTHSEP=True 
MODEL.MICRONETS.POINTWISE='group' 
MODEL.MICRONETS.DROPOUT=0.1 
MODEL.ACTIVATION.MODULE="DYShiftMax" 
MODEL.ACTIVATION.ACT_MAX=2.0 
MODEL.ACTIVATION.LINEARSE_BIAS=False 
MODEL.ACTIVATION.INIT_A_BLOCK3=[1.0,0.0] 
MODEL.ACTIVATION.INIT_A=[1.0,1.0] 
MODEL.ACTIVATION.INIT_B=[0.0,0.0] 
MODEL.ACTIVATION.REDUCTION=8 
MODEL.MICRONETS.SHUFFLE=True 

M3:

MODEL.MICRONETS.BLOCK="DYMicroBlock"                                            
MODEL.MICRONETS.NET_CONFIG="msnx_dy12_exp6_20M_020" 
MODEL.MICRONETS.STEM_CH=12
MODEL.MICRONETS.STEM_GROUPS=[4,3] 
MODEL.MICRONETS.STEM_DILATION=1 
MODEL.MICRONETS.STEM_MODE="spatialsepsf" 
MODEL.MICRONETS.OUT_CH=1024
MODEL.MICRONETS.DEPTHSEP=True 
MODEL.MICRONETS.POINTWISE='group' 
MODEL.MICRONETS.DROPOUT=0.1 
MODEL.ACTIVATION.MODULE="DYShiftMax" 
MODEL.ACTIVATION.ACT_MAX=2.0 
MODEL.ACTIVATION.LINEARSE_BIAS=False 
MODEL.ACTIVATION.INIT_A_BLOCK3=[1.0,0.0] 
MODEL.ACTIVATION.INIT_A=[1.0,0.5] 
MODEL.ACTIVATION.INIT_B=[0.0,0.5] 
MODEL.ACTIVATION.REDUCTION=8 
MODEL.MICRONETS.SHUFFLE=True 

To use mixed precision together with DP (DataParallel), you also need to add @autocast() before the forward function.
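The training-step pattern that combines autocast, loss scaling, and gradient clipping can be sketched as follows. This is a CPU-runnable sketch assuming PyTorch ≥ 1.10; the tiny nn.Linear is a stand-in for the MicroNet model, and on GPU you would use device_type="cuda", GradScaler(enabled=True), and wrap the model with nn.DataParallel for DP training:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 12)  # stand-in for the MicroNet model (12 classes)
# For multi-GPU DP training you would wrap it: model = nn.DataParallel(model)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
# GradScaler only scales on CUDA; enabled=False keeps the same code path on CPU.
scaler = torch.cuda.amp.GradScaler(enabled=False)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 8)
y = torch.randint(0, 12, (4,))

opt.zero_grad()
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = criterion(model(x), y)   # forward + loss run in reduced precision
scaler.scale(loss).backward()
scaler.unscale_(opt)                # so clipping sees the true gradient norms
nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
scaler.step(opt)
scaler.update()
```

Calling scaler.unscale_ before clip_grad_norm_ matters: otherwise the clipping threshold would be compared against scaled gradients.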

Calculating mean and std

To make the model converge faster, we need to compute the mean and std values. Create a new mean_std.py and insert the code:

from torchvision.datasets import ImageFolder
import torch
from torchvision import transforms

def get_mean_and_std(train_data):
    train_loader = torch.utils.data.DataLoader(
        train_data, batch_size=1, shuffle=False, num_workers=0,
        pin_memory=True)
    mean = torch.zeros(3)
    std = torch.zeros(3)
    for X, _ in train_loader:
        for d in range(3):
            mean[d] += X[:, d, :, :].mean()
            std[d] += X[:, d, :, :].std()
    mean.div_(len(train_data))
    std.div_(len(train_data))
    return list(mean.numpy()), list(std.numpy())

if __name__ == '__main__':
    train_dataset = ImageFolder(root=r'data1', transform=transforms.ToTensor())
    print(get_mean_and_std(train_dataset))

Dataset structure:


Running result:

([0.3281186, 0.28937867, 0.20702125], [0.09407319, 0.09732835, 0.106712654])

Record this result; we will use it later!

Generating the dataset

The dataset structure for our image-classification task looks like this:

data
├─Black-grass
├─Charlock
├─Cleavers
├─Common Chickweed
├─Common wheat
├─Fat Hen
├─Loose Silky-bent
├─Maize
├─Scentless Mayweed
├─Shepherds Purse
├─Small-flowered Cranesbill
└─Sugar beet

PyTorch and Keras load datasets in the ImageNet format by default, which looks like this:

├─data
│  ├─val
│  │   ├─Black-grass
│  │   ├─Charlock
│  │   ├─Cleavers
│  │   ├─Common Chickweed
│  │   ├─Common wheat
│  │   ├─Fat Hen
│  │   ├─Loose Silky-bent
│  │   ├─Maize
│  │   ├─Scentless Mayweed
│  │   ├─Shepherds Purse
│  │   ├─Small-flowered Cranesbill
│  │   └─Sugar beet
│  └─train
│      ├─Black-grass
│      ├─Charlock
│      ├─Cleavers
│      ├─Common Chickweed
│      ├─Common wheat
│      ├─Fat Hen
│      ├─Loose Silky-bent
│      ├─Maize
│      ├─Scentless Mayweed
│      ├─Shepherds Purse
│      ├─Small-flowered Cranesbill
│      └─Sugar beet

Create a new format-conversion script makedata.py and insert the code:

import glob
import os
import shutil

from sklearn.model_selection import train_test_split

image_list = glob.glob('data1/*/*.png')
print(image_list)
file_dir = 'data'
if os.path.exists(file_dir):
    shutil.rmtree(file_dir)  # delete the old split and re-create it
os.makedirs(file_dir)

# 70/30 train/val split, fixed seed for reproducibility
trainval_files, val_files = train_test_split(image_list, test_size=0.3, random_state=42)
train_root = os.path.join(file_dir, 'train')
val_root = os.path.join(file_dir, 'val')

def copy_to_split(files, split_root):
    # Copy each image into <split_root>/<class name>/, creating folders on demand
    for file in files:
        parts = file.replace("\\", "/").split('/')
        file_class, file_name = parts[-2], parts[-1]
        class_dir = os.path.join(split_root, file_class)
        if not os.path.isdir(class_dir):
            os.makedirs(class_dir)
        shutil.copy(file, os.path.join(class_dir, file_name))

copy_to_split(trainval_files, train_root)
copy_to_split(val_files, val_root)

Once the above is complete, you can start training and testing; see the link below:

Original site

Copyright notice
This article was written by [AI Hao]; please include the original link when reposting. Thanks.
https://yzsam.com/2022/161/202206101838040276.html