
Detailed introduction to the semantic segmentation model library segmentation_models_pytorch


segmentation_models_pytorch (hereafter "smp") is a high-level model library for semantic segmentation. It supports 9 segmentation architectures and 400+ encoders. I was originally not interested in a library that supports only a handful of networks, but seeing that the top solutions of several competitions use Unet++ with an EfficientNet encoder changed my mind.

If you want a larger, more comprehensive collection of semantic segmentation models, MMSegmentation is recommended; see my earlier post on using MMSegmentation as a PyTorch semantic segmentation model library (covering both training and deployment): https://hpg123.blog.csdn.net/article/details/124459439. MMSegmentation is a semantic segmentation suite released by SenseTime as part of the OpenMMLab project and ships with a large number of models. Why, then, write about smp? Two reasons. First, MMSegmentation is wrapped too heavily: unlike an ordinary PyTorch model library, it leaves the user very little freedom during training (custom losses, custom learning-rate schedulers, and custom data loaders all have to be learned anew), which I find hard to accept. Second, the other PyTorch segmentation model libraries on GitHub are rather weak: they support only a limited number of models, and most cover nothing newer than 2020.

Back to the point: the rest of this post describes how to use smp and how its models are built. The project address of smp is https://github.com/qubvel/segmentation_models.pytorch

Install smp:

pip install segmentation-models-pytorch

1、 Basic introduction

9 model architectures (including the legendary Unet):

113 available encoders (plus 400+ encoders from timm; all encoders come with pre-trained weights, so training converges faster and better). These are mainly various versions of the following network families (a preprocessing sketch follows the list):

ResNet
ResNeXt
ResNeSt
Res2Ne(X)t
RegNet(x/y)
GERNet
SE-Net
SK-ResNe(X)t
DenseNet
Inception
EfficientNet
MobileNet
DPN
VGG
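
Encoders are selected purely by name when building a model (see the next section); what is easy to forget is the matching input normalization. Below is a minimal sketch using get_preprocessing_fn from the smp README (the encoder name and image shape here are illustrative):

import numpy as np
import segmentation_models_pytorch as smp

# normalization parameters matching the chosen encoder's imagenet pre-trained weights
preprocess_input = smp.encoders.get_preprocessing_fn("resnet34", pretrained="imagenet")

image = np.random.rand(512, 512, 3)   # HWC float image in [0, 1] (illustrative)
image = preprocess_input(image)       # normalized the way the encoder expects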

Common training losses: the supported losses are listed below (they live in segmentation_models_pytorch.losses); a usage sketch follows the list.

from .jaccard import JaccardLoss
from .dice import DiceLoss
from .focal import FocalLoss
from .lovasz import LovaszLoss
from .soft_bce import SoftBCEWithLogitsLoss
from .soft_ce import SoftCrossEntropyLoss
from .tversky import TverskyLoss
from .mcc import MCCLoss
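
A minimal usage sketch, assuming a 3-class multiclass setting (the mode argument is required by these loss classes; summing two losses is just an illustrative choice, not a requirement of the library):

import torch
import segmentation_models_pytorch as smp

dice = smp.losses.DiceLoss(mode="multiclass")
focal = smp.losses.FocalLoss(mode="multiclass")

logits = torch.randn(4, 3, 256, 256)                 # N x C x H x W raw model output
target = torch.randint(0, 3, (4, 256, 256))          # N x H x W integer class labels

loss = dice(logits, target) + focal(logits, target)  # loss objects behave like ordinary nn.Module losses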

Common training metrics, in segmentation_models_pytorch.metrics (a usage sketch follows the list):


from .functional import (
    get_stats,
    fbeta_score,
    f1_score,
    iou_score,
    accuracy,
    precision,
    recall,
    sensitivity,
    specificity,
    balanced_accuracy,
    positive_predictive_value,
    negative_predictive_value,
    false_negative_rate,
    false_positive_rate,
    false_discovery_rate,
    false_omission_rate,
    positive_likelihood_ratio,
    negative_likelihood_ratio,
)
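
A minimal sketch of computing IoU with these functions, following the two-step pattern from the smp README: get_stats first, then a reduction (shapes and num_classes here are illustrative):

import torch
import segmentation_models_pytorch as smp

pred = torch.randint(0, 3, (4, 256, 256))     # predicted class index per pixel
target = torch.randint(0, 3, (4, 256, 256))   # ground-truth class index per pixel

# step 1: confusion-matrix statistics; step 2: reduce them to a score
tp, fp, fn, tn = smp.metrics.get_stats(pred, target, mode="multiclass", num_classes=3)
iou = smp.metrics.iou_score(tp, fp, fn, tn, reduction="micro")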

2、 Model construction

Building a model in smp is very convenient: pass in the encoder type, the pre-trained weight type, the number of input channels, and the number of output channels (classes), and you are done.

import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
    encoder_weights="imagenet",     # use `imagenet` pre-trained weights for encoder initialization
    in_channels=1,                  # model input channels (1 for gray-scale images, 3 for RGB, etc.)
    classes=3,                      # model output channels (number of classes in your dataset)
)
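
As a quick sanity check, here is a minimal sketch of running a dummy batch through the model built above (the spatial size 256 is arbitrary, but with the default encoder depth of 5 it should be divisible by 32):

import torch

x = torch.randn(2, 1, 256, 256)  # batch of 2 single-channel images, matching in_channels=1
mask = model(x)                  # -> torch.Size([2, 3, 256, 256]), one channel per class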

Sometimes, however, the network structure needs to be modified, and more detailed parameters can be specified (such as the depth of the network, or whether an auxiliary head is needed; the auxiliary head here is a classification head by default). Note that in Unet the number of encoder stages must match the number of decoder stages (the filter count within each stage can be adjusted as needed); see the sketch below.
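
A sketch of such a customized Unet, using the encoder_depth, decoder_channels and aux_params arguments from the smp API (the concrete values chosen here are illustrative):

import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    encoder_depth=4,                          # use 4 encoder stages instead of the default 5
    decoder_channels=(256, 128, 64, 32),      # one filter count per decoder stage, length must equal encoder_depth
    in_channels=3,
    classes=3,
    aux_params=dict(classes=3, dropout=0.3),  # attach an auxiliary classification head on top of the encoder
)

x = torch.randn(2, 3, 256, 256)
mask, label = model(x)                        # with aux_params set, forward returns (mask, class logits)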

3、 The concrete structure of the model

In smp, every model has the same structure: an encoder, a decoder, and a segmentation_head. The encoder is controlled by the parameters you pass in; the decoder is determined by the specific model class (the decoder output of every smp model is a single tensor, never a list; for example, PSPNet's multi-scale features are fused by a conv before the decoder output); the segmentation_head is determined by the classes argument (it is just a simple conv layer).
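
A minimal sketch for inspecting the three parts, assuming the model variable built in the previous section (the attribute names match the structure described above):

print(model.encoder)             # backbone selected via encoder_name
print(model.decoder)             # determined by the model class (here a Unet decoder)
print(model.segmentation_head)   # a simple conv layer mapping decoder output to `classes` channels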

The forward process of every smp model follows the same pattern, sketched below.
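
A sketch of that forward flow, paraphrasing the shared base model in smp (not a verbatim copy of the library code):

features = model.encoder(x)                       # list of feature maps at several resolutions
decoder_output = model.decoder(*features)         # fused into a single tensor (never a list)
masks = model.segmentation_head(decoder_output)   # conv layer producing `classes` output channels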
