A structured random inactivation UNET for retinal vascular segmentation
2022-07-29 09:04:00 【Salty salty】
1. Purpose: reduce UNet's overfitting and improve end-to-end segmentation of retinal blood vessels.
2. Main work:
(1) Inspired by DropBlock, structured random deactivation is applied after each convolution layer;
(2) The performance of SD-UNet is tested on three retinal image datasets: DRIVE, STARE, and CHASE_DB1;
(3) The proposed SD-UNet outperforms UNet, and reaches SOTA on the DRIVE and CHASE_DB1 datasets. (SOTA is short for "state of the art": it refers to models whose performance is first-class in a given field, typically those with the highest scores on a benchmark dataset.)
3. Model

Fig. 1 shows the detailed structure of SD-UNet. The U-shaped network contains three layers of down-sampling modules and three layers of up-sampling modules, connected through skip connections in the middle. Each down-sampling layer contains two 3x3 convolutions and one max pooling, with a DropBlock layer and a ReLU layer following the convolutions. The up-sampling path has a similar structure, except that the pooling layer is replaced by transposed convolution.
4. Dropout and DropBlock
The root cause of overfitting is that there are too many feature dimensions (or parameters): the trained function fits the training set perfectly but predicts poorly on new data.
About Dropout
To prevent overfitting, deep neural networks in computer vision often use various regularization methods. Random deactivation (Dropout) is a simple but effective way to prevent overfitting.
A neural network contains multiple nonlinear hidden layers, which enables it to learn complex relationships between input and output. However, in practice, even if the test set and the training set come from the same distribution, there will still be noise in the training set. The network then learns the distribution of the data and the noise at the same time, which easily leads to overfitting.
In machine learning, model combination is usually used to improve performance. However, for large neural networks, averaging the outputs of multiple network models is costly in both time and space, which is why Dropout was proposed. Dropout randomly discards neurons in the neural network; "discarding" means removing a neuron from the network together with its forward and backward connections, and which neuron is discarded is random.
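The idea above can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's implementation; the scaling of survivors by 1/(1-p), known as inverted dropout, is the common way dropout is applied at training time so the expected activation is unchanged:

```python
import random

def dropout(values, p_drop, training=True, rng=random.random):
    """Inverted dropout: each value is zeroed with probability p_drop;
    survivors are scaled by 1/(1 - p_drop) so the expected sum is unchanged.
    At inference time (training=False) the input passes through untouched."""
    if not training or p_drop == 0.0:
        return list(values)
    keep = 1.0 - p_drop
    return [v / keep if rng() >= p_drop else 0.0 for v in values]

random.seed(0)
out = dropout([1.0, 2.0, 3.0, 4.0], p_drop=0.5)
```

Each forward pass thus samples a different "thinned" sub-network, which approximates averaging over an ensemble of models at a fraction of the cost.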

However, Dropout works for fully connected layers but not for convolutional layers. Adjacent elements in a convolutional feature map share semantic information in space: because convolution extracts features over a receptive field, the network can still recover the corresponding semantic information from the neighboring elements of a dropped location. Convolutional networks therefore need structured random deactivation: by deactivating an entire contiguous region of the feature map, the network loses that region's information as a whole and is forced to pay more attention to learning the features of other parts.
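To make the contrast concrete, here is a minimal sketch (a hypothetical helper, not from the paper) of dropping one contiguous block from a 2-D feature map, which is the structured deactivation the paragraph describes:

```python
def drop_block(feat, top, left, block_size):
    """Zero out a contiguous block_size x block_size region of a 2-D
    feature map (list of rows). Spatially adjacent activations are
    dropped together, unlike plain dropout which zeros them independently."""
    h, w = len(feat), len(feat[0])
    out = [row[:] for row in feat]  # copy; leave the input untouched
    for i in range(top, min(top + block_size, h)):
        for j in range(left, min(left + block_size, w)):
            out[i][j] = 0.0
    return out

feat = [[1.0] * 4 for _ in range(4)]
masked = drop_block(feat, top=1, left=1, block_size=2)
# a 2x2 region is zeroed; the surrounding activations survive
```

Because the whole neighborhood is removed, the surviving activations cannot "leak" the dropped region's semantics through the receptive field.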

In the figure, (a) is the original image, (b) uses Dropout, (c) uses DropBlock, and X marks the discarded locations. The DropBlock layer therefore has two parameters: block_size and γ. When block_size = 1 it is identical to traditional dropout, and when block_size covers the whole feature map it is effectively Spatial Dropout. The parameter γ controls how many activation points in a feature map are deactivated, and is computed as

γ = (1 − keep_prob) / block_size² × (w·h) / ((w − block_size + 1)(h − block_size + 1))

where
keep_prob: the probability that a neuron is preserved (given in advance);
w, h: the width and height of the feature map;
block_size: the size of the deactivated block.
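Under these definitions, γ can be computed directly. The small sketch below follows the formula from the DropBlock paper; note that with block_size = 1, γ reduces to 1 − keep_prob, matching ordinary dropout as stated above:

```python
def dropblock_gamma(keep_prob, block_size, w, h):
    """Rate at which block centres are sampled for dropping, per the
    DropBlock formula:
        gamma = (1 - keep_prob) / block_size^2
                * w * h / ((w - block_size + 1) * (h - block_size + 1))
    The second factor corrects for blocks extending past the border."""
    valid_area = (w - block_size + 1) * (h - block_size + 1)
    return (1.0 - keep_prob) / block_size**2 * (w * h) / valid_area

# e.g. a 28x28 feature map, 7x7 blocks, keeping ~90% of activations
g = dropblock_gamma(keep_prob=0.9, block_size=7, w=28, h=28)
```

Because each sampled centre zeroes block_size² activations, γ is much smaller than the nominal drop rate 1 − keep_prob.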
5. Experiments
This section compares SD-UNet with the original UNet on the different datasets, and also compares SD-UNet with several current SOTA frameworks.
Data augmentation: random rotation, color change, added Gaussian noise, horizontal/vertical flips.
Performance metrics: PPV (precision), TNR (specificity), TPR (recall), Acc, AUC, F1-score, JS (Jaccard similarity coefficient).

SR denotes the segmentation result and GT denotes the ground truth.
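Assuming the standard per-pixel definitions of these metrics over the confusion counts between SR and GT (the paper's exact formulas are not reproduced here), they can be computed as in this illustrative sketch:

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Per-pixel metrics from the confusion counts between the
    segmentation result (SR) and the ground truth (GT)."""
    ppv = tp / (tp + fp)                   # precision
    tpr = tp / (tp + fn)                   # sensitivity / recall
    tnr = tn / (tn + fp)                   # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)  # overall accuracy
    f1 = 2 * ppv * tpr / (ppv + tpr)       # harmonic mean of PPV and TPR
    js = tp / (tp + fp + fn)               # Jaccard similarity (|SR∩GT|/|SR∪GT|)
    return {"PPV": ppv, "TPR": tpr, "TNR": tnr, "Acc": acc, "F1": f1, "JS": js}

# hypothetical counts for a 1000-pixel image
m = segmentation_metrics(tp=80, fp=20, tn=880, fn=20)
```

AUC is omitted since it requires the full score distribution rather than a single thresholded confusion matrix.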
Results:
UNet denotes the original UNet without dropout.
UNet* denotes UNet with ordinary random deactivation (dropout rate 0.25).
Tables IV, V, and VI also show the comparison between SD-UNet and other SOTA methods; SD-UNet performs better than the other methods.

Summary:
This paper presents SD-UNet, a fully convolutional network structure based on UNet, for pixel-wise segmentation of retinal vessels. The main change is adding DropBlock to UNet to improve the model's performance.
UNet captures context information during down-sampling and effectively fuses features from different levels during up-sampling; meanwhile, with the help of DropBlock, SD-UNet can discard the semantic features of some regions during training, thereby reducing overfitting.
The paper implements SD-UNet for retinal vessel segmentation and verifies its effectiveness on three open-source datasets, DRIVE, STARE, and CHASE_DB1. Compared with other methods, SD-UNet achieves SOTA results.