A structured dropout UNet for retinal vessel segmentation
2022-07-29 09:04:00 【Salty salty】
1. Purpose: reduce UNet's overfitting and improve its ability to segment blood vessels end to end.
2. Main work:
(1) Inspired by DropBlock, structured dropout is applied after each convolutional layer;
(2) SD-UNet is evaluated on three retinal image datasets: DRIVE, STARE, and CHASE_DB1;
(3) The proposed SD-UNet outperforms UNet and reaches SOTA on the DRIVE and CHASE_DB1 datasets. (SOTA is short for "state of the art": models whose performance leads a field, usually those with the highest scores on a benchmark dataset.)
3. Model

Fig. 1 shows the detailed structure of SD-UNet. The U-shaped network consists of three downsampling modules and three upsampling modules, connected in the middle by skip connections. Each downsampling module contains two 3x3 convolutions and one max pooling, followed by a DropBlock layer and a ReLU layer. The upsampling part has a similar structure, except that the pooling layer is replaced by a transposed convolution.
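As a rough illustration of the three-level U shape described above (the 64x64 input resolution and stride settings are assumptions for illustration, not taken from the paper), the feature-map sizes can be traced like this:

```python
# Trace feature-map sizes through an assumed three-level U-shaped network.
# The 64x64 input size is an illustrative assumption, not the paper's setting.

def down_step(size):
    # two 3x3 convolutions with padding keep the size; 2x2 max pooling halves it
    return size // 2

def up_step(size):
    # a stride-2 transposed convolution doubles the spatial size
    return size * 2

size = 64
encoder_sizes = [size]
for _ in range(3):  # three downsampling modules
    size = down_step(size)
    encoder_sizes.append(size)

decoder_sizes = []
for _ in range(3):  # three upsampling modules restore the original resolution
    size = up_step(size)
    decoder_sizes.append(size)

print(encoder_sizes)  # [64, 32, 16, 8]
print(decoder_sizes)  # [16, 32, 64]
```

The skip connections pair each encoder size with the matching decoder size, which is why the down and up paths must mirror each other.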
4. Dropout and DropBlock
The root cause of overfitting is having too many feature dimensions (or parameters): the trained function fits the training set perfectly but predicts poorly on new data.
About Dropout
To prevent overfitting, deep neural networks in computer vision often use various regularization methods; dropout (random deactivation) is a simple but effective one.
A neural network contains multiple nonlinear hidden layers, which lets it learn complex relationships between input and output. In practice, however, even when the test set and training set come from the same distribution, the training set still contains noise; the network then learns the noise along with the data distribution, which easily leads to overfitting.
In machine learning, model ensembling is commonly used to improve performance. For large neural networks, however, averaging the outputs of multiple models is costly in both time and space, which motivated Dropout. Dropout randomly discards neurons in the network: a discarded neuron is removed along with its forward and backward connections, and which neurons are discarded is chosen at random.
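As a minimal illustration (not from the paper), inverted dropout can be sketched in pure Python:

```python
import random

def dropout(activations, p_drop, training=True, seed=None):
    """Inverted dropout: zero each unit with probability p_drop and scale
    survivors by 1/(1 - p_drop), so the expected activation is unchanged.
    A minimal pure-Python sketch, not a framework implementation."""
    if not training or p_drop == 0.0:
        return list(activations)  # at inference time dropout is disabled
    rng = random.Random(seed)
    keep = 1.0 - p_drop
    return [x / keep if rng.random() < keep else 0.0 for x in activations]

out = dropout([1.0, 2.0, 3.0, 4.0], p_drop=0.5, seed=0)
print(out)  # roughly half the units are zeroed; survivors are scaled by 1/keep
```

The scaling by 1/keep is what lets the same network be used unchanged at test time, approximating the average of the exponentially many "thinned" sub-networks.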

However, Dropout suits fully connected layers, not convolutional layers. Adjacent elements in a convolutional feature map share semantic information spatially: because convolution has a receptive field, the network can still recover the semantic information of a dropped location from its neighboring elements. Convolutional networks therefore need structured dropout: by deactivating a whole contiguous region of the feature map, the network loses that region's information entirely and is forced to learn features from the remaining parts.

In the figure, (a) is the original image, (b) applies Dropout, and (c) applies DropBlock; X marks the discarded locations. The DropBlock layer thus has two parameters: block_size and γ. When block_size = 1, DropBlock is identical to traditional dropout; when block_size equals the whole feature map, it is effectively Spatial Dropout. The parameter γ controls how many activation units in a feature map are deactivated. It is computed as follows:

γ = ((1 − keep_prob) / block_size²) × (w × h) / ((w − block_size + 1) × (h − block_size + 1))

keep_prob: the probability that a unit is kept (given in advance);
w, h: the width and height of the feature map;
block_size: the size of the deactivated block.
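A minimal pure-Python sketch of how a DropBlock mask could be generated from these parameters (the seeded RNG and the exact border handling are my assumptions for illustration; real implementations usually also sample per channel and renormalize the surviving activations):

```python
import random

def dropblock_mask(w, h, keep_prob, block_size, seed=None):
    """Build a w x h DropBlock mask (pure-Python sketch, not the paper's code).
    Each position in the valid interior is chosen as a block center with
    probability gamma; a block_size x block_size square around every chosen
    center is zeroed, producing structured (contiguous) deactivation."""
    rng = random.Random(seed)
    # gamma from the DropBlock formula: the Bernoulli rate is scaled so that
    # the expected fraction of dropped units is about (1 - keep_prob)
    valid_w = w - block_size + 1
    valid_h = h - block_size + 1
    gamma = ((1.0 - keep_prob) / block_size ** 2) * (w * h) / (valid_w * valid_h)

    mask = [[1] * w for _ in range(h)]
    half = block_size // 2
    for cy in range(half, h - half):  # centers stay where a full block fits
        for cx in range(half, w - half):
            if rng.random() < gamma:
                for y in range(cy - half, cy - half + block_size):
                    for x in range(cx - half, cx - half + block_size):
                        mask[y][x] = 0
    return mask, gamma

mask, gamma = dropblock_mask(w=8, h=8, keep_prob=0.9, block_size=3, seed=0)
dropped = sum(row.count(0) for row in mask)
print(f"gamma={gamma:.4f}, dropped {dropped}/64 units")
```

Restricting block centers to the interior is what the (w − block_size + 1)(h − block_size + 1) correction factor in the γ formula accounts for.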
5. Experiments
This section compares SD-UNet with the original UNet on the different datasets, and also compares SD-UNet with several current SOTA frameworks.
Data augmentation: random rotation, color change, additive Gaussian noise, horizontal/vertical flipping.
Performance metrics: PPV (precision), TNR (specificity), TPR (recall), Acc, AUC, F1-score, JS (Jaccard similarity coefficient).

SR denotes the segmentation result and GT denotes the ground truth.
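A minimal sketch of how the listed metrics could be computed from binary SR and GT masks (the function and variable names are my own, not the paper's):

```python
def segmentation_metrics(sr, gt):
    """Compute the listed metrics from flat binary masks.
    sr = segmentation result, gt = ground truth (sketch; names assumed)."""
    tp = sum(1 for s, g in zip(sr, gt) if s == 1 and g == 1)
    tn = sum(1 for s, g in zip(sr, gt) if s == 0 and g == 0)
    fp = sum(1 for s, g in zip(sr, gt) if s == 1 and g == 0)
    fn = sum(1 for s, g in zip(sr, gt) if s == 0 and g == 1)
    ppv = tp / (tp + fp)                    # precision
    tpr = tp / (tp + fn)                    # recall / sensitivity
    tnr = tn / (tn + fp)                    # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    f1 = 2 * ppv * tpr / (ppv + tpr)        # harmonic mean of PPV and TPR
    js = tp / (tp + fp + fn)                # Jaccard: |SR and GT| / |SR or GT|
    return {"PPV": ppv, "TPR": tpr, "TNR": tnr, "Acc": acc, "F1": f1, "JS": js}

sr = [1, 1, 0, 0, 1, 0, 1, 0]
gt = [1, 0, 0, 0, 1, 1, 1, 0]
print(segmentation_metrics(sr, gt))
```

AUC is the exception: it is computed from the continuous prediction scores before thresholding, not from the binary mask.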
Results:
UNet denotes the original UNet without dropout.
UNet* denotes UNet with standard dropout (drop rate 0.25).
Tables IV, V and VI also compare SD-UNet with other SOTA methods; SD-UNet performs better than the other methods.

Summary:
This paper proposes SD-UNet, a fully convolutional network structure based on UNet, for pixel-wise segmentation of retinal vessels; the main change is adding DropBlock to UNet to improve model performance.
The UNet backbone captures context information during downsampling and effectively fuses features of different levels during upsampling; with the help of DropBlock, SD-UNet discards the semantic features of some regions during training, thereby reducing overfitting.
The paper implements SD-UNet for retinal vessel segmentation, verifies its effectiveness on three open-source datasets (DRIVE, STARE, CHASE_DB1), and compares it with other methods; SD-UNet achieves SOTA results.