[Paper Reading] Semi-supervised Left Atrium Segmentation with Mutual Consistency Training
2022-07-07 05:33:00 【xiongxyowo】
[Paper] [Code] [MICCAI 21]
Abstract
Semi-supervised learning has attracted great attention in machine learning, especially for medical image segmentation, because it alleviates the heavy burden of collecting abundant densely annotated data for training. However, most existing methods underestimate the importance of challenging regions (e.g., small branches or blurred edges) during training. We argue that these unlabeled regions may contain more critical information for minimizing the uncertainty of the model's predictions and should be emphasized during training. Therefore, in this paper we propose a novel Mutual Consistency Network (MC-Net) for semi-supervised left atrium segmentation from 3D MR images. In particular, our MC-Net consists of one encoder and two slightly different decoders, and the prediction discrepancy between the two decoders is transformed into an unsupervised loss by our cycled pseudo-label scheme to encourage mutual consistency. This mutual consistency encourages the two decoders to produce consistent, low-entropy predictions and enables the model to gradually capture generalized features from these unlabeled challenging regions. We evaluate our MC-Net on the public Left Atrium (LA) database, where it achieves impressive performance gains by effectively leveraging unlabeled data. Our MC-Net outperforms six recent semi-supervised methods for left atrium segmentation and sets a new state of the art on the LA database.
Method
The general idea of this paper is to design better pseudo labels to improve semi-supervised performance. The pipeline is as follows:
The first question is how to measure uncertainty. The paper argues that popular approaches such as MC-Dropout require multiple forward passes per training step, which adds extra time overhead. Instead, it trades space for time: an auxiliary decoder $D_B$ is added whose structure is deliberately very simple, producing the final prediction directly through repeated upsampling by interpolation, while the original decoder $D_A$ matches the V-Net decoder.
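To make the layout concrete, here is a minimal PyTorch sketch of the one-encoder/two-decoder design described above. The channel counts, layer choices, and the exact form of the lightweight decoder are illustrative assumptions rather than the paper's configuration; the only point is that a single forward pass yields two slightly different predictions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: a shared encoder feeding two decoders.
# D_A mimics a V-Net-style decoder (learned upsampling with skip connections);
# D_B is deliberately lightweight (1x1x1 head + plain trilinear interpolation).
# Input spatial size is assumed divisible by 4.

class TinyEncoder(nn.Module):
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv3d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv3d(feat * 2, feat * 4, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return f1, f2, f3

class StrongDecoder(nn.Module):
    """D_A: V-Net-like decoder with learned upsampling."""
    def __init__(self, feat=16, num_classes=2):
        super().__init__()
        self.up1 = nn.ConvTranspose3d(feat * 4, feat * 2, 2, stride=2)
        self.conv1 = nn.Sequential(nn.Conv3d(feat * 4, feat * 2, 3, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose3d(feat * 2, feat, 2, stride=2)
        self.conv2 = nn.Sequential(nn.Conv3d(feat * 2, feat, 3, padding=1), nn.ReLU())
        self.head = nn.Conv3d(feat, num_classes, 1)

    def forward(self, f1, f2, f3):
        x = self.conv1(torch.cat([self.up1(f3), f2], dim=1))
        x = self.conv2(torch.cat([self.up2(x), f1], dim=1))
        return self.head(x)

class WeakDecoder(nn.Module):
    """D_B: just a 1x1x1 head followed by interpolation back to full resolution."""
    def __init__(self, feat=16, num_classes=2):
        super().__init__()
        self.head = nn.Conv3d(feat * 4, num_classes, 1)

    def forward(self, f1, f2, f3):
        x = self.head(f3)
        return F.interpolate(x, size=f1.shape[2:], mode="trilinear", align_corners=False)

class MCNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = TinyEncoder()
        self.dec_a = StrongDecoder()
        self.dec_b = WeakDecoder()

    def forward(self, x):
        f1, f2, f3 = self.encoder(x)
        # One forward pass, two predictions.
        return self.dec_a(f1, f2, f3), self.dec_b(f1, f2, f3)
```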
In this way, without introducing many extra parameters (the auxiliary decoder is very simple), the model obtains two different predictions from a single forward pass. The auxiliary decoder's prediction is obviously the worse of the two (as the figure also shows). To compute the uncertainty, we then only need to compare the difference between the two predictions.
Although this approach looks very simple, it is quite clever on reflection: one decoder is strong and the other is weak. If a sample is easy, even the weak head produces a decent result, the two predictions differ little, and the uncertainty is low. For samples carrying more information, the weak head performs poorly, the two predictions diverge, and the uncertainty is high.
Both predictions are first processed by a sharpening function to suppress potential noise in the predicted probabilities. The sharpening function is defined as

$$sPL = \frac{P^{1/T}}{P^{1/T} + (1-P)^{1/T}}$$

When applying the pseudo-label supervision, the output of $D_B$ supervises $D_A$ and the output of $D_A$ supervises $D_B$. In this way the strong decoder $D_A$ can learn the invariant features of the weak decoder to reduce overfitting, while the weak decoder $D_B$ can also learn the advanced features of the strong decoder $D_A$.
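Below is a hedged sketch of the sharpening step and the cyclic pseudo-label (mutual consistency) loss as described above: each decoder's sharpened prediction serves as the pseudo label for the other. The MSE distance, the temperature value, and the detaching of pseudo labels are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def sharpen(p: torch.Tensor, T: float = 0.1) -> torch.Tensor:
    """Sharpening function sPL = P^(1/T) / (P^(1/T) + (1-P)^(1/T)).
    A low temperature T pushes probabilities toward 0 or 1."""
    p_t = p ** (1.0 / T)
    return p_t / (p_t + (1.0 - p) ** (1.0 / T))

def mutual_consistency_loss(logits_a: torch.Tensor,
                            logits_b: torch.Tensor,
                            T: float = 0.1) -> torch.Tensor:
    """Cyclic pseudo-label scheme (illustrative): the sharpened output of one
    decoder supervises the soft output of the other. Pseudo labels are detached
    so that no gradient flows through them; MSE is an assumed distance choice."""
    prob_a = torch.softmax(logits_a, dim=1)
    prob_b = torch.softmax(logits_b, dim=1)

    pseudo_a = sharpen(prob_a, T).detach()   # pseudo label produced by D_A
    pseudo_b = sharpen(prob_b, T).detach()   # pseudo label produced by D_B

    loss_a = F.mse_loss(prob_a, pseudo_b)    # D_B supervises D_A
    loss_b = F.mse_loss(prob_b, pseudo_a)    # D_A supervises D_B
    return loss_a + loss_b
```

During training this unsupervised term would be added to the ordinary supervised loss on the labeled scans, so that both decoders are pushed toward consistent, low-entropy predictions on the challenging unlabeled regions.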