[Paper Reading] Semi-Supervised Left Atrium Segmentation with Mutual Consistency Training
2022-07-07 05:33:00 【xiongxyowo】
[Paper] [Code] [MICCAI 2021]
Abstract
Semi-supervised learning has attracted great attention in machine learning, especially for medical image segmentation, because it alleviates the heavy burden of collecting abundant densely annotated data for training. However, most existing methods underestimate the importance of challenging regions (e.g., small branches or blurred edges) during training. We argue that these unlabeled regions may contain more critical information for minimizing the uncertainty of model predictions and should be emphasized during training. Therefore, in this paper we propose a novel Mutual Consistency Network (MC-Net) for semi-supervised left atrium segmentation from 3D MR images. In particular, our MC-Net consists of one encoder and two slightly different decoders, and the prediction discrepancy between the two decoders is transformed into an unsupervised loss by our cycled pseudo-label scheme to encourage mutual consistency. This mutual consistency pushes the two decoders toward consistent, low-entropy predictions and enables the model to gradually capture generalized features from these unlabeled challenging regions. We evaluate our MC-Net on the public Left Atrium (LA) database, where it achieves impressive performance gains by effectively leveraging unlabeled data. Our MC-Net outperforms six recent semi-supervised methods on left atrium segmentation and sets a new state of the art on the LA database.
Method
The general idea of this paper is to design better pseudo-labels to improve semi-supervised performance. The pipeline works as follows.
The first question is how to measure uncertainty. The authors note that popular approaches such as MC-Dropout require multiple forward passes during training, which adds noticeable time overhead. They therefore "trade space for time" instead: an auxiliary decoder D_B is added whose structure is deliberately simple, producing its final output directly through several upsampling interpolation steps, while the original decoder D_A matches the decoder in V-Net.
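To illustrate what "just upsampling interpolation" means, here is a toy 1-D nearest-neighbour upsampler in plain Python. This is a hypothetical stand-in for exposition only; the actual D_B operates on 3-D feature maps and would use trilinear interpolation:

```python
def upsample_nearest(x, scale=2):
    """Nearest-neighbour upsampling of a 1-D feature row: repeat each
    value `scale` times. A parameter-free stand-in for the interpolation
    layers that make up the simple auxiliary decoder D_B."""
    return [v for v in x for _ in range(scale)]

# Two rounds of x2 upsampling take 2 features to 8, with no learned weights.
features = [1.0, 4.0]
out = upsample_nearest(upsample_nearest(features))
```

Because such a decoder has no parameters of its own, it adds almost no memory or compute, which is exactly why its predictions are expected to be weaker than D_A's.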
In this way, without introducing many extra network parameters (the auxiliary decoder's structure is very simple), the model obtains two different predictions in a single forward pass. Naturally, the auxiliary decoder's output is "worse" (the figure also shows this). Uncertainty is then computed simply by comparing the difference between the two outputs.
Although this design looks very simple, it is quite clever on reflection. One decoder is strong and the other weak: if a sample is easy, even the weak decoder produces a decent result, so the two outputs differ little and the uncertainty is low. For information-rich, hard samples, the weak decoder's output is poor, the two outputs differ substantially, and the uncertainty is high.
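This intuition can be sketched with a toy discrepancy measure in plain Python (not the paper's code; the real model compares full 3-D probability maps, and a mean squared difference is just one reasonable choice of metric):

```python
def discrepancy(p_a, p_b):
    """Mean squared difference between two per-voxel probability lists,
    used here as a proxy for prediction uncertainty."""
    assert len(p_a) == len(p_b)
    return sum((a - b) ** 2 for a, b in zip(p_a, p_b)) / len(p_a)

# Easy region: strong and weak decoders agree -> low uncertainty.
easy = discrepancy([0.95, 0.05, 0.90], [0.93, 0.07, 0.88])

# Hard region (e.g. a fuzzy edge): the weak decoder disagrees -> high uncertainty.
hard = discrepancy([0.95, 0.05, 0.90], [0.55, 0.45, 0.60])
```

Regions with high discrepancy are exactly the challenging areas the paper wants the training process to emphasize.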
The two outputs are first processed with a sharpening function to suppress potential noise in the predictions. The sharpening function is defined as: sPL = P^{1/T} / (P^{1/T} + (1-P)^{1/T}), where T is a temperature. For pseudo-label supervision, the output of B is then used to supervise A, and the output of A to supervise B. In this way the strong decoder D_A can learn the invariant features captured by the weak decoder D_B to reduce overfitting, while the weak decoder D_B can learn the advanced features of the strong decoder D_A.
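A minimal sketch of the sharpening function and the cycled pseudo-label supervision, in plain Python on per-voxel probability lists (the paper operates on tensors, and the exact form of the unsupervised loss is an assumption here; MSE is used for illustration):

```python
def sharpen(p, T=0.1):
    """Sharpening function sPL = P^(1/T) / (P^(1/T) + (1-P)^(1/T)).
    Low temperature T pushes probabilities toward 0 or 1, removing noise."""
    num = p ** (1.0 / T)
    return num / (num + (1.0 - p) ** (1.0 / T))

def cyclic_pseudo_label_loss(p_a, p_b, T=0.1):
    """Each decoder's sharpened output serves as the pseudo-label
    supervising the *other* decoder (B supervises A, A supervises B)."""
    pl_a = [sharpen(p, T) for p in p_a]   # pseudo-label from decoder A
    pl_b = [sharpen(p, T) for p in p_b]   # pseudo-label from decoder B
    n = len(p_a)
    loss_a = sum((a, b) and (a - b) ** 2 for a, b in zip(p_a, pl_b)) / n
    loss_b = sum((b - a) ** 2 for b, a in zip(p_b, pl_a)) / n
    return loss_a + loss_b
```

Note that sharpen(0.5) stays at 0.5 while any confident prediction is driven toward 0 or 1; the cyclic loss is small when the two decoders agree and large on the challenging regions where they diverge, which is what turns their disagreement into a training signal.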