[Paper Reading] Semi-supervised Left Atrium Segmentation with Mutual Consistency Training
2022-07-07 05:33:00 【xiongxyowo】
[Paper] [Code] [MICCAI 21]
Abstract
Semi-supervised learning has attracted great attention in machine learning, especially for medical image segmentation, because it reduces the heavy burden of collecting large amounts of densely annotated training data. However, most existing methods underestimate the importance of challenging regions (such as small branches or blurred edges) during training. We argue that these unlabeled regions may contain more critical information for minimizing the uncertainty of the model's predictions, and should be emphasized during training. Therefore, in this paper we propose a novel Mutual Consistency Network (MC-Net) for semi-supervised left atrium segmentation from 3D MR images. In particular, our MC-Net consists of one encoder and two slightly different decoders, and the prediction discrepancy between the two decoders is transformed into an unsupervised loss by our cycled pseudo-label scheme to encourage mutual consistency. This mutual consistency pushes the two decoders toward consistent, low-entropy predictions, and enables the model to gradually capture generalized features from these unlabeled challenging regions. We evaluate our MC-Net on the public Left Atrium (LA) database, where it achieves impressive performance gains by effectively exploiting unlabeled data. Our MC-Net outperforms six recent semi-supervised methods on left atrium segmentation and sets new state-of-the-art performance on the LA database.
Method
The general idea of this paper is to construct better pseudo labels in order to improve semi-supervised performance. The process is as follows:
The first question is how to measure uncertainty. The authors argue that popular approaches such as MC-Dropout require multiple forward passes during training, which adds extra time overhead. The paper therefore trades "space for time": it introduces an auxiliary decoder $D_B$ whose structure is deliberately simple, producing the final output directly through repeated upsampling by interpolation, while the original decoder $D_A$ matches the standard V-Net decoder.
In this way, without introducing many extra network parameters (the auxiliary decoder's structure is very simple), the model obtains two different predictions from a single forward pass. Naturally, the auxiliary decoder's result is worse (the paper's figure also shows this). To compute uncertainty, one then only needs to compare the difference between the two results.
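As a concrete illustration, here is a minimal PyTorch sketch of this one-encoder/two-decoder layout. The layer sizes and names here are toy assumptions for clarity; the actual MC-Net uses a full V-Net backbone on 3D MR volumes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCNetSketch(nn.Module):
    """Toy sketch of the MC-Net idea: one shared encoder, two decoders.

    Decoder A upsamples with learned transposed convolutions (V-Net-like);
    decoder B is deliberately weak, using only plain interpolation.
    """
    def __init__(self, in_ch=1):
        super().__init__()
        # Shared encoder: one plain stage plus one downsampling stage.
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        # Decoder A: learned upsampling via transposed convolution.
        self.dec_a = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(),
            nn.Conv3d(8, 1, 1),
        )
        # Decoder B: just a 1x1 conv; upsampling is non-learned interpolation.
        self.dec_b = nn.Conv3d(16, 1, 1)

    def forward(self, x):
        f = self.enc2(self.enc1(x))
        # Prediction from the strong decoder.
        p_a = torch.sigmoid(self.dec_a(f))
        # Prediction from the weak decoder: interpolate back to input size.
        p_b = torch.sigmoid(F.interpolate(
            self.dec_b(f), size=x.shape[2:], mode='trilinear',
            align_corners=False))
        return p_a, p_b
```

Both decoders produce a probability map of the input's spatial size from the same encoder features in a single forward pass, so their disagreement can serve as an uncertainty signal at no extra inference cost.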
Although this approach looks very simple, it is quite clever on reflection: one decoder is strong and the other weak. If a sample is easy, even the weak head obtains a decent result, the two predictions differ little, and the uncertainty is low. For samples carrying more information, the weak head performs poorly, the two predictions differ substantially, and the uncertainty is high.
The two predictions are first processed with a sharpening function to suppress potential noise in the outputs. The sharpening function is defined as:

$$sPL = \frac{P^{1/T}}{P^{1/T} + (1-P)^{1/T}}$$

For pseudo-label supervision, $B$'s output is used to supervise $A$, and $A$'s output is used to supervise $B$. In this way, the strong decoder $D_A$ can learn the invariant features of the weak decoder $D_B$ to reduce overfitting, while the weak decoder $D_B$ can in turn learn the advanced features of the strong decoder $D_A$.
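The sharpening step and the cycled pseudo-label supervision can be sketched in NumPy as below. The function names and the MSE consistency penalty are my assumptions for illustration; in actual training the sharpened pseudo labels would be treated as constants (stop-gradient), with $T$ a small temperature.

```python
import numpy as np

def sharpen(p, T=0.1):
    """Sharpen a foreground probability map:
    sPL = P^(1/T) / (P^(1/T) + (1 - P)^(1/T)).
    Small T pushes probabilities toward 0 or 1."""
    num = p ** (1.0 / T)
    return num / (num + (1.0 - p) ** (1.0 / T))

def mutual_consistency_loss(p_a, p_b, T=0.1):
    """Cycled pseudo-label loss: decoder A's sharpened output supervises
    decoder B, and vice versa (MSE penalty assumed here)."""
    pl_a = sharpen(p_a, T)  # pseudo label produced by decoder A
    pl_b = sharpen(p_b, T)  # pseudo label produced by decoder B
    return np.mean((p_a - pl_b) ** 2) + np.mean((p_b - pl_a) ** 2)
```

When the two decoders agree and are confident, both terms are near zero; when they disagree (an uncertain region), the loss is large, so the hard regions dominate the unsupervised gradient, which is exactly the emphasis on challenging areas the paper argues for.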