【MEDICAL】Attend to Medical Ontologies: Content Selection for Clinical Abstractive Summarization
2022-07-02 07:20:00 【lwgkzl】
Task:
According to the authors, an English medical imaging report contains two descriptions at the same time: FINDINGS, which describes the details and characteristics of the whole image, and IMPRESSION, which focuses only on the key information in the image. That key information is already contained in the FINDINGS. In short, what this paper does is generate the IMPRESSION from the FINDINGS, i.e., apply text summarization in the medical domain.
Model:

Content Selector:
The selector is implemented as a sequence-labeling model: every word in the FINDINGS sequence gets one of two labels, 0 or 1. If the current word is a medical ontology term and it also appears in the corresponding IMPRESSION, the position is labeled 1; otherwise it is labeled 0. A model trained this way can then tag the FINDINGS at test time and pick out the salient medical ontology terms.
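The post does not spell out the tagger's architecture, so here is a minimal sketch of such a binary sequence labeler, assuming a BiLSTM tagger in PyTorch (the class name, vocabulary size, and dimensions are all illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class ContentSelector(nn.Module):
    """Binary tagger: label 1 = ontology term that reappears in IMPRESSION."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) indices of the FINDINGS words
        states, _ = self.bilstm(self.embed(token_ids))   # (B, T, 2H)
        return self.classifier(states).squeeze(-1)       # one logit per word

selector = ContentSelector(vocab_size=30000)
logits = selector(torch.randint(1, 30000, (2, 40)))
selected_mask = torch.sigmoid(logits) > 0.5   # positions tagged as salient
```

At training time the per-word logits would be fed to nn.BCEWithLogitsLoss against the 0/1 labels described above.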
Now look at the model figure:
On the left is the encoder. The upper LSTM encodes the whole FINDINGS and produces a hidden state h_i for each position. The lower LSTM encodes the key medical ontology terms selected by the Content Selector; its final hidden state h_lo represents all of the salient ontology terms contained in the current FINDINGS. This information is then fused into the upper FINDINGS-encoding LSTM to obtain a new vector for each position:
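The formulas appear only as images in the original post; judging from the description below, the fusion plausibly takes the following form (the weight matrix W and the exact form of the first step are assumptions reconstructed from the prose):

$$\tilde{h}_i = W\,[\,h_i \,;\, h_{l_o}\,], \qquad h'_i = h_i \odot \tilde{h}_i$$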

Here h_i is the upper LSTM's output at each position, and h_lo is the lower LSTM's output at its last position. The two are merged, and the result is multiplied element-wise with the original output at each position, which is the circle (⊙) in the second formula. The fused h_i' then takes part in the attention at every decoder step, guiding the decoder's generation.
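A minimal sketch of this fusion step, assuming h_i and h_lo have already been produced by the two encoders (tensor names, dimensions, and the linear merge layer are illustrative):

```python
import torch
import torch.nn as nn

batch, seq_len, hidden_dim = 2, 40, 256

# h: per-position outputs of the upper FINDINGS LSTM (h_i in the text)
h = torch.randn(batch, seq_len, hidden_dim)
# h_lo: last output of the lower ontology-term LSTM (h_lo in the text)
h_lo = torch.randn(batch, hidden_dim)

fuse = nn.Linear(2 * hidden_dim, hidden_dim)

# broadcast the ontology vector to every position, then concatenate ("merge")
h_lo_tiled = h_lo.unsqueeze(1).expand(-1, seq_len, -1)
merged = fuse(torch.cat([h, h_lo_tiled], dim=-1))    # (B, T, H)

# element-wise multiplication with the original outputs (the circle)
h_prime = h * merged                                  # (B, T, H)
# h_prime replaces h_i as what the decoder attends over at each step
```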
Post-reading summary:
The model in this paper is not difficult: it is still the classic seq2seq architecture, in the familiar pattern of an LSTM encoder and an LSTM decoder. The only highlight is the Content Selector, which applies the idea of sequence tagging. It is a bit like a copy mechanism, except that it does not modify the output distribution in the final step. The whole paper makes only this one small change, then demonstrates that it really is effective, and that is enough.