Loss Functions and Positive/Negative Sample Assignment in Object Detection: RetinaNet and Focal Loss
2022-07-07 05:53:00 【cartes1us】
RetinaNet
RetinaNet was the first single-stage detector whose accuracy surpassed that of two-stage detectors in the field of object detection.
Network structure:
The network design itself is not particularly innovative; as the paper itself puts it:
The design of our RetinaNet detector shares many similarities with
previous dense detectors, in particular the concept of ‘anchors’
introduced by RPN [3] and use of features pyramids as in SSD [9] and
FPN [4]. We emphasize that our simple detector achieves top results
not based on innovations in network design but due to our novel loss.
The detection head decouples classification from BBox regression and is anchor-based. After the FPN, five feature maps of different scales are output; each level corresponds to a base anchor scale from 32 to 512, and each level combines scales and aspect ratios into 9 anchor types, so anchor sizes across the whole network range from 32 to 813. The offsets predicted by the network relative to the anchors are converted into BBoxes the same way as in Faster R-CNN. The figure below is taken from the 霹雳 (bilibili) tutorial.
The structure diagram in the paper is shown below; only three of the FPN scales are drawn. W, H, K, and A denote the feature map's width, height, the number of classes (excluding the background class), and the number of anchors per location (9), respectively.
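The anchor-size range quoted above (32 to 813) can be checked with a short sketch. This assumes the standard RetinaNet setup described in the paper: base sizes 32 to 512 on the five pyramid levels, each multiplied by the three scale factors {2^0, 2^(1/3), 2^(2/3)} (variable names here are mine, not from the paper):

```python
# Base anchor size per FPN level (P3..P7), as in the RetinaNet paper.
base_sizes = [32, 64, 128, 256, 512]

# Three per-level scale octaves: 2^0, 2^(1/3), 2^(2/3).
scale_factors = [2 ** (i / 3) for i in range(3)]

# All anchor sizes used across the network (ignoring aspect ratios,
# which change shape but preserve area).
anchor_sizes = [b * s for b in base_sizes for s in scale_factors]

smallest = min(anchor_sizes)   # 32.0
largest = max(anchor_sizes)    # 512 * 2^(2/3) ≈ 812.7, the "813" in the text
```

Combined with the 3 aspect ratios (1:2, 1:1, 2:1), this gives the 3 × 3 = 9 anchors per location mentioned above.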
Positive and negative sample matching
Positive sample: anchor whose IoU with a gt box is >= 0.5
Negative sample: anchor whose IoU with every gt box is < 0.4
All other anchors (IoU in [0.4, 0.5)) are ignored
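The matching rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; function names and the box layout `[x1, y1, x2, y2]` are my own conventions:

```python
import numpy as np

def iou_matrix(anchors, gts):
    """Pairwise IoU between anchors (N, 4) and gt boxes (M, 4), boxes as [x1, y1, x2, y2]."""
    x1 = np.maximum(anchors[:, None, 0], gts[None, :, 0])
    y1 = np.maximum(anchors[:, None, 1], gts[None, :, 1])
    x2 = np.minimum(anchors[:, None, 2], gts[None, :, 2])
    y2 = np.minimum(anchors[:, None, 3], gts[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
    area_g = (gts[:, 2] - gts[:, 0]) * (gts[:, 3] - gts[:, 1])
    return inter / (area_a[:, None] + area_g[None, :] - inter)

def assign(anchors, gts, pos_thr=0.5, neg_thr=0.4):
    """Per-anchor label: 1 = positive, 0 = negative, -1 = ignored (IoU in [0.4, 0.5))."""
    best_iou = iou_matrix(anchors, gts).max(axis=1)  # best gt overlap for each anchor
    labels = np.full(len(anchors), -1, dtype=int)
    labels[best_iou >= pos_thr] = 1
    labels[best_iou < neg_thr] = 0
    return labels
```

For example, with a single gt box `[0, 0, 10, 10]`, an identical anchor is positive, a far-away anchor is negative, and an anchor with IoU 0.45 is ignored.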
Foreground/background class imbalance
A variant of the CE loss
The biggest innovation of this work is the Focal loss: a rewrite of the classic cross-entropy loss, applied to the classification subnet, that sharply down-weights the loss of easily classified samples. Its form is elegant. The paper recommends γ = 2; with γ = 0, FL degenerates to CE.
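In the paper's notation, FL(p_t) = -α_t (1 - p_t)^γ log(p_t), where p_t is the predicted probability of the true class. A minimal NumPy sketch of the binary form (the function name is mine):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive class; y: label in {0, 1}.
    With gamma = 0 and alpha_t = 1, this reduces to standard cross entropy.
    """
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

For an easy sample (e.g. p_t = 0.9), the modulating factor (1 - p_t)^2 = 0.01 shrinks the loss to 1% of the plain CE value, which is exactly how easy samples get down-weighted.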
Loss:
The classification loss is the Focal loss computed over all samples (positive and negative), divided by the number of positive samples N_pos. The BBox regression loss is the smooth L1 loss proposed in Fast R-CNN.
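The regression term and the normalization described above can be sketched as follows. The smooth L1 form is the standard one from Fast R-CNN; the `total_loss` helper and its signature are my own illustration, not the paper's code:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Smooth L1 loss from Fast R-CNN, elementwise on x = prediction - target:
    0.5 * x^2 / beta for |x| < beta, |x| - 0.5 * beta otherwise."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def total_loss(cls_losses, reg_losses, n_pos):
    """RetinaNet-style normalization (a sketch): focal loss is summed over ALL
    anchors, smooth L1 over positive anchors only, both divided by N_pos."""
    n = max(1, n_pos)  # guard against images with no positive anchors
    return cls_losses.sum() / n + reg_losses.sum() / n
```

The quadratic region near zero keeps gradients small for nearly correct predictions, while the linear region limits the influence of outliers compared to plain L2.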
To be continued