
Intelligent target detection 59 -- a detailed explanation of focal loss in PyTorch and its implementation in YoloV4

2022-07-05 07:08:00 Bubbliiiing

Foreword

Let's add Focal Loss to the YoloV4 repository and see what happens. I keep hearing that Focal Loss is useless in the Yolo series, but genuine knowledge comes from practice. Besides, many people have asked for it, so it's better to add it and find out.

What is Focal Loss?

Focal Loss is a scheme for computing the loss. It has two important characteristics:

1. It controls the weight of positive and negative samples.
2. It controls the weight of easy and hard samples.

The concept of positive and negative samples is as follows:
object detection is essentially dense sampling. Thousands of prior boxes (or feature points) are generated on an image, and each ground-truth box is matched to a subset of them. The prior boxes that match a ground-truth box are positive samples; those that do not are negative samples.

The concept of easy and hard samples is as follows:
suppose we have a binary classification problem in which sample 1 and sample 2 both belong to class 1. In the network's predictions, sample 1 belongs to class 1 with probability 0.9, and sample 2 with probability 0.6. The former prediction is more accurate, so sample 1 is an easy sample; the latter is less accurate, so sample 2 is a hard sample.

How is this weight control realized? Read on:

I. Controlling the weight of positive and negative samples

Below is the commonly used cross-entropy loss, taking binary classification as an example:

$$CE(p, y) = \begin{cases} -\log(p), & y = 1 \\ -\log(1 - p), & \text{otherwise} \end{cases}$$

We can simplify the cross-entropy loss with the following $p_t$:

$$CE(p, y) = CE(p_t) = -\log(p_t)$$

where:

$$p_t = \begin{cases} p, & y = 1 \\ 1 - p, & \text{otherwise} \end{cases}$$
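As a quick worked example (the numbers here are illustrative, not from the original text): both of the samples from earlier are classified correctly, yet the less confident one dominates the loss.

$$-\log(0.9) \approx 0.105, \qquad -\log(0.6) \approx 0.511$$

Sample 2 contributes almost five times as much cross-entropy loss as sample 1.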
To reduce the influence of negative samples, we can multiply the conventional loss function by a coefficient $\alpha_t$, defined similarly to $p_t$:

when $y = 1$, $\alpha_t = \alpha$;
otherwise, $\alpha_t = 1 - \alpha$.

$$\alpha_t = \begin{cases} \alpha, & y = 1 \\ 1 - \alpha, & \text{otherwise} \end{cases}$$

$\alpha$ ranges from 0 to 1. By setting $\alpha$, we can control how much positive and negative samples each contribute to the loss. That is:

$$CE(p_t) = -\alpha_t \log(p_t)$$
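As a brief illustration (the value of $\alpha$ is my own choice): with $\alpha = 0.75$, each positive sample's loss is scaled by 0.75 and each negative sample's by 0.25, so a positive sample contributes three times as much to the total loss as an equally confident negative one.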

II. Controlling the weight of easy and hard samples

The higher the predicted probability that a sample belongs to its true class, the easier the sample is to classify. In a binary classification problem, positive samples are labeled 1, negative samples are labeled 0, and $p$ is the predicted probability that a sample belongs to class 1.

For a positive sample, the larger $1 - p$ is, the harder the sample is to classify.
For a negative sample, the larger $p$ is, the harder the sample is to classify.

$p_t$ is defined as before:

$$p_t = \begin{cases} p, & y = 1 \\ 1 - p, & \text{otherwise} \end{cases}$$

so $1 - p_t$ measures how hard each sample is to classify.

The concrete implementation is as follows:

$$FL(p_t) = -(1 - p_t)^{\gamma} \log(p_t)$$

where $(1 - p_t)^{\gamma}$ measures how hard each sample is to distinguish, and $\gamma$ is called the modulation coefficient.

1. When $p_t$ tends to 0, the modulation coefficient tends to 1, and the sample contributes strongly to the total loss. When $p_t$ tends to 1, the modulation coefficient tends to 0, and the sample contributes very little.
2. When $\gamma = 0$, focal loss degenerates into the traditional cross-entropy loss; adjusting $\gamma$ changes the strength of the modulation coefficient.
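
Connecting this back to the earlier example ($\gamma = 2$ is my own illustrative choice here): for the easy sample with $p_t = 0.9$, the modulation coefficient is $(1 - 0.9)^2 = 0.01$; for the hard sample with $p_t = 0.6$, it is $(1 - 0.6)^2 = 0.16$. Relative to the easy sample, the hard sample is weighted 16 times more heavily.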

III. Combining the two weight-control methods

The following formula combines both: it controls the weight of positive and negative samples and the weight of easy and hard samples at the same time.

$$FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$$
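To make the combined formula concrete, here is a minimal, self-contained PyTorch sketch of the element-wise binary focal loss. The function name, default values, and clamping are my own illustrative choices, not code from the repository:

import torch

def binary_focal_loss(p, target, alpha=0.25, gamma=2.0, eps=1e-7):
    # p: predicted probabilities in (0, 1); target: 0/1 labels with the same shape
    p       = torch.clamp(p, eps, 1.0 - eps)        # keep log() finite
    pt      = torch.where(target == 1, p, 1.0 - p)  # pt = p for positives, 1 - p for negatives
    alpha_t = torch.where(target == 1,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1.0 - alpha))
    return -alpha_t * (1.0 - pt) ** gamma * torch.log(pt)

Setting gamma = 0 and alpha = 0.5 reduces this to ordinary cross-entropy scaled by a constant 0.5, which matches point 2 above.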

Implementation in YoloV4

This article takes the PyTorch version of YoloV4 as an example. The repository address is:
https://github.com/bubbliiiing/yolov4-pytorch

First, locate the part of YoloV4's loss that distinguishes positive from negative samples. YoloV4's loss consists of three parts:
loss_loc (regression loss)
loss_conf (object confidence loss)
loss_cls (classification loss)
The part that distinguishes positive from negative samples is loss_conf (the object confidence loss), so that is where we add Focal Loss.

We begin with the probability p from the formulas above. prediction holds the network's raw output for each feature point; we take the confidence channel and apply a sigmoid to turn it into the probability p:

conf = torch.sigmoid(prediction[..., 4])    # channel 4 of the last dimension holds the objectness logit

Next, balance the positive and negative samples by setting the parameter alpha:

# alpha_t: alpha at positive positions (obj_mask is True), 1 - alpha at negative positions
torch.where(obj_mask, torch.ones_like(conf) * self.alpha, torch.ones_like(conf) * (1 - self.alpha))

Then balance the easy and hard samples by setting the parameter gamma:

# (1 - pt)^gamma: 1 - conf at positive positions, conf at negative positions
torch.where(obj_mask, torch.ones_like(conf) - conf, conf) ** self.gamma

Finally, multiply the two factors together and apply them to the original cross-entropy loss:

# ratio = alpha_t * (1 - pt)^gamma, computed element-wise
ratio       = torch.where(obj_mask, torch.ones_like(conf) * self.alpha, torch.ones_like(conf) * (1 - self.alpha)) * torch.where(obj_mask, torch.ones_like(conf) - conf, conf) ** self.gamma
# weight the element-wise BCE, then average over positives and non-ignored negatives
loss_conf   = torch.mean((self.BCELoss(conf, obj_mask.type_as(conf)) * ratio)[noobj_mask.bool() | obj_mask])
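
Putting the pieces together, here is a sketch of the whole confidence branch as one function. It follows the repository snippets above, where obj_mask marks positive prior boxes and noobj_mask marks negatives that are not ignored; I assume the repository's self.BCELoss is an element-wise binary cross-entropy and stand in for it here with torch.nn.functional.binary_cross_entropy using reduction='none'. The wrapper and default values are mine:

import torch
import torch.nn.functional as F

def focal_confidence_loss(conf, obj_mask, noobj_mask, alpha=0.25, gamma=2.0):
    # conf: sigmoid-activated objectness predictions
    # obj_mask / noobj_mask: bool tensors marking positives and non-ignored negatives
    conf    = torch.clamp(conf, 1e-7, 1.0 - 1e-7)   # keep log() finite
    alpha_t = torch.where(obj_mask, torch.ones_like(conf) * alpha,
                          torch.ones_like(conf) * (1 - alpha))
    hard    = torch.where(obj_mask, torch.ones_like(conf) - conf, conf) ** gamma
    bce     = F.binary_cross_entropy(conf, obj_mask.type_as(conf), reduction='none')
    return torch.mean((bce * alpha_t * hard)[noobj_mask | obj_mask])

The Focal Loss paper reports gamma = 2 and alpha = 0.25 as good defaults for RetinaNet; whether those values help YoloV4 is exactly what this experiment is meant to find out, so treat them as hyperparameters to tune.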

Copyright notice
This article was created by [Bubbliiiing]. Please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/186/202207050636218545.html