PyTorch common loss functions
2022-07-06 18:58:00
Reproduced from:
Regression loss functions - Zhihu (zhihu.com)
Summary of loss functions in PyTorch | Dreamhouse blog (dreamhomes.top)
1、L1 loss
# Keras reference implementation (K is the Keras backend)
from keras import backend as K

def mean_absolute_error(y_true, y_pred):
    return K.mean(K.abs(y_pred - y_true), axis=-1)
# Sample code
import torch
from torch import nn

input_data = torch.FloatTensor([[3], [4], [5]])   # (batch_size, output)
target_data = torch.FloatTensor([[2], [5], [8]])  # (batch_size, output)
loss_func = nn.L1Loss()
loss = loss_func(input_data, target_data)
print(loss)  # tensor(1.6667)

# Verification code
print((abs(3 - 2) + abs(4 - 5) + abs(5 - 8)) / 3)  # 1.6666666666666667
2、L2 loss
# Keras reference implementation
def mean_squared_error(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)
# Sample code
import torch
from torch import nn

input_data = torch.FloatTensor([[3], [4], [5]])   # (batch_size, output)
target_data = torch.FloatTensor([[2], [5], [8]])  # (batch_size, output)
loss_func = nn.MSELoss()
loss = loss_func(input_data, target_data)
print(loss)  # tensor(3.6667)

# Verification code
print(((3 - 2) ** 2 + (4 - 5) ** 2 + (5 - 8) ** 2) / 3)  # 3.6666666666666665
3、smooth L1 loss
The smooth L1 loss function is used in Faster R-CNN and SSD.
# Sample code
import torch
from torch import nn

input_data = torch.FloatTensor([[3], [4], [5]])    # (batch_size, output)
target_data = torch.FloatTensor([[2], [4.1], [8]]) # (batch_size, output)
loss_func = nn.SmoothL1Loss()
loss = loss_func(input_data, target_data)
print(loss)  # tensor(1.0017)
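With the default beta = 1, SmoothL1Loss computes 0.5 * x^2 when |x| < 1 and |x| - 0.5 otherwise. A minimal verification of the value above:
# Verification code: smooth L1 with the default beta = 1
def smooth_l1(x):
    return 0.5 * x ** 2 if abs(x) < 1 else abs(x) - 0.5

diffs = [3 - 2, 4 - 4.1, 5 - 8]
print(sum(smooth_l1(d) for d in diffs) / 3)  # 1.0016666...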
4、NLLLoss
# Sample code
# Three samples, three classes, using NLLLoss
# NLLLoss expects log-probabilities, so apply log_softmax to the raw output first
import torch
from torch import nn

input = torch.randn(3, 3)
print(input)
# example values from one run:
# tensor([[ 0.0550, -0.5005, -0.4188],
#         [ 0.7060,  1.1139, -0.0016],
#         [ 0.3008, -0.9968,  0.5147]])
label = torch.LongTensor([0, 2, 1])  # true labels
loss_func = nn.NLLLoss()
loss = loss_func(torch.log_softmax(input, dim=1), label)
print(loss)  # tensor(1.6035)
# Verification code
output = torch.FloatTensor([
    [ 0.0550, -0.5005, -0.4188],
    [ 0.7060,  1.1139, -0.0016],
    [ 0.3008, -0.9968,  0.5147]
])
# 1. softmax followed by log, i.e. torch.log_softmax()
sm = nn.Softmax(dim=1)
temp = torch.log(sm(output))
print(temp)
# tensor([[-0.7868, -1.3423, -1.2607],
#         [-1.0974, -0.6896, -1.8051],
#         [-0.9210, -2.2185, -0.7070]])
# 2. The labels are [0, 2, 1], so the first row contributes -0.7868 (index 0),
#    the second row -1.8051 (index 2), and the third row -2.2185 (index 1).
#    NLLLoss negates these values and averages them: picking the log-probability
#    at the label index and negating it is exactly the cross entropy.
print((0.7868 + 1.8051 + 2.2185) / 3)  # 1.6034666666666666
5、CrossEntropyLoss
# Sample code
# Three samples, three classes, same data as above
import torch
from torch import nn

loss_func1 = nn.CrossEntropyLoss()
output = torch.FloatTensor([
    [ 0.0550, -0.5005, -0.4188],
    [ 0.7060,  1.1139, -0.0016],
    [ 0.3008, -0.9968,  0.5147]
])
true_label = torch.LongTensor([0, 2, 1])  # label ids must start from 0: use 0, 1, 2, not 1, 2, 3
loss = loss_func1(output, true_label)
print(loss)  # tensor(1.6035)
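CrossEntropyLoss is equivalent to log_softmax followed by NLLLoss, which is why it reproduces the value from the previous section; a quick check with the same data:
# CrossEntropyLoss == LogSoftmax + NLLLoss
loss_func2 = nn.NLLLoss()
loss2 = loss_func2(torch.log_softmax(output, dim=1), true_label)
print(loss2)  # tensor(1.6035), identical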
6、BCELoss
Multi-label classification (one sample can have several positive labels)
# Sample code: multi-label classification
import torch
from torch import nn

bce = nn.BCELoss()
output = torch.FloatTensor([
    [ 0.0550, -0.5005, -0.4188],
    [ 0.7060,  1.1139, -0.0016],
    [ 0.3008, -0.9968,  0.5147]
])
# Note: the output must go through a sigmoid before BCELoss
s = nn.Sigmoid()
output = s(output)
# Multi-label targets: each row can contain several 1s
label = torch.FloatTensor([
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 0]
])
loss = bce(output, label)
print(loss)  # tensor(0.9013)
# Verification code
# 1. Model output
output = torch.FloatTensor([
    [ 0.0550, -0.5005, -0.4188],
    [ 0.7060,  1.1139, -0.0016],
    [ 0.3008, -0.9968,  0.5147]
])
# 2. After sigmoid
s = nn.Sigmoid()
output = s(output)
# print(output)
# tensor([[0.5137, 0.3774, 0.3968],
#         [0.6695, 0.7529, 0.4996],
#         [0.5746, 0.2696, 0.6259]])
label = torch.FloatTensor([
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 0]
])
# 3. Apply the binary cross entropy formula y*log(p) + (1-y)*log(1-p) element-wise
# First row
sum_1 = 0
sum_1 += 1 * torch.log(torch.tensor(0.5137)) + (1 - 1) * torch.log(torch.tensor(1 - 0.5137))  # column 1
sum_1 += 0 * torch.log(torch.tensor(0.3774)) + (1 - 0) * torch.log(torch.tensor(1 - 0.3774))  # column 2
sum_1 += 1 * torch.log(torch.tensor(0.3968)) + (1 - 1) * torch.log(torch.tensor(1 - 0.3968))  # column 3
avg_1 = sum_1 / 3
# Second row
sum_2 = 0
sum_2 += 0 * torch.log(torch.tensor(0.6695)) + (1 - 0) * torch.log(torch.tensor(1 - 0.6695))  # column 1
sum_2 += 0 * torch.log(torch.tensor(0.7529)) + (1 - 0) * torch.log(torch.tensor(1 - 0.7529))  # column 2
sum_2 += 1 * torch.log(torch.tensor(0.4996)) + (1 - 1) * torch.log(torch.tensor(1 - 0.4996))  # column 3
avg_2 = sum_2 / 3
# Third row
sum_3 = 0
sum_3 += 1 * torch.log(torch.tensor(0.5746)) + (1 - 1) * torch.log(torch.tensor(1 - 0.5746))  # column 1
sum_3 += 1 * torch.log(torch.tensor(0.2696)) + (1 - 1) * torch.log(torch.tensor(1 - 0.2696))  # column 2
sum_3 += 0 * torch.log(torch.tensor(0.6259)) + (1 - 0) * torch.log(torch.tensor(1 - 0.6259))  # column 3
avg_3 = sum_3 / 3
result = -(avg_1 + avg_2 + avg_3) / 3
print(result)  # tensor(0.9013)
Binary classification
# Sample code
# Two samples, binary classification
import torch
from torch import nn

bce = nn.BCELoss()
output = torch.FloatTensor([
    [ 0.0550, -0.5005],
    [ 0.7060,  1.1139]
])
# Note: the output must go through a sigmoid before BCELoss
s = nn.Sigmoid()
output = s(output)
label = torch.FloatTensor([
    [1, 0],
    [0, 1]
])
loss = bce(output, label)
print(loss)  # tensor(0.6327)
# Verification code
output = torch.FloatTensor([
    [ 0.0550, -0.5005],
    [ 0.7060,  1.1139]
])
# After sigmoid
s = nn.Sigmoid()
output = s(output)
# print(output)
# tensor([[0.5137, 0.3774],
#         [0.6695, 0.7529]])
# true_label = [[1, 0], [0, 1]]
sum_1 = 0
sum_1 += 1 * torch.log(torch.tensor(0.5137)) + (1 - 1) * torch.log(torch.tensor(1 - 0.5137))
sum_1 += 0 * torch.log(torch.tensor(0.3774)) + (1 - 0) * torch.log(torch.tensor(1 - 0.3774))
avg_1 = sum_1 / 2
sum_2 = 0
sum_2 += 0 * torch.log(torch.tensor(0.6695)) + (1 - 0) * torch.log(torch.tensor(1 - 0.6695))
sum_2 += 1 * torch.log(torch.tensor(0.7529)) + (1 - 1) * torch.log(torch.tensor(1 - 0.7529))
avg_2 = sum_2 / 2
print(-(avg_1 + avg_2) / 2)  # tensor(0.6327)
7、BCEWithLogitsLoss
BCEWithLogitsLoss combines a sigmoid with BCELoss in a single, more numerically stable step, so the raw logits are passed in directly.
# Sample code
# Same binary-classification data as above
import torch
from torch import nn

bce_logit = nn.BCEWithLogitsLoss()
output = torch.FloatTensor([
    [ 0.0550, -0.5005],
    [ 0.7060,  1.1139]
])  # no sigmoid applied
label = torch.FloatTensor([
    [1, 0],
    [0, 1]
])
loss = bce_logit(output, label)
print(loss)  # tensor(0.6327), same as BCELoss after sigmoid
8、Focal Loss
Focal loss down-weights easy, well-classified examples so that training focuses on the hard ones.
# Code implementation (sigmoid focal loss)
import torch
import torch.nn.functional as F
def reduce_loss(loss, reduction):
    # none: 0, elementwise_mean: 1, sum: 2
    reduction_enum = F._Reduction.get_enum(reduction)
    if reduction_enum == 0:
        return loss
    elif reduction_enum == 1:
        return loss.mean()
    elif reduction_enum == 2:
        return loss.sum()

def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
    if weight is not None:
        loss = loss * weight
    if avg_factor is None:
        loss = reduce_loss(loss, reduction)
    else:
        # if reduction is 'mean', average the loss by avg_factor
        if reduction == 'mean':
            loss = loss.sum() / avg_factor
        # if reduction is 'none', do nothing; otherwise raise an error
        elif reduction != 'none':
            raise ValueError('avg_factor can not be used with reduction="sum"')
    return loss
def py_sigmoid_focal_loss(pred, target, weight=None, gamma=2.0, alpha=0.25, reduction='mean', avg_factor=None):
    # Note: pred is raw logits; no sigmoid has been applied yet
    pred_sigmoid = pred.sigmoid()
    target = target.type_as(pred)
    # pt = 1 - probability of the true class; large for hard examples
    pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
    focal_weight = (alpha * target + (1 - alpha) *
                    (1 - target)) * pt.pow(gamma)
    # binary_cross_entropy_with_logits applies the sigmoid to pred internally
    loss = F.binary_cross_entropy_with_logits(
        pred, target, reduction='none') * focal_weight
    # print(loss)
    # tensor([[0.0394, 0.0506],
    #         [0.3722, 0.0043]])
    loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
    return loss
if __name__ == '__main__':
    output = torch.FloatTensor([
        [0.0550, -0.5005],
        [0.7060,  1.1139]
    ])
    label = torch.FloatTensor([
        [1, 0],
        [0, 1]
    ])
    loss = py_sigmoid_focal_loss(output, label)
    print(loss)  # tensor(0.1166), the mean of the four element-wise losses above
9、GHM Loss
GHM (gradient harmonizing mechanism) loss re-weights examples according to how many other examples produce a similar gradient norm, which suppresses both the very easy examples and the outliers.
# Code implementation
import torch
from torch import nn
import torch.nn.functional as F
class GHM_Loss(nn.Module):
    def __init__(self, bins, alpha):
        super(GHM_Loss, self).__init__()
        self._bins = bins
        self._alpha = alpha  # momentum for the moving average of bin counts
        self._last_bin_count = None

    def _g2bin(self, g):
        # map a gradient norm g in [0, 1] to a bin index
        return torch.floor(g * (self._bins - 0.0001)).long()

    def _custom_loss(self, x, target, weight):
        raise NotImplementedError

    def _custom_loss_grad(self, x, target):
        raise NotImplementedError

    def forward(self, x, target):
        g = torch.abs(self._custom_loss_grad(x, target)).detach()
        bin_idx = self._g2bin(g)
        bin_count = torch.zeros((self._bins))
        for i in range(self._bins):
            bin_count[i] = (bin_idx == i).sum().item()
        N = (x.size(0) * x.size(1))
        if self._last_bin_count is None:
            self._last_bin_count = bin_count
        else:
            # exponential moving average of bin counts across batches
            bin_count = self._alpha * self._last_bin_count + (1 - self._alpha) * bin_count
            self._last_bin_count = bin_count
        nonempty_bins = (bin_count > 0).sum().item()
        gd = bin_count * nonempty_bins  # gradient density
        gd = torch.clamp(gd, min=0.0001)
        beta = N / gd  # examples in crowded bins get smaller weights
        return self._custom_loss(x, target, beta[bin_idx])
class GHMC_Loss(GHM_Loss):
    # Classification loss
    def __init__(self, bins, alpha):
        super(GHMC_Loss, self).__init__(bins, alpha)

    def _custom_loss(self, x, target, weight):
        return F.binary_cross_entropy_with_logits(x, target, weight=weight)

    def _custom_loss_grad(self, x, target):
        # gradient of BCE-with-logits w.r.t. the logits
        return torch.sigmoid(x).detach() - target

class GHMR_Loss(GHM_Loss):
    # Regression loss
    def __init__(self, bins, alpha, mu):
        super(GHMR_Loss, self).__init__(bins, alpha)
        self._mu = mu

    def _custom_loss(self, x, target, weight):
        d = x - target
        mu = self._mu
        # smooth approximation of L1: sqrt(d^2 + mu^2) - mu
        loss = torch.sqrt(d * d + mu * mu) - mu
        N = x.size(0) * x.size(1)
        return (loss * weight).sum() / N

    def _custom_loss_grad(self, x, target):
        d = x - target
        mu = self._mu
        return d / torch.sqrt(d * d + mu * mu)
if __name__ == '__main__':
    # Note: this loss takes raw logits; no sigmoid needed beforehand
    output = torch.FloatTensor([
        [0.0550, -0.5005],
        [0.7060,  1.1139]
    ])
    label = torch.FloatTensor([
        [1, 0],
        [0, 1]
    ])
    loss_func = GHMC_Loss(bins=10, alpha=0.75)
    loss = loss_func(output, label)
    print(loss)
10、mean_absolute_percentage_error
MAPE: unlike MAE, the absolute difference between prediction and target is divided by the (absolute) target value before averaging, giving a percentage error.
# Keras reference implementation
def mean_absolute_percentage_error(y_true, y_pred):
    diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true),
                                            K.epsilon(),
                                            None))
    return 100. * K.mean(diff, axis=-1)
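For consistency with the rest of the post, here is a minimal PyTorch sketch of the same computation; the helper name and the 1e-7 epsilon (mirroring Keras's default K.epsilon()) are assumptions, not a built-in API:
# PyTorch sketch of MAPE (hypothetical helper, not a built-in loss)
import torch

def mape_loss(y_true, y_pred, eps=1e-7):
    # clamp keeps the denominator away from zero, like K.clip above
    diff = torch.abs((y_true - y_pred) / torch.clamp(torch.abs(y_true), min=eps))
    return 100.0 * torch.mean(diff)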
11、mean_squared_logarithmic_error
MSLE: take the logarithm of (1 + value) for both prediction and target, square the difference, then average.
# Keras reference implementation
def mean_squared_logarithmic_error(y_true, y_pred):
    first_log = K.log(K.clip(y_pred, K.epsilon(), None) + 1.)
    second_log = K.log(K.clip(y_true, K.epsilon(), None) + 1.)
    return K.mean(K.square(first_log - second_log), axis=-1)
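A matching PyTorch sketch (again, the helper name and the 1e-7 epsilon are assumptions):
# PyTorch sketch of MSLE (hypothetical helper, not a built-in loss)
import torch

def msle_loss(y_true, y_pred, eps=1e-7):
    first_log = torch.log(torch.clamp(y_pred, min=eps) + 1.0)
    second_log = torch.log(torch.clamp(y_true, min=eps) + 1.0)
    return torch.mean((first_log - second_log) ** 2)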
12、Huber Loss
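Huber loss is quadratic for small errors and linear for large ones, with the switch point set by delta; with delta = 1 it coincides with the smooth L1 loss above. A minimal sketch, assuming PyTorch 1.9+ where nn.HuberLoss is available:
# Sample code (requires PyTorch >= 1.9 for nn.HuberLoss)
import torch
from torch import nn

input_data = torch.FloatTensor([[3], [4], [5]])
target_data = torch.FloatTensor([[2], [4.1], [8]])
loss_func = nn.HuberLoss(delta=1.0)  # delta = 1 matches SmoothL1Loss
loss = loss_func(input_data, target_data)
print(loss)  # tensor(1.0017), same as the smooth L1 example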
13、Log-Cosh Loss
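Log-cosh behaves like L2 for small errors and like L1 for large ones while staying twice differentiable everywhere. PyTorch has no built-in module for it, so a hand-rolled sketch (the helper name is an assumption):
# Log-cosh loss sketch (hypothetical helper)
import torch

def log_cosh_loss(y_pred, y_true):
    return torch.mean(torch.log(torch.cosh(y_pred - y_true)))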
14、Quantile Loss
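Quantile loss penalizes over- and under-prediction asymmetrically according to the target quantile q; with q = 0.5 it reduces to half the MAE. A minimal sketch (the helper name and default q are assumptions):
# Quantile loss sketch (hypothetical helper); q is the target quantile
import torch

def quantile_loss(y_pred, y_true, q=0.5):
    diff = y_true - y_pred
    # under-prediction (diff > 0) is weighted by q, over-prediction by (1 - q)
    return torch.mean(torch.max(q * diff, (q - 1) * diff))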
15、charbonnier
# Code implementation
class L1_Charbonnier_loss(torch.nn.Module):
    """L1 Charbonnier loss: a smooth, differentiable variant of L1."""
    def __init__(self):
        super(L1_Charbonnier_loss, self).__init__()
        self.eps = 1e-6  # small constant that smooths the loss near zero

    def forward(self, X, Y):
        diff = torch.add(X, -Y)
        error = torch.sqrt(diff * diff + self.eps)
        loss = torch.mean(error)
        return loss
16、wing loss
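Wing loss, proposed for facial landmark localization, amplifies small and medium errors with a logarithmic curve and falls back to L1 for large ones. A sketch using the paper's default w = 10 and epsilon = 2:
# Wing loss sketch; w and epsilon follow the paper's defaults
import math
import torch

def wing_loss(y_pred, y_true, w=10.0, epsilon=2.0):
    diff = torch.abs(y_pred - y_true)
    C = w - w * math.log(1.0 + w / epsilon)  # joins the two pieces at |x| = w
    loss = torch.where(diff < w,
                       w * torch.log(1.0 + diff / epsilon),
                       diff - C)
    return loss.mean()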