PyTorch Common Loss Functions
2022-07-06 18:58:00
Reprinted from:
Regression loss functions - Zhihu (zhihu.com)
Summary of loss functions in PyTorch | Dreamhomes blog (dreamhomes.top)
1、L1 loss
L1 loss (mean absolute error) averages the absolute differences between predictions and targets:

$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}|y_i - \hat{y}_i|$
# Keras reference implementation (K is keras.backend)
from keras import backend as K

def mean_absolute_error(y_true, y_pred):
    return K.mean(K.abs(y_pred - y_true), axis=-1)
# Sample code
import torch
from torch import nn

input_data = torch.FloatTensor([[3], [4], [5]])   # (batch_size, output)
target_data = torch.FloatTensor([[2], [5], [8]])  # (batch_size, output)
loss_func = nn.L1Loss()
loss = loss_func(input_data, target_data)
print(loss)  # tensor(1.6667)
# Manual verification
print((abs(3 - 2) + abs(4 - 5) + abs(5 - 8)) / 3)  # 1.6666666666666667
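nn.L1Loss averages over all elements by default; the reduction argument switches this behaviour (same tensors as above):

# reduction='sum' returns the total absolute error; 'none' keeps per-element losses
print(nn.L1Loss(reduction='sum')(input_data, target_data))   # tensor(5.)
print(nn.L1Loss(reduction='none')(input_data, target_data))  # tensor([[1.], [1.], [3.]])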
2、L2 loss
L2 loss (mean squared error) averages the squared differences:

$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$
# Keras reference implementation
def mean_squared_error(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)
# Sample code
import torch
from torch import nn

input_data = torch.FloatTensor([[3], [4], [5]])   # (batch_size, output)
target_data = torch.FloatTensor([[2], [5], [8]])  # (batch_size, output)
loss_func = nn.MSELoss()
loss = loss_func(input_data, target_data)
print(loss)  # tensor(3.6667)
# Manual verification
print(((3 - 2) ** 2 + (4 - 5) ** 2 + (5 - 8) ** 2) / 3)  # 3.6666666666666665
3、smooth L1 loss
Smooth L1 is quadratic for small errors and linear for large ones, which makes it less sensitive to outliers than L2:

$\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5x^2 & |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}, \quad x = y - \hat{y}$
The smooth L1 loss function is used in Faster R-CNN and SSD.

# Sample code
import torch
from torch import nn

input_data = torch.FloatTensor([[3], [4], [5]])     # (batch_size, output)
target_data = torch.FloatTensor([[2], [4.1], [8]])  # (batch_size, output)
loss_func = nn.SmoothL1Loss()
loss = loss_func(input_data, target_data)
print(loss)  # tensor(1.0017)
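A quick manual check: with the default settings PyTorch uses 0.5 * x ** 2 for |x| < 1 and |x| - 0.5 otherwise, so:

# |3-2|=1.0 -> 1.0-0.5, |4-4.1|=0.1 -> 0.5*0.1**2, |5-8|=3.0 -> 3.0-0.5
print(((1.0 - 0.5) + 0.5 * 0.1 ** 2 + (3.0 - 0.5)) / 3)  # 1.0016666..., i.e. 1.0017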
4、NLLLoss

NLLLoss (negative log likelihood loss) expects log-probabilities and class indices: $\ell = -\frac{1}{N}\sum_{n} x_{n,\,y_n}$, where $x_{n,\,y_n}$ is the log-probability of sample $n$'s true class.
# Sample code: three samples, three classes, using NLLLoss
import torch
from torch import nn

input = torch.FloatTensor([[ 0.0550, -0.5005, -0.4188],
                           [ 0.7060,  1.1139, -0.0016],
                           [ 0.3008, -0.9968,  0.5147]])
label = torch.LongTensor([0, 2, 1])  # ground-truth labels
# NLLLoss expects log-probabilities, so apply log-softmax first
log_prob = torch.log_softmax(input, dim=1)
loss_func = nn.NLLLoss()
loss = loss_func(log_prob, label)
print(loss)  # tensor(1.6035)
# Verification code
output = torch.FloatTensor([
    [ 0.0550, -0.5005, -0.4188],
    [ 0.7060,  1.1139, -0.0016],
    [ 0.3008, -0.9968,  0.5147]]
)
# 1. softmax followed by log (equivalent to torch.log_softmax)
sm = nn.Softmax(dim=1)
temp = torch.log(sm(output))
print(temp)
# tensor([[-0.7868, -1.3423, -1.2607],
#         [-1.0974, -0.6896, -1.8051],
#         [-0.9210, -2.2185, -0.7070]])
# 2. The labels are [0, 2, 1], so row 1 takes its 1st value (-0.7868),
#    row 2 its 3rd (-1.8051), and row 3 its 2nd (-2.2185); then the sign is flipped.
#    In other words, NLLLoss picks the log-probability at the label index and
#    negates it; combined with log-softmax, that is exactly cross entropy.
print((0.7868 + 1.8051 + 2.2185) / 3)  # 1.6034666666666666
5、CrossEntropyLoss
CrossEntropyLoss combines log-softmax and NLLLoss in a single class:

$\ell(x, y) = -\frac{1}{N}\sum_{n}\log\frac{\exp(x_{n,\,y_n})}{\sum_{j}\exp(x_{n,\,j})}$
# Sample code: three samples, three classes (same data as above)
import torch
from torch import nn

loss_func1 = nn.CrossEntropyLoss()
output = torch.FloatTensor([
    [ 0.0550, -0.5005, -0.4188],
    [ 0.7060,  1.1139, -0.0016],
    [ 0.3008, -0.9968,  0.5147]]
)
true_label = torch.LongTensor([0, 2, 1])  # Note: class ids must start at 0 (0, 1, 2, not 1, 2, 3)
loss = loss_func1(output, true_label)
print(loss)  # tensor(1.6035)
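CrossEntropyLoss is exactly log-softmax followed by NLLLoss, which is easy to confirm with the tensors above:

# log_softmax + NLLLoss reproduces CrossEntropyLoss
loss2 = nn.NLLLoss()(torch.log_softmax(output, dim=1), true_label)
print(loss2)  # tensor(1.6035)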
6、BCELoss
BCELoss computes binary cross entropy on probabilities:

$\ell = -\frac{1}{N}\sum_{n}\left[y_n \log p_n + (1 - y_n)\log(1 - p_n)\right]$

Multi-label classification (each sample can have several positive labels)
# Sample code: multi-label classification
import torch
from torch import nn

bce = nn.BCELoss()
output = torch.FloatTensor(
    [
        [ 0.0550, -0.5005, -0.4188],
        [ 0.7060,  1.1139, -0.0016],
        [ 0.3008, -0.9968,  0.5147]
    ]
)
# Note: the output must go through a sigmoid first; BCELoss expects probabilities
s = nn.Sigmoid()
output = s(output)
# Multi-label targets: each row is one sample with several independent labels
label = torch.FloatTensor(
    [
        [1, 0, 1],
        [0, 0, 1],
        [1, 1, 0]
    ]
)
loss = bce(output, label)
print(loss)  # tensor(0.9013)
# Verification code
# 1. Model output
output = torch.FloatTensor(
    [
        [ 0.0550, -0.5005, -0.4188],
        [ 0.7060,  1.1139, -0.0016],
        [ 0.3008, -0.9968,  0.5147]
    ]
)
# 2. After sigmoid
s = nn.Sigmoid()
output = s(output)
# print(output)
# tensor([[0.5137, 0.3774, 0.3968],
#         [0.6695, 0.7529, 0.4996],
#         [0.5746, 0.2696, 0.6259]])
label = torch.FloatTensor(
    [
        [1, 0, 1],
        [0, 0, 1],
        [1, 1, 0]
    ]
)
# 3. Accumulate y*log(p) + (1-y)*log(1-p) element by element
# Row 1
sum_1 = 0
sum_1 += 1 * torch.log(torch.tensor(0.5137)) + (1 - 1) * torch.log(torch.tensor(1 - 0.5137))  # column 1
sum_1 += 0 * torch.log(torch.tensor(0.3774)) + (1 - 0) * torch.log(torch.tensor(1 - 0.3774))  # column 2
sum_1 += 1 * torch.log(torch.tensor(0.3968)) + (1 - 1) * torch.log(torch.tensor(1 - 0.3968))  # column 3
avg_1 = sum_1 / 3
# Row 2
sum_2 = 0
sum_2 += 0 * torch.log(torch.tensor(0.6695)) + (1 - 0) * torch.log(torch.tensor(1 - 0.6695))  # column 1
sum_2 += 0 * torch.log(torch.tensor(0.7529)) + (1 - 0) * torch.log(torch.tensor(1 - 0.7529))  # column 2
sum_2 += 1 * torch.log(torch.tensor(0.4996)) + (1 - 1) * torch.log(torch.tensor(1 - 0.4996))  # column 3
avg_2 = sum_2 / 3
# Row 3
sum_3 = 0
sum_3 += 1 * torch.log(torch.tensor(0.5746)) + (1 - 1) * torch.log(torch.tensor(1 - 0.5746))  # column 1
sum_3 += 1 * torch.log(torch.tensor(0.2696)) + (1 - 1) * torch.log(torch.tensor(1 - 0.2696))  # column 2
sum_3 += 0 * torch.log(torch.tensor(0.6259)) + (1 - 0) * torch.log(torch.tensor(1 - 0.6259))  # column 3
avg_3 = sum_3 / 3
result = -(avg_1 + avg_2 + avg_3) / 3
print(result)  # tensor(0.9013)
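The same verification can be written in one vectorized line instead of looping over rows and columns (output and label are the sigmoid outputs and targets defined just above):

# Element-wise binary cross entropy, averaged over all entries
manual = -(label * torch.log(output) + (1 - label) * torch.log(1 - output)).mean()
print(manual)  # tensor(0.9013)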
Binary classification
# Sample code: two samples, binary classification
import torch
from torch import nn

bce = nn.BCELoss()
output = torch.FloatTensor(
    [
        [ 0.0550, -0.5005],
        [ 0.7060,  1.1139]
    ]
)
# Note: the output must go through a sigmoid first
s = nn.Sigmoid()
output = s(output)
label = torch.FloatTensor(
    [
        [1, 0],
        [0, 1]
    ]
)
loss = bce(output, label)
print(loss)  # tensor(0.6327)
# Verification code
output = torch.FloatTensor(
    [
        [ 0.0550, -0.5005],
        [ 0.7060,  1.1139]
    ]
)
# After sigmoid
s = nn.Sigmoid()
output = s(output)
# print(output)
# tensor([[0.5137, 0.3774],
#         [0.6695, 0.7529]])
# true_label = [[1, 0], [0, 1]]
sum_1 = 0
sum_1 += 1 * torch.log(torch.tensor(0.5137)) + (1 - 1) * torch.log(torch.tensor(1 - 0.5137))
sum_1 += 0 * torch.log(torch.tensor(0.3774)) + (1 - 0) * torch.log(torch.tensor(1 - 0.3774))
avg_1 = sum_1 / 2
sum_2 = 0
sum_2 += 0 * torch.log(torch.tensor(0.6695)) + (1 - 0) * torch.log(torch.tensor(1 - 0.6695))
sum_2 += 1 * torch.log(torch.tensor(0.7529)) + (1 - 1) * torch.log(torch.tensor(1 - 0.7529))
avg_2 = sum_2 / 2
print(-(avg_1 + avg_2) / 2)  # tensor(0.6327)
7、BCEWithLogitsLoss

BCEWithLogitsLoss combines a sigmoid layer and BCELoss in one class; fusing the two steps is more numerically stable than applying a separate sigmoid followed by BCELoss.
# Sample code: the same two-sample binary-classification data as above
import torch
from torch import nn

bce_logit = nn.BCEWithLogitsLoss()
output = torch.FloatTensor(
    [
        [ 0.0550, -0.5005],
        [ 0.7060,  1.1139]
    ]
)  # raw logits: no sigmoid applied
label = torch.FloatTensor(
    [
        [1, 0],
        [0, 1]
    ]
)
loss = bce_logit(output, label)
print(loss)  # tensor(0.6327)
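This matches the sigmoid + BCELoss result from the previous section; the equivalence can be checked directly on the same tensors:

# BCEWithLogitsLoss(x) == BCELoss(sigmoid(x)), up to floating-point error
loss2 = nn.BCELoss()(torch.sigmoid(output), label)
print(loss2)  # tensor(0.6327)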
8、Focal Loss

Focal loss down-weights well-classified examples so that training focuses on hard ones:

$\mathrm{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$

where $p_t$ is the predicted probability of the true class, $\gamma \ge 0$ is the focusing parameter, and $\alpha_t$ balances positive and negative samples.
# Code implementation
import torch
import torch.nn.functional as F

def reduce_loss(loss, reduction):
    reduction_enum = F._Reduction.get_enum(reduction)
    # none: 0, elementwise_mean: 1, sum: 2
    if reduction_enum == 0:
        return loss
    elif reduction_enum == 1:
        return loss.mean()
    elif reduction_enum == 2:
        return loss.sum()

def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
    if weight is not None:
        loss = loss * weight
    if avg_factor is None:
        loss = reduce_loss(loss, reduction)
    else:
        # if reduction is 'mean', average the loss by avg_factor
        if reduction == 'mean':
            loss = loss.sum() / avg_factor
        # if reduction is 'none', do nothing; otherwise raise an error
        elif reduction != 'none':
            raise ValueError('avg_factor can not be used with reduction="sum"')
    return loss

def py_sigmoid_focal_loss(pred, target, weight=None, gamma=2.0, alpha=0.25,
                          reduction='mean', avg_factor=None):
    # Note: pred should be raw logits; sigmoid is applied inside
    pred_sigmoid = pred.sigmoid()
    target = target.type_as(pred)
    # pt is the probability assigned to the wrong class, so easy examples get small weights
    pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
    focal_weight = (alpha * target + (1 - alpha) * (1 - target)) * pt.pow(gamma)
    # binary_cross_entropy_with_logits applies sigmoid to pred internally
    loss = F.binary_cross_entropy_with_logits(
        pred, target, reduction='none') * focal_weight
    # print(loss)
    ''' Output
    tensor([[0.0394, 0.0506],
            [0.3722, 0.0043]])
    '''
    loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
    return loss
if __name__ == '__main__':
    output = torch.FloatTensor(
        [
            [0.0550, -0.5005],
            [0.7060, 1.1139]
        ]
    )
    label = torch.FloatTensor(
        [
            [1, 0],
            [0, 1]
        ]
    )
    loss = py_sigmoid_focal_loss(output, label)
    print(loss)
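For comparison, torchvision ships a reference implementation of the same loss (this assumes torchvision is installed; alpha and gamma follow the same convention):

# Optional cross-check with torchvision's reference focal loss
from torchvision.ops import sigmoid_focal_loss
print(sigmoid_focal_loss(output, label, alpha=0.25, gamma=2.0, reduction='mean'))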
9、GHM Loss

GHM (gradient harmonizing mechanism, from the paper "Gradient Harmonized Single-Stage Detector") re-weights samples by the density of their gradient norms: for classification the gradient norm is $g = |\sigma(x) - y^{*}|$, the $[0, 1]$ range of $g$ is split into bins, and a sample whose $g$ falls into bin $i$ gets weight $\beta_i = N / \mathrm{GD}(g_i)$, the inverse of the gradient density. This suppresses both the huge mass of easy samples and extreme outliers.

Code implementation
import torch
from torch import nn
import torch.nn.functional as F

class GHM_Loss(nn.Module):
    def __init__(self, bins, alpha):
        super(GHM_Loss, self).__init__()
        self._bins = bins
        self._alpha = alpha  # EMA momentum for the bin counts
        self._last_bin_count = None

    def _g2bin(self, g):
        # Map a gradient norm in [0, 1] to a bin index
        return torch.floor(g * (self._bins - 0.0001)).long()

    def _custom_loss(self, x, target, weight):
        raise NotImplementedError

    def _custom_loss_grad(self, x, target):
        raise NotImplementedError

    def forward(self, x, target):
        g = torch.abs(self._custom_loss_grad(x, target)).detach()
        bin_idx = self._g2bin(g)
        bin_count = torch.zeros((self._bins))
        for i in range(self._bins):
            bin_count[i] = (bin_idx == i).sum().item()
        N = (x.size(0) * x.size(1))
        if self._last_bin_count is None:
            self._last_bin_count = bin_count
        else:
            # Exponential moving average of bin counts across batches
            bin_count = self._alpha * self._last_bin_count + (1 - self._alpha) * bin_count
            self._last_bin_count = bin_count
        nonempty_bins = (bin_count > 0).sum().item()
        gd = bin_count * nonempty_bins  # gradient density
        gd = torch.clamp(gd, min=0.0001)
        beta = N / gd  # per-sample weight: inverse of gradient density
        return self._custom_loss(x, target, beta[bin_idx])

class GHMC_Loss(GHM_Loss):
    # Classification loss
    def __init__(self, bins, alpha):
        super(GHMC_Loss, self).__init__(bins, alpha)

    def _custom_loss(self, x, target, weight):
        return F.binary_cross_entropy_with_logits(x, target, weight=weight)

    def _custom_loss_grad(self, x, target):
        return torch.sigmoid(x).detach() - target

class GHMR_Loss(GHM_Loss):
    # Regression loss
    def __init__(self, bins, alpha, mu):
        super(GHMR_Loss, self).__init__(bins, alpha)
        self._mu = mu

    def _custom_loss(self, x, target, weight):
        d = x - target
        mu = self._mu
        # Smoothed L1 variant used by the GHM paper for regression
        loss = torch.sqrt(d * d + mu * mu) - mu
        N = x.size(0) * x.size(1)
        return (loss * weight).sum() / N

    def _custom_loss_grad(self, x, target):
        d = x - target
        mu = self._mu
        return d / torch.sqrt(d * d + mu * mu)
if __name__ == '__main__':
    # No need to apply sigmoid yourself; the loss takes raw logits
    output = torch.FloatTensor(
        [
            [0.0550, -0.5005],
            [0.7060, 1.1139]
        ]
    )
    label = torch.FloatTensor(
        [
            [1, 0],
            [0, 1]
        ]
    )
    loss_func = GHMC_Loss(bins=10, alpha=0.75)
    loss = loss_func(output, label)
    print(loss)
10、mean_absolute_percentage_error

MAPE differs from MAE in that each absolute difference between prediction and ground truth is divided by the absolute ground-truth value before averaging, and the result is expressed as a percentage.
def mean_absolute_percentage_error(y_true, y_pred):
    # K is keras.backend; K.clip keeps the denominator away from zero
    diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true),
                                            K.epsilon(),
                                            None))
    return 100. * K.mean(diff, axis=-1)
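PyTorch has no built-in MAPE; a minimal sketch of the same formula (the function name is my own, and torch.clamp plays the role of K.clip):

import torch

def mean_absolute_percentage_error_torch(y_true, y_pred, eps=1e-7):
    # |y_true - y_pred| / |y_true|, with the denominator clamped away from zero
    diff = torch.abs((y_true - y_pred) / torch.clamp(torch.abs(y_true), min=eps))
    return 100. * torch.mean(diff)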
11、mean_squared_logarithmic_error

MSLE: take the logarithm of prediction and target, subtract, square, then average.
def mean_squared_logarithmic_error(y_true, y_pred):
    # Clip to avoid log of non-positive values, then compare log(1 + x)
    first_log = K.log(K.clip(y_pred, K.epsilon(), None) + 1.)
    second_log = K.log(K.clip(y_true, K.epsilon(), None) + 1.)
    return K.mean(K.square(first_log - second_log), axis=-1)
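A matching PyTorch sketch (again my own naming, with torch.clamp in place of K.clip):

def mean_squared_logarithmic_error_torch(y_true, y_pred, eps=1e-7):
    # log(1 + x) on both tensors, then the mean squared difference
    first_log = torch.log(torch.clamp(y_pred, min=eps) + 1.)
    second_log = torch.log(torch.clamp(y_true, min=eps) + 1.)
    return torch.mean((first_log - second_log) ** 2)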
12、Huber Loss

$L_{\delta}(a) = \begin{cases} \frac{1}{2}a^2 & |a| \le \delta \\ \delta\left(|a| - \frac{1}{2}\delta\right) & \text{otherwise} \end{cases}, \quad a = y - \hat{y}$

With $\delta = 1$ this reduces to the smooth L1 loss from section 3.
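PyTorch exposes this directly as nn.HuberLoss (available since PyTorch 1.9); with delta=1.0 it matches the smooth L1 example above:

import torch
from torch import nn

huber = nn.HuberLoss(delta=1.0)
pred = torch.FloatTensor([[3], [4], [5]])
target = torch.FloatTensor([[2], [4.1], [8]])
print(huber(pred, target))  # tensor(1.0017), same as nn.SmoothL1Loss() here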
13、Log-Cosh Loss
$L = \sum_{i}\log\cosh(\hat{y}_i - y_i)$, which behaves like $x^2/2$ for small errors and $|x| - \log 2$ for large ones, so it is similar to MSE but more robust to outliers.
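There is no built-in log-cosh loss in PyTorch; a one-function sketch (my own naming):

def log_cosh_loss(y_pred, y_true):
    # log(cosh(x)) ~ x**2 / 2 for small x and |x| - log(2) for large x
    return torch.mean(torch.log(torch.cosh(y_pred - y_true)))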
14、Quantile Loss

Quantile (pinball) loss penalizes over- and under-predictions asymmetrically, which lets a model predict a chosen quantile $\gamma$ instead of the mean:

$L_{\gamma} = \sum_{i:\, y_i < \hat{y}_i} (1 - \gamma)\,|y_i - \hat{y}_i| + \sum_{i:\, y_i \ge \hat{y}_i} \gamma\,|y_i - \hat{y}_i|$
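A minimal PyTorch sketch of the pinball form above (gamma is the target quantile; the naming is mine):

def quantile_loss(y_pred, y_true, gamma=0.5):
    # Under-predictions are weighted by gamma, over-predictions by (1 - gamma)
    diff = y_true - y_pred
    return torch.mean(torch.max(gamma * diff, (gamma - 1) * diff))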
15、Charbonnier Loss

The Charbonnier loss is a smooth, everywhere-differentiable approximation of L1:

$L = \sqrt{x^2 + \epsilon^2}$

(in the code below, self.eps plays the role of $\epsilon^2$)
class L1_Charbonnier_loss(torch.nn.Module):
    """L1 Charbonnier loss."""
    def __init__(self):
        super(L1_Charbonnier_loss, self).__init__()
        self.eps = 1e-6

    def forward(self, X, Y):
        diff = torch.add(X, -Y)
        # sqrt(diff^2 + eps) is smooth near zero, unlike plain |diff|
        error = torch.sqrt(diff * diff + self.eps)
        loss = torch.mean(error)
        return loss
16、Wing Loss

Wing loss (from "Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural Networks") amplifies small and medium errors with a logarithmic branch while keeping an L1-like branch for large errors:

$\mathrm{wing}(x) = \begin{cases} w\ln(1 + |x|/\epsilon) & |x| < w \\ |x| - C & \text{otherwise} \end{cases}, \quad C = w - w\ln(1 + w/\epsilon)$
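PyTorch has no built-in wing loss; a sketch following the formula above (parameter names w and epsilon are mine, defaults taken from the paper):

import math
import torch

def wing_loss(y_pred, y_true, w=10.0, epsilon=2.0):
    # Logarithmic branch for small/medium errors, shifted L1 for large ones
    diff = torch.abs(y_pred - y_true)
    C = w - w * math.log(1. + w / epsilon)
    loss = torch.where(diff < w, w * torch.log(1. + diff / epsilon), diff - C)
    return loss.mean()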