A Brief Discussion of Label Smoothing
2022-07-05 09:06:00 【aelum】
About the author: a non-CS major who switched into programming, continually expanding my technology stack
Blog homepage: https://raelum.blog.csdn.net
Main areas: NLP, RS, GNN
If this article helps you, a follow, like, bookmark, or comment would be the biggest motivation for my writing
1. From One-Hot to Label Smoothing
Consider the cross-entropy loss for a single sample:

$$H(p,q) = -\sum_{i=1}^{C} p_i \log q_i$$
where $C$ is the number of classes, $p_i$ is the true distribution (i.e., the target), and $q_i$ is the predicted distribution (i.e., the network's output).
If the true distribution is the traditional one-hot vector, every component is either $0$ or $1$. Suppose the $k$-th component is $1$ and the rest are $0$; the cross-entropy loss then reduces to

$$H(p,q) = -\log q_k$$
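A quick numeric check of both formulas (a minimal sketch; the tensor values are made up purely for illustration):

import torch
import torch.nn.functional as F

# Arbitrary 3-class example (values chosen only for illustration).
q = F.softmax(torch.tensor([2.0, 0.5, 0.3]), dim=0)  # predicted distribution q
p = torch.tensor([0.7, 0.2, 0.1])                    # a generic true distribution p
print(-(p * q.log()).sum())  # H(p, q) = -sum_i p_i * log(q_i)

# With a one-hot p (here k = 0), the loss collapses to -log q_k:
p_onehot = torch.tensor([1.0, 0.0, 0.0])
print(-(p_onehot * q.log()).sum(), -q[0].log())  # the two values match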
The one-hot form above reveals several problems:
- The relationships between the true label and the other labels are ignored, so some useful knowledge cannot be learned;
- One-hot targets tend to make the model overconfident, which easily leads to overfitting and degraded generalization;
- Mislabeled samples (i.e., erroneous targets) affect training more strongly;
- One-hot vectors characterize ambiguous samples poorly.
These problems can be alleviated with label smoothing, which also acts as a regularization technique. The target distribution becomes:
$$p_i := \begin{cases} 1-\epsilon, & i = k \\ \epsilon/(C-1), & i \neq k \end{cases}$$

where $\epsilon$ is a small positive number.
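Substituting the smoothed distribution into the cross-entropy makes its effect explicit: besides the usual $-\log q_k$ term, the loss now also penalizes assigning vanishing probability to the non-target classes:

$$H(p,q) = -(1-\epsilon)\log q_k - \frac{\epsilon}{C-1}\sum_{i\neq k}\log q_i$$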
For example, if the original target is $[0,0,1,0,0,0]$ and we take $\epsilon=0.1$, the smoothed target becomes $[0.02, 0.02, 0.9, 0.02, 0.02, 0.02]$ (the components still sum to $1$: $0.9 + 5 \times 0.02 = 1$).
The original one-hot vector is often called a hard target (or hard label); after smoothing it is usually called a soft target (or soft label).
2. A Simple Implementation of Label Smoothing
import torch

def label_smoothing(label, eps):
    # Move eps of probability mass off the single 1 and spread it evenly
    # over the remaining C - 1 entries. Note: `label` is modified in place.
    label[label == 1] = 1 - eps
    label[label == 0] = eps / (len(label) - 1)
    return label

a = torch.tensor([0, 0, 1, 0, 0, 0], dtype=torch.float)
print(label_smoothing(a, 0.1))
# tensor([0.0200, 0.0200, 0.9000, 0.0200, 0.0200, 0.0200])
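In practice you rarely need to smooth targets by hand: PyTorch (since v1.10) builds label smoothing directly into nn.CrossEntropyLoss. A minimal sketch with made-up logits and targets; note that PyTorch spreads $\epsilon$ uniformly over all $C$ classes, including the true one, which is a slightly different convention from the $\epsilon/(C-1)$ split used above:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # built-in smoothing

logits = torch.randn(4, 6)            # made-up batch: 4 samples, 6 classes
targets = torch.tensor([2, 0, 5, 1])  # hard class indices
print(criterion(logits, targets))     # loss computed against smoothed targets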
3. Pros and Cons of Label Smoothing
Advantages:
- It mitigates model overconfidence to some extent and offers some robustness to label noise;
- It injects information about relationships between classes into the training targets (a form of data augmentation);
- It may improve the model's generalization ability to some extent.
Disadvantages:
- It merely adds uniform noise and cannot reflect the true relationships between labels, so the improvement is limited and there is even a risk of underfitting;
- In some cases soft labels do not help build a better network (they can underperform hard labels).
4. When to Use Label Smoothing?
- Huge datasets inevitably contain noise (i.e., labeling errors); label smoothing can keep the model from fitting that noise;
- For ambiguous cases, label smoothing can be introduced (e.g., in a cat-vs-dog classification task, some images may look like both);
- To prevent model overconfidence.