A Brief Discussion of Label Smoothing
2022-07-05 09:06:00 【aelum】
About the author: I moved into programming from a non-CS background and am continually expanding my technology stack.
️ Blog home page :https://raelum.blog.csdn.net
Main areas: NLP, RS, GNN
If this article helps you, a follow, like, bookmark, or comment would be the biggest motivation for my writing.
1. From One-Hot to Label Smoothing
Consider the cross-entropy loss of a single sample:

$$H(p,q)=-\sum_{i=1}^C p_i\log q_i$$

where $C$ is the number of classes, $p_i$ is the true distribution (i.e., the target), and $q_i$ is the predicted distribution (i.e., the output of the neural network).
If the true distribution is a traditional one-hot vector, then each of its components is either $0$ or $1$. Suppose the $k$-th component is $1$ and the rest are $0$; the cross-entropy loss then reduces to

$$H(p,q)=-\log q_k$$
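As a quick sanity check, here is a minimal sketch (not from the original post; the logit values are made up) verifying numerically that the full cross entropy with a one-hot target collapses to $-\log q_k$:

import torch

q = torch.softmax(torch.tensor([1.0, 2.0, 0.5]), dim=0)  # predicted distribution
p = torch.tensor([0.0, 1.0, 0.0])                         # one-hot target, k = 1

full_ce = -(p * q.log()).sum()            # -sum_i p_i * log(q_i)
reduced = -q[1].log()                     # -log(q_k)
print(torch.allclose(full_ce, reduced))   # True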
It is not difficult to spot several problems with the expression $-\log q_k$:
- The relationships between the true label and the other labels are ignored, so some useful knowledge cannot be learned;
- One-hot targets tend to make the model overconfident, which easily causes overfitting and degrades generalization;
- Mislabeled samples (i.e., wrong targets) have a stronger influence on training;
- One-hot vectors represent ambiguous samples (those that could plausibly belong to either class) poorly.
These problems can be alleviated by label smoothing, which is also a regularization technique. It works as follows:
$$p_i := \begin{cases} 1-\epsilon, & i=k \\ \epsilon/(C-1), & i\neq k \end{cases}$$

where $\epsilon$ is a small positive number.
For example, suppose the original target is $[0,0,1,0,0,0]$ and we take $\epsilon=0.1$. After label smoothing, the target becomes $[0.02, 0.02, 0.9, 0.02, 0.02, 0.02]$.
The original one-hot vector is often called a hard target (or hard label); after smoothing it is usually called a soft target (or soft label).
2. A Simple Implementation of Label Smoothing
import torch

def label_smoothing(label, eps):
    # Smooth a single one-hot float vector in place:
    # the entry equal to 1 becomes 1 - eps,
    # every entry equal to 0 becomes eps / (C - 1).
    label[label == 1] = 1 - eps
    label[label == 0] = eps / (len(label) - 1)
    return label

a = torch.tensor([0, 0, 1, 0, 0, 0], dtype=torch.float)
print(label_smoothing(a, 0.1))
# tensor([0.0200, 0.0200, 0.9000, 0.0200, 0.0200, 0.0200])
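Note that this function only handles a single one-hot float vector and modifies it in place. As a rough sketch of how such a soft target could be plugged into the cross-entropy formula from Section 1 (the logits below are made-up values, and the snippet reuses the label_smoothing function defined above):

import torch
import torch.nn.functional as F

logits = torch.tensor([1.2, 0.3, 2.1, -0.5, 0.0, 0.7])       # hypothetical model outputs
target = torch.tensor([0, 0, 1, 0, 0, 0], dtype=torch.float)  # hard label
soft_target = label_smoothing(target, eps=0.1)                 # soft label from above

# H(p, q) = -sum_i p_i * log(q_i), with q = softmax(logits)
loss = -(soft_target * F.log_softmax(logits, dim=0)).sum()
print(loss)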
3. Pros and Cons of Label Smoothing
Advantages:
- It alleviates the model's overconfidence problem to some extent and provides some robustness to label noise;
- It injects information about the relationships between classes in the training data (acting somewhat like data augmentation);
- It may improve the model's generalization ability to some extent.
Disadvantages:
- It simply adds uniform noise and cannot reflect real relationships between labels, so the improvement is limited and there is even a risk of underfitting;
- In some cases, soft labels do not help us build better neural networks (they can perform worse than hard labels).
4. When to Use Label Smoothing?
- Large datasets inevitably contain noise (i.e., mislabeled samples); label smoothing can be added to keep the model from learning that noise;
- Label smoothing can be introduced for ambiguous cases (for example, in a cat-vs-dog classification task, some images may look like both a dog and a cat);
- To prevent model overconfidence.
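If you do decide to use it, rolling your own smoothing is often unnecessary: recent PyTorch versions (1.10 and later, as far as I know) expose a label_smoothing argument on nn.CrossEntropyLoss that applies the same idea directly to hard class-index targets. A minimal sketch with made-up logits and labels:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # epsilon = 0.1

logits = torch.randn(4, 6)           # batch of 4 samples, 6 classes (dummy values)
labels = torch.tensor([2, 0, 5, 1])  # hard class indices
loss = criterion(logits, labels)
print(loss)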