4. Application of One-hot Encoding and Loss Functions
2022-07-29 06:08:00 【My hair is messy】
Preface
I had some free time today. Learning AI raises a lot of questions, and it is easy to get tangled up in where a question even starts, but don't worry: I already have the answers. So I decided to summarize the main points of my recent learning and ease the tense study atmosphere a little. Ha ha ha~~ Come on, let's get to the point.
Tips: this post is about the data-preprocessing technique one-hot encoding. The body of the article follows; the cases below can be used for reference.
1. What is one-hot encoding?
Example: when I first googled this question, the answers involved registers ("one-hot encoding uses an N-bit status register to encode N states"), which left me speechless. I won't go into those low-level details; we only need to understand that one-hot encoding is a way of transforming categorical variables into a form that machine learning algorithms can handle easily.
The concept is abstract on its own, so let's illustrate it with a practical example.

Suppose we have the following two features:

There are two features, named animal and food. The first column holds animal names and the second holds a count of food; for example the first row, cat 2, means that a cat ate two pieces of food. This is just test data, meant to give an intuitive, hands-on feel for the encoding.

After one-hot encoding, the data above becomes:

The animal column is string-typed while the second column is numeric. If we can express the string categories as 0/1 features, that is very helpful to machine learning algorithms, which handle such discrete values poorly in raw form. To sum up: when processing data, the feature-engineering step should distinguish continuous features from discrete ones, and the discrete ones can then be handled specially, for example with the one-hot encoding shown here.
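As a minimal sketch of what this encoding does in code, assuming a hypothetical animal column (only the cat row comes from the original example; the other rows are invented for illustration):

```python
import numpy as np

# Hypothetical animal column; only "cat" appears in the original example.
animals = ["cat", "dog", "fish", "cat"]

# Map each distinct category to a column index.
categories = sorted(set(animals))            # ['cat', 'dog', 'fish']
index = {name: i for i, name in enumerate(categories)}

# Build the one-hot matrix: one row per sample, one column per category.
one_hot = np.zeros((len(animals), len(categories)), dtype=int)
for row, name in enumerate(animals):
    one_hot[row, index[name]] = 1

print(one_hot)
# Each row contains a single 1 marking that sample's category.
```

Each string value has been replaced by a 0/1 vector, which is exactly the "discrete feature handled specially" idea described above.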
2. Application scenarios
During model training, the quality of training is usually judged by the value of the loss function. Commonly used losses include:
- Mean absolute error (MAE, L1Loss in PyTorch).
- Mean square error (MSELoss): targets must be one-hot encoded, and the network output needs an activation such as softmax.
- Binary cross entropy (BCELoss): targets must be one-hot encoded, and the output needs a sigmoid.
- Binary cross entropy with logits (BCEWithLogitsLoss): applies the sigmoid to the input automatically, but the targets still need one-hot encoding.
- Multi-class cross entropy (CrossEntropyLoss): handles both steps automatically; it takes integer class labels (no one-hot needed) and applies log-softmax internally.
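A hedged sketch of the difference that matters most in practice (the logits and labels below are hypothetical): CrossEntropyLoss consumes raw logits plus integer class indices, while BCEWithLogitsLoss needs a float one-hot target matrix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical logits for 4 samples and 3 classes, plus integer labels.
logits = torch.randn(4, 3)
labels = torch.tensor([2, 0, 1, 2])

# CrossEntropyLoss: pass raw logits and integer class indices directly;
# it applies log-softmax internally, so no one-hot and no softmax needed.
ce = nn.CrossEntropyLoss()(logits, labels)

# BCEWithLogitsLoss: applies sigmoid internally, but the target must
# already be a float one-hot (or multi-label 0/1) matrix.
one_hot = F.one_hot(labels, num_classes=3).float()
bce = nn.BCEWithLogitsLoss()(logits, one_hot)

print(ce.item(), bce.item())
```

Passing a one-hot matrix where integer indices are expected (or vice versa) is a common source of shape errors, so it is worth keeping this table in mind.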
3. Three ways to create a one-hot encoding
1. Generating with a for loop

The code is as follows (example):

```python
import numpy as np

def one_hot(w, h, arr):
    zero_arr = np.zeros([w, h])  # w rows, h columns
    for i in range(w):
        j = int(arr[i])
        zero_arr[i][j] = 1
    return zero_arr

arr = np.array([5, 2, 8, 6])
one_hot(len(arr), max(arr) + 1, arr)
```

2. The arange indexing method
This method assumes arr = np.array([5, 2, 8, 6]), i.e. a NumPy array.
The code is as follows (example):

```python
import numpy as np
import torch

# Method 1
arr = np.array([5, 2, 8, 6])
zero_arr = torch.zeros(len(arr), max(arr) + 1)
zero_arr[torch.arange(len(arr)), arr] = 1  # torch.arange works much like Python's range

# Method 2: the same thing in one step
torch.zeros(len(arr), max(arr) + 1)[torch.arange(len(arr)), arr] = 1
```
3. The scatter_ method
This method assumes arr = torch.tensor([5, 2, 8, 6]), i.e. a PyTorch tensor.
The code is as follows (example):

```python
import torch

arr = torch.tensor([5, 2, 8, 6])
torch_out = torch.zeros(len(arr), max(arr) + 1).scatter_(1, arr.reshape(-1, 1), 1)

# tensor.scatter_(dim, index, src) places or modifies elements in place:
#   dim:   the dimension along which to index, usually 1 or -1
#   index: the element indices to scatter into
#   src:   the content written at those positions; a scalar or a tensor
#
# reshape(-1, 1) converts to one column
# reshape(1, -1) converts to one row
# reshape(2, -1) converts to two rows
# reshape(-1, 2) converts to two columns
```

4. Summary
Tips: a summary of the article:
1. Multi-class cross entropy, CrossEntropyLoss(): no one-hot needed; it handles the encoding and the softmax automatically.
2. Mean square error, MSELoss: needs one-hot targets and an output activation.
3. Binary cross entropy, BCELoss(): needs one-hot targets and a sigmoid output.
4. Binary cross entropy with logits, BCEWithLogitsLoss: needs one-hot targets, but no output activation, since the sigmoid is applied internally.
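As a closing sanity check, here is a sketch (using the same example array as Section 3) verifying that the arange and scatter_ constructions agree with each other and with PyTorch's built-in torch.nn.functional.one_hot helper:

```python
import torch
import torch.nn.functional as F

arr = torch.tensor([5, 2, 8, 6])
n, c = len(arr), int(arr.max()) + 1

# Method 2: index assignment with arange.
m2 = torch.zeros(n, c)
m2[torch.arange(n), arr] = 1

# Method 3: scatter_ along dim=1.
m3 = torch.zeros(n, c).scatter_(1, arr.reshape(-1, 1), 1)

# Built-in helper: F.one_hot returns an integer tensor, so cast to float.
m4 = F.one_hot(arr, num_classes=c).float()

assert torch.equal(m2, m3) and torch.equal(m2, m4)
print(m2)
```

All three produce the same len(arr) × (max + 1) matrix, so the choice comes down to input type (NumPy array vs. tensor) and taste.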