PyTorch Learning (II)
2022-06-30 02:27:00 【Master Ma】
I. Viewing function parameters and usage in PyCharm
1. Right-click to view function information
1.1 Detailed parameters
Hover over the function, then right-click → Go To → Declaration or Usages to jump to the function's source code. The shortcut is Ctrl+B.
1.2 Function usage
Hover over the function, then right-click → Find Usages to list the function's usages in the console. The shortcut is Alt+F7.

2. Using Ctrl to view function information
2.1 Detailed parameters
Hold down Ctrl and hover over the function: its required parameters and other brief information appear. To see the full details, Ctrl-click the function to jump directly to its source code.
II. nn.Dropout
Dropout is a training trick proposed by Hinton. In PyTorch it has a second use beyond the original one, as a form of data augmentation (mentioned later).
The first thing to know is that dropout is designed for training. At inference time dropout must be turned off, and model.eval() does exactly that.
Original paper: https://arxiv.org/abs/1207.0580
The usual interpretation of dropout: during the forward pass in training, each neuron is deactivated with some probability p, which reduces overfitting.
In PyTorch, however, dropout has another use: it can be applied directly to an input tensor:
x = torch.randn(20, 16)
dropout = nn.Dropout(p=0.2)
x_drop = dropout(x)
1. Dropout is used to prevent overfitting.
2. As the name suggests, Dropout "drops" values.
3. nn.Dropout(p=0.3) means each neuron is zeroed with probability 0.3.
4. Dropout is only active during training, not during testing.
5. Dropout is typically placed after a fully connected layer, e.g. after nn.Linear(20, 30).
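Points 4 and 5 can be sketched as follows; the layer sizes and p are illustrative, not prescribed by the text:

```python
import torch
import torch.nn as nn

# Dropout placed after nn.Linear(20, 30), active in training mode
# and disabled by model.eval(). Sizes and p are illustrative.
model = nn.Sequential(
    nn.Linear(20, 30),
    nn.ReLU(),
    nn.Dropout(p=0.3),   # each activation zeroed with probability 0.3
    nn.Linear(30, 10),
)

x = torch.randn(4, 20)

model.train()   # dropout active: forward passes are stochastic
y_train = model(x)

model.eval()    # dropout off: forward passes are deterministic
y_eval1 = model(x)
y_eval2 = model(x)
print(torch.equal(y_eval1, y_eval2))  # True
```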
import torch
import torch.nn as nn
a = torch.randn(4, 4)
print(a)
"""
tensor([[ 1.2615, -0.6423, -0.4142, 1.2982],
[ 0.2615, 1.3260, -1.1333, -1.6835],
[ 0.0370, -1.0904, 0.5964, -0.1530],
[ 1.1799, -0.3718, 1.7287, -1.5651]])
"""
dropout = nn.Dropout()
b = dropout(a)
print(b)
"""
tensor([[ 2.5230, -0.0000, -0.0000, 2.5964],
[ 0.0000, 0.0000, -0.0000, -0.0000],
[ 0.0000, -0.0000, 1.1928, -0.3060],
[ 0.0000, -0.7436, 0.0000, -3.1303]])
"""
The code above shows that Dropout can also set part of a tensor's values to 0; the surviving values are scaled by 1/(1-p) (here p defaults to 0.5, which is why they are doubled).
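The scaling can be checked directly; this is a small sketch, assuming the default p = 0.5:

```python
import torch
import torch.nn as nn

# With p = 0.5 (the default), surviving entries are multiplied by
# 1 / (1 - p) = 2, which is why the nonzero values in the output
# above are exactly double the inputs.
x = torch.randn(4, 4)
b = nn.Dropout(p=0.5)(x)

kept = b != 0
print(torch.allclose(b[kept], x[kept] * 2))  # True
```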
III. BatchNorm1d
torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
num_features – number of features (the size of the feature dimension)
eps – value added to the denominator for numerical stability
momentum – momentum used for the running mean and variance estimates
affine – a boolean; when True, the module has learnable affine parameters
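A minimal sketch of what these parameters control (the sizes are illustrative): in training mode, BatchNorm1d normalizes each of the num_features columns over the batch to roughly zero mean and unit variance, then applies the affine parameters.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=5)
x = torch.randn(8, 5)   # batch of 8 samples, 5 features each
y = bn(x)

print(y.mean(dim=0))                 # per-feature means, near 0
print(y.var(dim=0, unbiased=False))  # per-feature variances, near 1
```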
IV. nn.CrossEntropyLoss() and NLLLoss
In single-label image classification, feeding in m images produces an m×N Tensor, where N is the number of classes. For example, with 3 input images and 3 classes, the output is a 3×3 Tensor.
nn.CrossEntropyLoss() computes the cross-entropy loss.
Usage:
# output is the output of the network, size=[batch_size, class]
# e.g. with a batch size of 128 and 10 classes, size=[128, 10]
# target is the ground-truth label, one scalar per sample, size=[batch_size]
# e.g. with a batch size of 128, size=[128]
crossentropyloss = nn.CrossEntropyLoss()
crossentropyloss_output = crossentropyloss(output, target)
Note: when using nn.CrossEntropyLoss(), do not pass the output through a softmax layer first, otherwise the computed loss will be wrong. The raw network output is used directly to compute the loss.
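A small sketch of this pitfall, using the sizes from the comments above (batch size 128, 10 classes): feeding raw logits gives the intended loss, while applying softmax first changes the value.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
output = torch.randn(128, 10)          # raw network outputs (logits)
target = torch.randint(0, 10, (128,))  # ground-truth class indices

criterion = nn.CrossEntropyLoss()
correct_loss = criterion(output, target)                          # right
double_softmax = criterion(torch.softmax(output, dim=1), target)  # wrong

# the two loss values differ
print(correct_loss.item(), double_softmax.item())
```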
The formula computed by nn.CrossEntropyLoss() (the original figure, reproduced here in text form) is:

loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log( Σ_j exp(x[j]) )

where x is the output vector of the network and class is the true label.
For example, suppose a three-class network outputs [-0.7715, -0.6205, -0.2562] for an input sample whose true label is 0. The loss computed by nn.CrossEntropyLoss() is then 0.7715 + log(e^-0.7715 + e^-0.6205 + e^-0.2562) ≈ 1.3447.
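This example can be checked directly against PyTorch; the logits and label are the ones above:

```python
import torch
import torch.nn as nn

x = torch.tensor([[-0.7715, -0.6205, -0.2562]])
target = torch.tensor([0])

loss = nn.CrossEntropyLoss()(x, target)

# Manual computation from the formula: -x[class] + log(sum(exp(x)))
manual = -x[0, 0] + torch.log(torch.exp(x[0]).sum())

print(loss.item(), manual.item())  # both approximately 1.3447
```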
NLLLoss
As above: in single-label image classification, feeding in m images produces an m×N Tensor, where N is the number of classes. With 3 input images and 3 classes, the output is a 3×3 Tensor, for example:
input = torch.randn(3,3)
print('input', input)

Rows 1, 2, and 3 are the results for images 1, 2, and 3 respectively; suppose columns 1, 2, and 3 are the classification scores for cat, dog, and pig.
Applying Softmax to each row then gives the probability distribution for each image.
sm = nn.Softmax(dim=1)
output = sm(input)
print('output', output)

Here dim is the dimension along which Softmax is computed. With dim=1, each row sums to 1 (e.g. the sum of the first row is 1).
sm = nn.Softmax(dim=0)
output2= sm(input)
print('output2', output2)

With dim=0, each column sums to 1 instead (e.g. the sum of the first column is 1).
Since each image is a row here, dim should be set to 1.
Then take the natural logarithm of the Softmax result (note that sm was reassigned with dim=0 above, so the row-wise version is written out explicitly):
print(torch.log(nn.Softmax(dim=1)(input)))

The Softmax values all lie between 0 and 1, so after taking ln the range is negative infinity to 0.
NLLLoss takes, for each row of this output, the value at the position given by the label, removes the minus sign, and then averages.
Suppose the target is [0, 2, 1] (the first image is a cat, the second a pig, the third a dog). Take element 0 of the first row, element 2 of the second, and element 1 of the third, and drop the minus signs; the result is [0.4155, 1.0945, 1.5285]. Averaging gives 1.0128.
loss = nn.NLLLoss()
target = torch.tensor([0, 2, 1])
# sm was last set with dim=0 above, so use the row-wise Softmax explicitly
LOS = loss(torch.log(nn.Softmax(dim=1)(input)), target)
print('LOS', LOS)

CrossEntropyLoss merges the Softmax–Log–NLLLoss steps above into one. Using the same random input, we can verify directly that the result is 1.0128:
loss = nn.CrossEntropyLoss()
target = torch.tensor([0,2,1])
LOS2 = loss(input,target)
print('LOS2', LOS2)
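The whole Softmax–Log–NLLLoss pipeline can also be checked end-to-end in one self-contained sketch. Fixed illustrative logits replace the random input here so the script is reproducible on its own:

```python
import torch
import torch.nn as nn

# Rows are images, columns are cat/dog/pig scores (illustrative values).
logits = torch.tensor([[ 1.2,  0.1, -0.5],
                       [ 0.3, -1.0,  0.8],
                       [-0.2,  0.6,  0.4]])
target = torch.tensor([0, 2, 1])  # cat, pig, dog

# Softmax -> Log -> NLLLoss by hand ...
log_probs = torch.log(nn.Softmax(dim=1)(logits))
nll = nn.NLLLoss()(log_probs, target)

# ... equals CrossEntropyLoss applied to the raw logits.
ce = nn.CrossEntropyLoss()(logits, target)
print(torch.isclose(nll, ce).item())  # True
```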
