PyTorch Learning (II)
2022-06-30 02:27:00 【Master Ma】
I. Viewing function parameters and usage in PyCharm
1. Right-click to view function information
1.1. Detailed parameters
Hover the mouse over the function, then right-click -> Go To -> Declaration or Usages to jump to the function's source code. You can also use the shortcut Ctrl+B.
1.2. Function usage
Hover the mouse over the function, then right-click -> Find Usages to list the usages of the function in the console. You can also use the shortcut Alt+F7.

2. Using Ctrl to view function information
2.1. Detailed parameters
Hold down Ctrl and hover the mouse over the function you want to inspect; a tooltip shows the function's required parameters and other brief information. To see the full details, click the function (still holding Ctrl) and PyCharm jumps straight to its source code.
II. nn.Dropout
Dropout is a training trick proposed by Hinton. In PyTorch, besides its original usage, it can also be used for data augmentation (mentioned later).
The first thing to know is that dropout is designed for training. At inference time, dropout needs to be turned off, and model.eval() does exactly that.
Original paper: https://arxiv.org/abs/1207.0580
Dropout is usually interpreted as follows: during forward propagation at training time, each neuron is put into an inactive state with some probability p, which reduces overfitting.
In PyTorch, however, dropout has another use: you can apply it directly to an input tensor:
import torch
import torch.nn as nn

x = torch.randn(20, 16)
dropout = nn.Dropout(p=0.2)  # each element is zeroed with probability 0.2
x_drop = dropout(x)
1. Dropout is used to prevent overfitting.
2. Dropout, as the name suggests, means dropping units out.
3. nn.Dropout(p=0.3) # each neuron has a 0.3 probability of being deactivated
4. Dropout should only be active in the training phase, never in the testing phase (see the sketch after this list).
5. Dropout is generally placed after a fully connected layer, e.g. after nn.Linear(20, 30).
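A minimal sketch (the model, layer sizes, and data below are invented for illustration) of how Dropout is placed after a fully connected layer, and how model.train() / model.eval() switches it on and off:

import torch
import torch.nn as nn

# Hypothetical toy model: Dropout follows the fully connected layer
model = nn.Sequential(
    nn.Linear(20, 30),
    nn.ReLU(),
    nn.Dropout(p=0.3),  # active only in training mode
    nn.Linear(30, 10),
)

x = torch.randn(4, 20)

model.train()   # training mode: Dropout randomly zeroes activations
y_train = model(x)

model.eval()    # inference mode: Dropout is a no-op
y_eval = model(x)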
import torch
import torch.nn as nn
a = torch.randn(4, 4)
print(a)
"""
tensor([[ 1.2615, -0.6423, -0.4142, 1.2982],
[ 0.2615, 1.3260, -1.1333, -1.6835],
[ 0.0370, -1.0904, 0.5964, -0.1530],
[ 1.1799, -0.3718, 1.7287, -1.5651]])
"""
dropout = nn.Dropout()  # p defaults to 0.5
b = dropout(a)
print(b)
"""
tensor([[ 2.5230, -0.0000, -0.0000, 2.5964],
[ 0.0000, 0.0000, -0.0000, -0.0000],
[ 0.0000, -0.0000, 1.1928, -0.3060],
[ 0.0000, -0.7436, 0.0000, -3.1303]])
"""
From the code above we can see that Dropout sets part of the tensor's values to 0; note also that the surviving entries are scaled by 1/(1-p) at training time (here p=0.5, so they are doubled, e.g. 1.2615 -> 2.5230).
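Continuing the snippet above, a quick check that in eval mode the Dropout module becomes an identity function:

dropout.eval()
print(torch.equal(dropout(a), a))  # True: no zeroing and no scaling in eval mode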
III. BatchNorm1d
torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
num_features – the feature dimension C of the input (expected input of size (N, C) or (N, C, L)).
eps – a value added to the denominator for numerical stability. Default: 1e-5.
momentum – the momentum used for the running mean and running variance computation. Default: 0.1.
affine – a boolean; when set to True, this module has learnable affine parameters. Default: True.
track_running_stats – a boolean; when set to True, this module tracks running estimates of the mean and variance for use in eval mode. Default: True.
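BatchNorm1d normalizes each feature over the batch: y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta, where gamma and beta are the learnable affine parameters. A minimal usage sketch (the input shape and sizes are invented for illustration):

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=16)
x = torch.randn(8, 16)                # (batch_size, num_features)
y = bn(x)
print(y.mean(dim=0))                  # per-feature means, approximately 0 in training mode
print(y.std(dim=0, unbiased=False))   # per-feature stds, approximately 1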
IV. nn.CrossEntropyLoss() and NLLLoss
nn.CrossEntropyLoss() computes the cross-entropy loss.
Usage:
# output is the network output, size=[batch_size, class]
# e.g. with a batch size of 128 and 10 classes, size=[128, 10]
# target holds the true labels, one scalar per sample, size=[batch_size]
# e.g. with a batch size of 128, size=[128]
output = torch.randn(128, 10)          # placeholder logits, invented for illustration
target = torch.randint(0, 10, (128,))  # placeholder labels, invented for illustration
crossentropyloss = nn.CrossEntropyLoss()
crossentropyloss_output = crossentropyloss(output, target)
Note: when using nn.CrossEntropyLoss(), the output must not be passed through a softmax layer first, otherwise the computed loss will be wrong; the raw network output is used directly to compute the loss.
The formula computed by nn.CrossEntropyLoss() is:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log( Σ_j exp(x[j]) )
where x is the network's output vector and class is the true label.
For example, suppose a three-class network outputs [-0.7715, -0.6205, -0.2562] for a sample whose true label is 0. Then the loss computed by nn.CrossEntropyLoss() is loss = 0.7715 + log(e^(-0.7715) + e^(-0.6205) + e^(-0.2562)) ≈ 0.7715 + 0.5732 ≈ 1.3447.
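A quick sketch verifying this number with torch.nn.functional.cross_entropy on the same values:

import torch
import torch.nn.functional as F

logits = torch.tensor([[-0.7715, -0.6205, -0.2562]])
label = torch.tensor([0])
print(F.cross_entropy(logits, label))  # tensor(1.3447), matching the hand calculation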
NLLLoss
In single-label image classification, feeding in m images produces an m×N Tensor, where N is the number of classes. For example, with 3 input images and 3 classes, the final output is a 3×3 Tensor, for instance:
input = torch.randn(3,3)
print('input', input)

Rows 1, 2, and 3 are the results for pictures 1, 2, and 3 respectively; suppose columns 1, 2, and 3 are the classification scores for cat, dog, and pig.
Then apply Softmax to each row to obtain the probability distribution for each picture.
sm = nn.Softmax(dim=1)
output = sm(input)
print('output', output)

Here dim specifies the dimension along which Softmax is computed. With dim=1, each row sums to 1; for example, the sum of the first row equals 1.
sm = nn.Softmax(dim=0)
output2= sm(input)
print('output2', output2)

If you set dim=0 instead, each column sums to 1; for example, the sum of the first column equals 1.
Here each picture occupies one row, so dim should be set to 1.
Then take the natural logarithm of the Softmax result:
print(torch.log(sm(input)))

Softmax values all lie between 0 and 1, so after taking ln the values range from negative infinity to 0.
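Equivalently, PyTorch provides torch.log_softmax, which fuses the two steps and is more numerically stable; a one-line check on the same input:

print(torch.log_softmax(input, dim=1))  # same values as torch.log(sm(input))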
NLLLoss takes the output above, picks out the value indexed by each sample's label, removes the minus sign, and then averages.
Suppose our Target is [0,2,1] (the first picture is a cat, the second a pig, the third a dog). Take element 0 of row 1, element 2 of row 2, and element 1 of row 3, then remove the minus signs, giving [0.4155, 1.0945, 1.5285]. Their mean is 1.0128:
loss = nn.NLLLoss()
target = torch.tensor([0,2,1])
LOS = loss(torch.log(sm(input)),target)
print('LOS', LOS)

CrossEntropyLoss merges the Softmax -> Log -> NLLLoss steps above into one. Using the same random input, we can verify directly that the result is also 1.0128:
loss = nn.CrossEntropyLoss()
target = torch.tensor([0,2,1])
LOS2 = loss(input,target)
print('LOS2', LOS2)
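A final sketch confirming the equivalence programmatically (continuing with the variables above):

print(torch.allclose(LOS, LOS2))  # True: Softmax + Log + NLLLoss matches CrossEntropyLoss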
