Pytorch learning record
2022-07-01 23:45:00 【OPTree412】
Table of contents
- 1. TENSORS
- 2. Tensor usage
  - 1. Moving tensors to the GPU: `tensor.to('cuda')`
  - 2. Changing elements
  - 3. Concatenation: `torch.cat()`
  - 4. Tensor multiplication
  - 5. In-place addition: `add_(n)`
  - 6. Reshaping: `reshape(input, (rows, cols))`
  - 7. Removing size-1 dimensions: `squeeze()`
  - 8. Adding a dimension: `unsqueeze(input, dim)`
  - 9. Swapping dimensions: `transpose()`
  - 10. Finding non-zero elements: `nonzero()`
  - 11. Sum, mean, and power
- 2. Autograd
1.TENSORS
1.1 What are tensors?
In PyTorch, tensors are used to encode a model's inputs and outputs, as well as the model's parameters. A tensor is similar to a numpy.ndarray, but it can also run on a GPU or other hardware.
1.2 Tensor initialization
1. torch.tensor
torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False)
Converts data to a Tensor. data can be a list, tuple, NumPy ndarray, scalar, or other array-like data.
import torch
import numpy as np

# list type
data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)
print(x_data)

# np.array type
score = np.array([[0, 1], [2, 3]])
y_data = torch.tensor(score)
print(y_data)

2. from_numpy(ndarray)
torch.from_numpy(ndarray) converts a numpy.ndarray to a Tensor, but note that this conversion is a shallow copy: the tensor and the array share memory. (For an independent deep copy, use torch.tensor(ndarray) or clone().)
score = np.array([[0, 1], [2, 3]])
x_np = torch.from_numpy(score) # shallow copy: x_np shares memory with score
print(f"Before change\nx_np: {x_np},\nscore: {score}")

# After change
score[0][0] = 9
x_np[0][1] = 8
print(f"**********\nAfter change\nx_np: {x_np},\nscore: {score}")

3. torch.ones(), torch.rand(), and torch.zeros()
torch.zeros(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) builds a tensor whose elements are all 0.
torch.ones() and torch.rand() work the same way, but fill the tensor with ones and uniform random values respectively.
rand = torch.rand((3,4))
zero = torch.zeros((3,4))
ones = torch.ones((3,4))

4. torch.ones_like() and torch.rand_like()
torch.ones_like() and torch.rand_like() create a new tensor that retains the properties (shape, data type) of the argument tensor.
x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n {x_data} \n")
x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the data type
print(f"Random Tensor: \n {x_rand} \n{x_data} \n")

5. arange()
arange(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) returns a 1-D tensor with values from start (inclusive) to end (exclusive) in steps of step.
rr = torch.arange(6,13)
rrr = torch.arange(13)
rrrr = torch.arange(6, 13, 0.5)
print(rr, rrr, rrrr)

1.3 Tensor attributes
tensor = torch.rand(3, 4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")

2. Tensor usage
1. Moving tensors to the GPU: tensor.to('cuda')
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
    print(f"Device tensor is stored on: {tensor.device}")

2. Changing elements
tensor = torch.ones(4, 4)
tensor[:,1] = 0
print(tensor)

3. Concatenation: torch.cat()
torch.cat(tensors, dim=0, *, out=None) concatenates a sequence of non-empty tensors along dimension dim (the tensors must have the same shape in every dimension except dim) and returns the result.
Note: concatenation happens along one specific dimension; in all other dimensions the shapes must match.
tensor = torch.ones(4, 4)
tensor[:,0] = 0
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
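To make the role of dim concrete, here is a small supplementary sketch (not from the original post) comparing the resulting shapes:
a = torch.ones(2, 3)
print(torch.cat([a, a], dim=0).shape) # torch.Size([4, 3]): rows stacked
print(torch.cat([a, a], dim=1).shape) # torch.Size([2, 6]): columns appended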

4. Tensor multiplication
Element-wise multiplication: Tensor1.mul(Tensor2) or *; matrix multiplication: Tensor1.matmul(Tensor2) or @.
temp = torch.ones(4, 4)
print(f"\ntemp.mul(temp) \n {temp.mul(temp)} \n")
print(f"temp * temp \n {temp * temp}")
print(f"temp @ temp \n {temp @ temp}")
print(f"temp.matmul(temp) \n {temp.matmul(temp)}")

5. In-place addition: add_(n)
Adds n to every element in place (the trailing underscore marks an in-place operation).
print(tensor, "\n")
tensor.add_(5)
print(tensor)

6. Reshaping: reshape(input, (rows, cols))
Reshapes input into rows rows and cols columns (a dimension given as -1 is inferred from the others).
a = torch.arange(4.).reshape((2, 2))
b = torch.tensor([[0, 1], [2, 3]]).reshape((-1,))
'''
Or equivalently:
a = torch.arange(4.)
a = torch.reshape(a, (2, 2))
b = torch.tensor([[0, 1], [2, 3]])
b = torch.reshape(b, (-1,))
'''

7. Removing size-1 dimensions: squeeze()
squeeze(input, dim=None, *, out=None) removes the dimensions of size 1 from input and returns the tensor. If dim is given, only that dimension is squeezed. Note that the returned tensor shares its storage with the input: it is a view, not a deep copy.
x = torch.zeros(2, 1, 2, 1, 2)
print(x.size())
y = torch.squeeze(x, dim=1)
print(y.size())
print(x.size())
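A tiny supplementary check (not from the original post) that squeeze() returns a view sharing storage with its input:
x2 = torch.zeros(2, 1, 2)
y2 = torch.squeeze(x2) # shape (2, 2), a view of x2
y2[0, 0] = 7.
print(x2[0, 0, 0]) # tensor(7.): the change shows up in x2 because storage is shared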

8. Adding a dimension: unsqueeze(input, dim)
unsqueeze(input, dim) inserts a dimension of size 1 at position dim of input and returns the tensor.
x = torch.tensor([1, 2, 3, 4])
print(x.shape)
print(torch.unsqueeze(x, 0))
print(torch.unsqueeze(x, 0).shape)
print(torch.unsqueeze(x, 1))
print(torch.unsqueeze(x, 1).shape)

9. Swapping dimensions: transpose()
transpose(input, dim0, dim1) returns input with dimensions dim0 and dim1 swapped; see the sketch below.
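Since the original post gives no transpose example, here is a minimal supplementary sketch:
m = torch.arange(6).reshape(2, 3)
print(m.shape) # torch.Size([2, 3])
t = torch.transpose(m, 0, 1) # swap dimensions 0 and 1
print(t.shape) # torch.Size([3, 2])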

10. Finding non-zero elements: nonzero()
nonzero(input, *, out=None, as_tuple=False)
① as_tuple=False: returns a 2-D tensor in which each row is the index of one non-zero element of input.
torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
[0.0, 0.4, 0.0, 0.0],
[0.0, 0.0, 1.2, 0.0],
[0.0, 0.0, 0.0,-0.4]]))
tensor([[ 0, 0],
[ 1, 1],
[ 2, 2],
[ 3, 3]])
② as_tuple=True: returns a tuple of 1-D index tensors (one tensor per dimension, each giving the indices of the non-zero elements along that dimension).
Note: torch.where(condition) is identical to torch.nonzero(condition, as_tuple=True).
torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
[0.0, 0.4, 0.0, 0.0],
[0.0, 0.0, 1.2, 0.0],
[0.0, 0.0, 0.0,-0.4]]), as_tuple=True)
(tensor([0, 1, 2, 3]), tensor([0, 1, 2, 3]))
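A quick supplementary check of that equivalence (the tensor here is chosen for illustration, not taken from the original post):
t = torch.tensor([[0.6, 0.0], [0.0, 0.4]])
print(torch.where(t != 0)) # (tensor([0, 1]), tensor([0, 1]))
print(torch.nonzero(t != 0, as_tuple=True)) # same result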
11. Sum, mean, and power
y = x.sum(), y = x.mean(), y = x.pow(2)
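A short supplementary example (not from the original post); note that mean() requires a floating-point tensor:
x = torch.arange(4.) # tensor([0., 1., 2., 3.])
print(x.sum())  # tensor(6.)
print(x.mean()) # tensor(1.5000)
print(x.pow(2)) # tensor([0., 1., 4., 9.])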
2. Autograd
torch.autograd is PyTorch's automatic differentiation package. It is very convenient: you no longer have to compute the network's partial derivatives by hand.
2.1 One round of training
- Forward propagation:
  prediction = model(data)
- Back propagation:
  1. Compute the loss
  2. loss.backward() (in this step autograd computes the gradient of each parameter and stores it in that parameter tensor's grad attribute)
  3. Update the parameters:
     1. Load an optimizer (via torch.optim)
     2. optimizer.step() updates the parameters by gradient descent (the gradients come from the parameters' grad attributes); the full example below walks through these steps
import torch, torchvision

# Build the model, data, and labels
model = torchvision.models.resnet18(pretrained=True) # model
data = torch.rand(1, 3, 64, 64) # a random tensor representing a single image with 3 channels and height and width of 64
labels = torch.rand(1, 1000) # random values standing in for the labels; the pretrained model's output has shape (1, 1000)

# Forward propagation
prediction = model(data) # get the model's predictions
loss = (prediction - labels).sum() # compute the loss

# Back propagation
loss.backward() # compute the parameters' gradients; they are stored in each parameter's .grad attribute
optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9) # stochastic gradient descent, learning rate 0.01

# Finally, call .step() to run gradient descent. The optimizer adjusts each parameter using the gradient stored in .grad.
optim.step() # gradient descent