
PyTorch summary learning series - operations

2022-06-29 09:21:00 TJMtaotao

Arithmetic operations

In PyTorch, the same operation can take many forms. Below, addition is used as an example.

Addition form 1

import torch

x = torch.rand(5, 3)  # x and y must have broadcastable shapes; here both are 5x3
y = torch.rand(5, 3)
print(x + y)

Addition form 2

print(torch.add(x, y)) 

You can also specify an output tensor:

result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)

Addition form 3: in-place

# adds x to y
y.add_(x)
print(y)
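
As an aside, PyTorch names in-place operations with a trailing underscore by convention, so calls such as x.copy_(y) and x.t_() also modify x in place. A minimal sketch (the variable names are illustrative):

import torch

x = torch.rand(5, 3)
y = torch.rand(5, 3)

x.copy_(y)       # copies the contents of y into x, in place
x.t_()           # transposes x in place; x is now 3x5
print(x.size())  # torch.Size([3, 5])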

 

Indexing

We can also use NumPy-like indexing to access part of a Tensor. Note that the result of indexing shares memory with the original data: modifying one also modifies the other.

y = x[0, :]
y += 1
print(y)
print(x[0, :]) # the source tensor has changed too

Besides ordinary indexing, PyTorch also provides some advanced selection functions, sketched below:
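
The table that originally listed these functions did not survive extraction; as an illustration, here is a short sketch using standard PyTorch selection functions (the example tensors are made up):

import torch

x = torch.rand(5, 3)

# index_select: pick rows 0 and 2 along dimension 0
print(torch.index_select(x, 0, torch.tensor([0, 2])))

# masked_select: 1-D tensor of the elements where the mask is True
print(torch.masked_select(x, x > 0.5))

# nonzero: indices of the elements satisfying the condition
print(torch.nonzero(x > 0.5))

# gather: pick one element per row, at the column given by index
print(torch.gather(x, 1, torch.tensor([[0], [1], [2], [0], [1]])))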

Change shape

Use view() to change the shape of a Tensor:

y = x.view(15)
z = x.view(-1, 5) # a size of -1 is inferred from the other dimensions
print(x.size(), y.size(), z.size())

Output

torch.Size([5, 3]) torch.Size([15]) torch.Size([3, 5]) 

Note that the new tensor returned by view() shares memory with the source tensor (in fact, they are the same underlying data), so changing one also changes the other. (As the name suggests, view merely changes the angle from which the tensor is observed.)

x += 1
print(x)
print(y) # y has also been incremented by 1

So what if we want a truly new copy, one that does not share memory? PyTorch does provide a reshape() that can change the shape, but it does not guarantee that the result is a copy, so it is not recommended. Instead, first create a copy with clone() and then call view():

x_cp = x.clone().view(15)
x -= 1
print(x)
print(x_cp)

Another advantage of using clone() is that it is recorded in the computation graph, so gradients flowing back to the copy are also propagated to the source Tensor.
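
A minimal sketch of this behavior (assuming a scalar loss built from the clone):

import torch

x = torch.ones(2, 2, requires_grad=True)
y = x.clone()        # the clone is recorded in the computation graph
y.sum().backward()   # gradients flow through the clone back to x
print(x.grad)        # a 2x2 tensor of ones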

Another commonly used function is item(), which converts a scalar Tensor into a Python number:

x = torch.randn(1)
print(x)
print(x.item())

Output

tensor([2.3466])
2.3466382026672363

Linear algebra

In addition, PyTorch supports a number of linear algebra functions, so there is no need to reinvent the wheel; see the official documentation for specific usage. A few common ones are sketched below:
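
The table that originally appeared here did not survive; as a hedged illustration, these are standard PyTorch linear algebra calls:

import torch

a = torch.rand(3, 3)
b = torch.rand(3, 3)

print(torch.trace(a))    # sum of the diagonal elements
print(torch.diag(a))     # the diagonal as a 1-D tensor
print(a.t())             # transpose
print(torch.mm(a, b))    # matrix multiplication
print(torch.inverse(a))  # matrix inverse (a must be square and non-singular)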

 

Copyright notice: this article was created by [TJMtaotao]; when reposting, please include a link to the original:
https://yzsam.com/2022/180/202206290830509062.html