[Deep Learning] PyTorch Tensors
2022-07-27 20:51:00 【Candy fan】

Catalog
One. Tensor overview
Two. Initializing tensors
    Converting a Python list directly into a tensor
    Converting a NumPy array (ndarray) into a tensor
    Generating a new tensor from an existing tensor
    Generating tensors by specifying the data dimensions
Three. Tensor attributes
Four. Tensor operations
    1. Tensor indexing and slicing
    2. Concatenating tensors
    3. Element-wise multiplication and matrix multiplication
        Element-wise multiplication (pointwise product)
        Matrix multiplication
    4. In-place operations
Five. Converting between Tensor and NumPy
    1. From tensor to ndarray
    2. From ndarray to Tensor
It's 12 o'clock at night and I'm recording my study: going over PyTorch again... well, it's not really a review, since I only had a brief look at it before. But now it's different, it needs to be studied carefully!
Let's do it!!!
These are just simple study notes, that's all!!!
One. Tensor overview:
A tensor is a special data structure used in deep learning neural networks, much like a (multidimensional) array or matrix. The inputs and outputs of a neural network, as well as its network parameters, are all described with tensors!
import torch
import numpy as np
Two. Initializing tensors:
There are many ways to initialize a tensor; which one to choose mainly depends on where the data comes from:
Converting a Python list directly into a tensor:
data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)
The torch.tensor function converts a two-dimensional Python list into a two-dimensional tensor.
Converting a NumPy array (ndarray) into a tensor:
An ndarray and a tensor can be converted into each other:
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
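As a side note, the two routes behave differently with respect to memory: torch.from_numpy shares memory with the source array, while torch.tensor copies the data. A minimal sketch (the variable names are only for illustration):
np_array = np.array([[1, 2], [3, 4]])
shared = torch.from_numpy(np_array)  # shares memory with np_array
copied = torch.tensor(np_array)      # copies the data into a new tensor
np_array[0, 0] = 100
print(shared)  # reflects the change
print(copied)  # unchanged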
Generating a new tensor from an existing tensor:
The new tensor inherits the data attributes of the original tensor (shape and dtype), unless you explicitly override them with new attributes.
x_ones = torch.ones_like(x_data) # keeps the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")
x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the dtype of x_data: int -> float
print(f"Random Tensor: \n {x_rand} \n")
Ones Tensor:
tensor([[1, 1],
[1, 1]])
Random Tensor:
tensor([[0.0381, 0.5780],
[0.3963, 0.0840]])
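The same pattern works for other creation functions as well; a small sketch (zeros_like and full_like follow the same *_like convention):
x_zeros = torch.zeros_like(x_data)   # same shape and dtype as x_data, filled with 0
x_full = torch.full_like(x_data, 7)  # same shape and dtype as x_data, filled with 7
print(x_zeros)
print(x_full)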
Generating tensors by specifying the data dimensions:
A shape tuple specifies the dimensions of the generated tensor; pass it to different torch functions to create different tensors:
shape = (2,3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
Random Tensor:
tensor([[0.0266, 0.0553, 0.9843],
[0.0398, 0.8964, 0.3457]])
Ones Tensor:
tensor([[1., 1., 1.],
[1., 1., 1.]])
Zeros Tensor:
tensor([[0., 0., 0.],
[0., 0., 0.]])
Three. Tensor attributes:
Through a tensor's attributes we can find out its shape, its data type, and the device (physical hardware) on which it is stored:
tensor = torch.rand(3,4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
Shape of tensor: torch.Size([3, 4]) # shape
Datatype of tensor: torch.float32 # data type
Device tensor is stored on: cpu # storage device
Four. Tensor operations:
Check whether a GPU is available in the current environment; if it is, move the tensor onto it:
# check whether a GPU is available in the current environment, then move the tensor to the GPU for computation
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
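A common pattern is to pick the device once and reuse it; a minimal, device-agnostic sketch (not part of the original snippet):
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tensor = torch.rand(3, 4).to(device)  # create the tensor, then move it to the chosen device
print(tensor.device)                  # cuda:0 if a GPU is available, otherwise cpu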
1. Tensor indexing and slicing:
Slicing works like Python/NumPy slicing: the first index selects rows, the second index selects columns.
tensor = torch.ones(4, 4)
tensor[:,1] = 0 # set every element of column 1 (indices start from 0) to 0
print(tensor)
All indices start from 0:
tensor([[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.]])
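A few more slicing patterns following the same row/column convention; a small sketch for illustration:
print(tensor[0])        # first row
print(tensor[:, -1])    # last column
print(tensor[1:3, :2])  # rows 1-2, columns 0-1
tensor[..., 1] = 0      # '...' spans all leading dimensions; equivalent to tensor[:, 1] = 0 here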
2. Concatenating tensors:
You can use the torch.cat method to concatenate a group of tensors along a specified dimension; torch.stack is another option (see the sketch after the example below).
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
Note the dim parameter: it specifies the dimension along which the tensors are concatenated. Dimension indices also start from 0: 0 is the first dimension and 1 is the second, so for a two-dimensional tensor dim=1 concatenates along the columns:
tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]])
To tell how many dimensions a tensor has, just count the levels of brackets: the number of bracket levels equals the number of dimensions. Reading the brackets from the outside in, the dimension index increases step by step: from 0 to 1 to 2.
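For comparison, torch.stack (mentioned above) creates a new dimension instead of extending an existing one; a minimal sketch using the 4x4 tensor from before:
t_cat = torch.cat([tensor, tensor], dim=0)     # shape (8, 4): rows are appended
t_stack = torch.stack([tensor, tensor], dim=0) # shape (2, 4, 4): a new leading dimension is added
print(t_cat.shape)    # torch.Size([8, 4])
print(t_stack.shape)  # torch.Size([2, 4, 4])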
3. Element-wise multiplication and matrix multiplication:
A quick way to distinguish element-wise multiplication from matrix multiplication:
- Element-wise multiplication: the two matrices must have the same shape, and elements in corresponding positions are multiplied.
- Matrix multiplication: the inner dimensions of the two matrices must match, i.e. (n, m) x (m, z) gives an (n, z) result (see the shape sketch after the matrix-multiplication example below).
Element-wise multiplication (pointwise product):
# multiply element by element
print(f"tensor.mul(tensor): \n {tensor.mul(tensor)} \n")
# equivalent notation:
print(f"tensor * tensor: \n {tensor * tensor}")
tensor.mul(tensor):
tensor([[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.]])
tensor * tensor:
tensor([[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.]])
Matrix multiplication:
print(f"tensor.matmul(tensor.T): \n {tensor.matmul(tensor.T)} \n")
# equivalent notation:
print(f"tensor @ tensor.T: \n {tensor @ tensor.T}")
tensor.matmul(tensor.T):
tensor([[3., 3., 3., 3.],
[3., 3., 3., 3.],
[3., 3., 3., 3.],
[3., 3., 3., 3.]])
tensor @ tensor.T:
tensor([[3., 3., 3., 3.],
[3., 3., 3., 3.],
[3., 3., 3., 3.],
[3., 3., 3., 3.]])
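To make the (n, m) x (m, z) rule concrete, here is a small sketch with non-square matrices (the shapes are chosen only for illustration):
a = torch.rand(2, 3)  # shape (2, 3)
b = torch.rand(3, 4)  # shape (3, 4)
c = a @ b             # inner dimensions match (3 == 3), so the result has shape (2, 4)
print(c.shape)        # torch.Size([2, 4])
# a * b would fail here, because element-wise multiplication needs matching shapes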
4. In-place operations:
In-place operations have a _ suffix on the method name, for example x.copy_(y) and x.t_() both change the value of x: the result of the call is written back into the tensor on which the method was called.
print(tensor, "\n")
tensor.add_(5)
print(tensor)
tensor([[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.]])
tensor([[6., 5., 6., 6.],
[6., 5., 6., 6.],
[6., 5., 6., 6.],
[6., 5., 6., 6.]])
Note: although in-place operations save memory, they discard intermediate results and can therefore cause problems when computing derivatives, so they are discouraged.
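For instance, modifying a tensor in place after autograd has saved it for the backward pass raises an error; a minimal sketch of the kind of failure meant here (exp is just one function whose backward reuses its own output):
x = torch.ones(3, requires_grad=True)
y = torch.exp(x)    # the backward of exp reuses its output y
y.add_(1)           # the in-place change invalidates the value autograd saved
y.sum().backward()  # raises a RuntimeError about an in-place operation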
Five. Converting between Tensor and NumPy:
A tensor and an ndarray can share the same memory region on the CPU: changing the value of one also changes the other.
1. From tensor to ndarray:
Simply call the tensor's numpy method:
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
t: tensor([1., 1., 1., 1., 1.])
n: [1. 1. 1. 1. 1.]
Now, if you modify the tensor's values, the values in the corresponding ndarray change as well. Only the variable's type is different; both variables point to the same memory space:
t.add_(1)
print(f"t: {t}")
print(f"n: {n}")
t: tensor([2., 2., 2., 2., 2.])
n: [2. 2. 2. 2. 2.]
2. From ndarray to Tensor:
n = np.ones(5)
t = torch.from_numpy(n)
Modifying the values of the NumPy array changes the tensor's values as well:
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")
t: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
n: [2. 2. 2. 2. 2.]
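If you do not want this sharing behaviour, make an explicit copy first; a minimal sketch:
n = np.ones(5)
t = torch.from_numpy(n).clone()  # clone() gives the tensor its own independent memory
np.add(n, 1, out=n)
print(t)  # unchanged: tensor([1., 1., 1., 1., 1.], dtype=torch.float64)
print(n)  # [2. 2. 2. 2. 2.]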
That's it for now. We now have a basic understanding of tensors: how to create Tensor variables, what attributes a Tensor has, and the common Tensor operations!
We'll continue learning related content later. Keep it up, everyone!!!
By the way, my live study room is on Bilibili, haha. Feel free to drop by when you have time. That's all for here. Good luck, everyone!!!