[DL] introduction and understanding of tensor
2022-07-29 06:00:00 【Machines don't learn I learn】
1. Preface
In deep learning we inevitably run into the term tensor. One- and two-dimensional tensors are easy to picture, but how should we understand three-dimensional, four-dimensional, ..., n-dimensional ones? Below we use the PyTorch deep-learning framework to walk through them in detail.
2. One-dimensional
import torch  # version: 1.8.0+cpu
a = torch.tensor([1,2,3,4])
print(a)
print(a.shape)
Output:
tensor([1, 2, 3, 4])
torch.Size([4])
The shape has only one entry, so this is a one-dimensional tensor.
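As a quick check (this snippet is not from the original post), PyTorch can also report the number of dimensions directly instead of reading it off the shape:

```python
import torch

a = torch.tensor([1, 2, 3, 4])
# dim() and the .ndim attribute both give the number of dimensions
print(a.dim())    # 1
print(a.ndim)     # 1
# numel() gives the total number of elements
print(a.numel())  # 4
```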
3. Two-dimensional
import torch  # version: 1.8.0+cpu
a = torch.tensor([[1,2,3,4]])
print(a)
print(a.shape)
Output:
tensor([[1, 2, 3, 4]])
torch.Size([1, 4])
This output has both rows and columns, so it is a two-dimensional tensor, i.e. an ordinary 1×4 matrix.
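The only difference between the two examples so far is an extra pair of brackets. A small sketch (not in the original post) shows how to go from one to the other with unsqueeze, which inserts a new dimension of size 1:

```python
import torch

a = torch.tensor([1, 2, 3, 4])    # shape: [4]
b = torch.tensor([[1, 2, 3, 4]])  # shape: [1, 4]

# unsqueeze(0) inserts a size-1 dimension at position 0,
# turning the 1-D vector into the 1x4 matrix
print(torch.equal(a.unsqueeze(0), b))  # True
print(a.unsqueeze(0).shape)            # torch.Size([1, 4])
```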
4. Three-dimensional
import torch  # version: 1.8.0+cpu
a = torch.ones(1,3,3)
print(a)
print(a.shape)
Output:
tensor([[[1., 1., 1.],
         [1., 1., 1.],
         [1., 1., 1.]]])
torch.Size([1, 3, 3])
The output shows that this tensor has three dimensions, with an extra leading dimension of 1. It is hard to see from this printout where that leading 1 shows up, so let's look at another tensor where the leading dimension is easier to spot:
import torch  # version: 1.8.0+cpu
a = torch.ones(3,4,5)
print(a)
print(a.shape)
Output:
tensor([[[1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.]],

        [[1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.]],

        [[1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.]]])
torch.Size([3, 4, 5])
From this output you can see directly what the leading dimension (the number 3) corresponds to:
The first number, 3: the tensor splits into 3 big blocks
The second number, 4: each block splits into 4 rows
The third number, 5: each row splits into 5 columns
So the data has shape 3×4×5, where the last number is the column dimension; we can also read this as 3 matrices of 4 rows and 5 columns each.
If we think of the tensor above as an RGB image, the number 3 stands for the 3 color channels, and each channel is a matrix of 4 rows and 5 columns.
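The channel reading can be made concrete by indexing along the first dimension, which selects one 4×5 "channel" at a time (a small sketch, not from the original post):

```python
import torch

a = torch.ones(3, 4, 5)  # 3 "channels", each a 4x5 matrix

# indexing the first dimension selects one 4-row, 5-column block
print(a[0].shape)   # torch.Size([4, 5])
# and there are exactly 3 such blocks
print(a.shape[0])   # 3
```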
5. Four-dimensional
import torch  # version: 1.8.0+cpu
a = torch.ones(2,3,4,5)
print(a)
print(a.shape)
Output:
tensor([[[[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]],

         [[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]],

         [[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]]],


        [[[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]],

         [[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]],

         [[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]]]])
torch.Size([2, 3, 4, 5])
If we split the printed output in two, the outermost brackets enclose two triple-bracketed blocks: each block is one slice along the leading dimension, so the leading dimension has size 2.
To put it more plainly: tensor a has 2 big blocks, each big block contains 3 sub-blocks, each sub-block has 4 rows, and each row has 5 columns.
Tensor a maps naturally onto an everyday image dataset:
The first number, 2: the batch size; this tensor holds 2 input images
The second number, 3: each image has 3 channels
The third number, 4: each image is 4 pixels high
The fourth number, 5: each image is 5 pixels wide
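To make the batch reading concrete, here is a small sketch (not from the original post; the names img and ch are illustrative) that peels off one dimension at a time by indexing:

```python
import torch

# (batch, channels, height, width) layout, as described above
a = torch.ones(2, 3, 4, 5)

img = a[0]        # the first "image" in the batch
print(img.shape)  # torch.Size([3, 4, 5])

ch = img[0]       # one channel of that image
print(ch.shape)   # torch.Size([4, 5])
```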
