"Hands on learning in depth" Chapter 2 - preparatory knowledge_ 2.1 data operation_ Learning thinking and exercise answers
2022-07-08 02:10:00 【coder_ sure】
2.1 Data manipulation
I. Summary of key contents
1. Reassigning a variable allocates new memory
I was fuzzy on this concept before; Li Mu mentions it here, so I'm making a note of it.
An example:
import torch

Y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
X = torch.arange(12, dtype=torch.float32).reshape((3,4))
before = id(Y)
Y = Y + X
id(Y) == before
Output:
False
This example shows that running some operations may cause memory to be allocated for the new result.
We record Y's address in the variable before, then assign Y + X to a new variable that happens to be named Y. Although it still looks like Y on the surface, it is no longer the same Y: its address has changed.
2. In-place operations
Continuing the example above, we can operate in place like this:
Z = torch.zeros_like(Y)
print('id(Z):', id(Z))
Z[:] = X + Y
print('id(Z):', id(Z))
Output:
id(Z): 140040758378960
id(Z): 140040758378960
We construct a Z with the same shape as Y, with all elements set to 0. After the operation Z[:] = X + Y, the elements of Z are overwritten in place: the address does not change, so this is an in-place update.
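Augmented assignment such as X += Y also updates the tensor's existing buffer rather than allocating a new one. A quick sketch using the same tensors as above (the print of the id comparison is just for illustration):

```python
import torch

X = torch.arange(12, dtype=torch.float32).reshape((3, 4))
Y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])

before = id(X)
X += Y                   # augmented assignment writes into X's existing memory
print(id(X) == before)   # True: same address, no new allocation
```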
II. Exercise answers
1. Run the code in this section. Change the conditional statement X == Y to X < Y or X > Y, and see what kind of tensor you get.
X = torch.arange(12, dtype=torch.float32).reshape((3,4))
Y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
X, Y
Output:
(tensor([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]]), tensor([[2., 1., 4., 3.],
[1., 2., 3., 4.],
[4., 3., 2., 1.]]))
X > Y
Output:
tensor([[False, False, False, False],
[ True, True, True, True],
[ True, True, True, True]])
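For completeness, the other variant in the exercise, X < Y, also yields an elementwise boolean tensor. Note that positions where the two entries are equal (such as the 1 and 1, or 3 and 3, in the first row) are False under both < and >:

```python
import torch

X = torch.arange(12, dtype=torch.float32).reshape((3, 4))
Y = torch.tensor([[2.0, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])

# True only where X's entry is strictly smaller than Y's
print(X < Y)
# tensor([[ True, False,  True, False],
#         [False, False, False, False],
#         [False, False, False, False]])
```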
2. Replace the two tensors in the broadcasting example with tensors of other shapes (for example, three-dimensional tensors). Is the result what you expected?
A = torch.tensor([[[1, 2, 3], [4, 5, 6]]])
B = torch.tensor([[[10, 20, 30]], [[40, 50, 60]]])
C = A + B
A.shape, B.shape, C, C.shape
Output:
(torch.Size([1, 2, 3]), torch.Size([2, 1, 3]), tensor([[[11, 22, 33],
[14, 25, 36]],
[[41, 52, 63],
[44, 55, 66]]]), torch.Size([2, 2, 3]))
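The result above matches the broadcasting rule: shapes are compared from the trailing dimension backwards, and each pair of dimensions must either be equal or have one of them be 1 (which is then stretched). A small sketch of my own showing both a successful broadcast and a shape pair that cannot broadcast:

```python
import torch

A = torch.tensor([[[1, 2, 3], [4, 5, 6]]])          # shape (1, 2, 3)
B = torch.tensor([[[10, 20, 30]], [[40, 50, 60]]])  # shape (2, 1, 3)
# dim-by-dim from the right: 3 vs 3 ok, 2 vs 1 stretches, 1 vs 2 stretches
print((A + B).shape)  # torch.Size([2, 2, 3])

# Trailing dimensions 3 vs 2 are incompatible, so this raises an error:
C = torch.ones(2, 3)
D = torch.ones(3, 2)
try:
    C + D
except RuntimeError as e:
    print('broadcast failed:', e)
```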