PyTorch Learning: From Tensor to LR
2022-07-26 08:51:00 【Miracle Fan】
1. Creating simple tensors
import torch

x = torch.empty(1)
y = torch.rand(2, 3)
x, y
(tensor([0.]),
tensor([[0.0721, 0.6318, 0.4175],
[0.3821, 0.0745, 0.0769]]))
x=torch.ones(2,4,dtype=torch.float16)
print(x)
print(x.dtype)
print(x.size())
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.]], dtype=torch.float16)
torch.float16
torch.Size([2, 4])
2. Addition, subtraction, multiplication and division of tensors
x + y is equivalent to torch.add(x, y)
x - y is equivalent to torch.sub(x, y)
x * y is equivalent to torch.mul(x, y) (element-wise)
x / y is equivalent to torch.div(x, y) (element-wise)
y.add_(x) adds x to y in place, i.e. it updates the value of y
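As a small added check (not in the original post), the equivalence of the operator forms and the functional forms can be verified with torch.allclose:
x = torch.rand(2, 2)
y = torch.rand(2, 2)
print(torch.allclose(x + y, torch.add(x, y)))  # True
print(torch.allclose(x - y, torch.sub(x, y)))  # True
print(torch.allclose(x * y, torch.mul(x, y)))  # True
print(torch.allclose(x / y, torch.div(x, y)))  # True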
x=torch.rand(2,2)
y=torch.rand(2,2)
print(x,y)
y.add_(x)  # add x to y in place, overwriting y's value
print(y)
tensor([[0.7928, 0.1424],
[0.5847, 0.9996]]) tensor([[0.1585, 0.1488],
[0.8360, 0.0950]])
tensor([[0.9513, 0.2912],
[1.4207, 1.0946]])
3. Tensor slicing and indexing
x=torch.rand(5,3)
print(x)
print(x[:, :1])  # the first index selects rows, the second selects columns
print(x[1, 1].item())  # .item() returns the Python number stored in a single-element tensor
tensor([[0.4997, 0.6359, 0.7303],
[0.2803, 0.6739, 0.0794],
[0.5455, 0.0975, 0.9395],
[0.0389, 0.0743, 0.8702],
[0.6613, 0.6809, 0.1929]])
tensor([[0.4997],
[0.2803],
[0.5455],
[0.0389],
[0.6613]])
0.673908531665802
4. Data type conversion
4.1 Tensor ⇔ NumPy
tensor.numpy()
torch.from_numpy()
a=torch.ones(5)
print(a)
b=a.numpy()
print(type(b))
tensor([1., 1., 1., 1., 1.])
<class 'numpy.ndarray'>
import numpy as np

a = np.ones(5)
print(a)
b=torch.from_numpy(a)
print(b)
[1. 1. 1. 1. 1.]
tensor([1., 1., 1., 1., 1.], dtype=torch.float64)
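One caveat worth adding here: on the CPU, torch.from_numpy() (and likewise a.numpy()) shares memory with the source array, so an in-place change on one side is visible on the other. A minimal sketch:
a = np.ones(5)
b = torch.from_numpy(a)   # b shares the same memory as a (CPU only)
a += 1                    # in-place change to the NumPy array
print(b)                  # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)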
4.2 Moving data to CUDA
if torch.cuda.is_available():
    device = torch.device("cuda")
    x = torch.ones(5, device=device)
    y = torch.ones(5)
    y = y.to(device)
    z = x + y
    z = z.to("cpu")
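An added note: .numpy() only works on CPU tensors, which is why z is moved back with z.to("cpu") above. A small sketch of the same point:
if torch.cuda.is_available():
    device = torch.device("cuda")
    z = torch.ones(5, device=device) * 2
    # z.numpy() would raise an error here because z lives on the GPU
    print(z.to("cpu").numpy())   # works after moving back to the CPU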
5. Gradient calculation
5.1 autograd
x=torch.randn(3,requires_grad=True)
print(x)
tensor([0.9217, 0.3146, 0.0978], requires_grad=True)
y=x+2
print(y)
z=y*y*2
# z=z.mean()
print(z)
tensor([2.9217, 2.3146, 2.0978], grad_fn=<AddBackward0>)
tensor([17.0728, 10.7144, 8.8019], grad_fn=<MulBackward0>)
z.backward()  # dz/dx; backward() without arguments requires z to be a scalar, i.e. the z = z.mean() line above must be applied
print(x.grad)
tensor([3.8956, 3.0861, 2.7971])
v = torch.tensor([0.1, 1.0, 0.001], dtype=torch.float32)
# if z is not a scalar, backward() needs a gradient vector as an argument (a vector-Jacobian product)
z.backward(v)  # dz/dx weighted by v
print(x.grad)  # note: x.grad was not cleared, so the values below also contain the gradient from the previous backward call
tensor([ 5.0643, 12.3444, 2.8055])
x=torch.tensor(1.0)
y=torch.tensor(2.0)
w= torch.tensor(1.0,requires_grad=True)
# forward pass and squared-error loss
y_hat=w*x
loss=(y_hat-y)**2
print(loss)
loss.backward()  # dl/dw = 2 * (y_hat - y) * x
print(w.grad)
tensor(1., grad_fn=<PowBackward0>)
tensor(-2.)
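A quick hand check of those numbers: loss = (w*x - y)**2 = (1*1 - 2)**2 = 1, and dl/dw = 2*(w*x - y)*x = 2*(-1)*1 = -2, which matches the two printed values.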
5.2 Disabling gradient tracking
# Three ways to stop autograd from tracking a tensor (a combined sketch follows after the output below):
# 1. x.requires_grad_(False)
# 2. y = x.detach()
# 3. with torch.no_grad():
#        y = x + 2
#        print(y)
y=x.detach()
print(y)
print(x)
tensor([ 0.6006, -1.0621, 0.5496])
tensor([ 0.6006, -1.0621, 0.5496], requires_grad=True)
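For completeness, a small added sketch (new example, not from the original post) showing all three ways of stopping gradient tracking:
x = torch.randn(3, requires_grad=True)

# 1. turn tracking off (and back on) in place
x.requires_grad_(False)
print(x.requires_grad)   # False
x.requires_grad_(True)

# 2. create a copy that shares data but is excluded from the graph
y = x.detach()
print(y.requires_grad)   # False

# 3. suspend tracking inside a context manager
with torch.no_grad():
    z = x + 2
print(z.requires_grad)   # False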
5.3 Gradient accumulation
weights = torch.ones(4, requires_grad=True)
for epoch in range(3):
    model_output = (weights * 5).sum()
    model_output.backward()
    print(weights.grad)
# the gradient accumulates across iterations, so it must be cleared before the next iteration or optimization step
tensor([5., 5., 5., 5.])
tensor([10., 10., 10., 10.])
tensor([15., 15., 15., 15.])
# re-running the same loop, but calling weights.grad.zero_() after every iteration, keeps the printed gradient constant
weights = torch.ones(4, requires_grad=True)
for epoch in range(3):
    model_output = (weights * 5).sum()
    model_output.backward()
    print(weights.grad)
    weights.grad.zero_()
tensor([5., 5., 5., 5.])
tensor([5., 5., 5., 5.])
tensor([5., 5., 5., 5.])
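In actual training loops, this clearing step is usually delegated to an optimizer. A minimal sketch using torch.optim.SGD (the learning rate here is an arbitrary assumption):
weights = torch.ones(4, requires_grad=True)
optimizer = torch.optim.SGD([weights], lr=0.01)

for epoch in range(3):
    model_output = (weights * 5).sum()
    model_output.backward()
    optimizer.step()        # apply the update using the current gradient
    optimizer.zero_grad()   # clear the gradient before the next iteration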
6. Linear regression
6.1 Implementation with NumPy
X = np.array([1, 2, 3, 4], dtype=np.float32)
Y = np.array([2, 4, 6, 8], dtype=np.float32)
w = 0.0
# Build a model
def forward(x):
    return w * x

# The loss function is MSE
def loss(y, y_pred):
    return ((y_pred - y)**2).mean()
# MSE = 1/N * (w*x - y)**2
# dl/dw = 1/N * 2x * (w*x - y)
def gradient(x, y, y_pred):
    # note: np.dot already returns a scalar, so .mean() is a no-op and the 1/N factor is effectively dropped;
    # this is why this version converges faster than the PyTorch implementation below
    return np.dot(2*x, y_pred - y).mean()
print(f'Prediction before training: f(5) = {forward(5):.3f}')
# Training
lr = 0.01
n_iters = 20
for epoch in range(n_iters):
    # predict = forward pass
    y_pred = forward(X)
    l = loss(Y, y_pred)
    dw = gradient(X, Y, y_pred)
    w -= lr * dw
    if epoch % 2 == 0:
        print(f'epoch {epoch+1}: w = {w:.3f}, loss = {l:.8f}')
print(f'Prediction after training: f(5) = {forward(5):.3f}')
Prediction before training: f(5) = 0.000
epoch 1: w = 1.200, loss = 30.00000000
epoch 3: w = 1.872, loss = 0.76800019
epoch 5: w = 1.980, loss = 0.01966083
epoch 7: w = 1.997, loss = 0.00050331
epoch 9: w = 1.999, loss = 0.00001288
epoch 11: w = 2.000, loss = 0.00000033
epoch 13: w = 2.000, loss = 0.00000001
epoch 15: w = 2.000, loss = 0.00000000
epoch 17: w = 2.000, loss = 0.00000000
epoch 19: w = 2.000, loss = 0.00000000
Prediction after training: f(5) = 10.000
6.2 Implementation with PyTorch
X = torch.tensor([1, 2, 3, 4], dtype=torch.float32)
Y = torch.tensor([2, 4, 6, 8], dtype=torch.float32)
w = torch.tensor(0.0, dtype=torch.float32, requires_grad=True)
# model output
def forward(x):
    return w * x

# loss = MSE
def loss(y, y_pred):
    return ((y_pred - y)**2).mean()
print(f'Prediction before training: f(5) = {forward(5).item():.3f}')
# Training
learning_rate = 0.01
n_iters = 100
for epoch in range(n_iters):
    # predict = forward pass
    y_pred = forward(X)
    # loss
    l = loss(Y, y_pred)
    # calculate gradients = backward pass
    l.backward()
    # update weights
    # w.data = w.data - learning_rate * w.grad
    with torch.no_grad():
        w -= learning_rate * w.grad
    # clear the gradient after updating the parameters
    w.grad.zero_()
    if epoch % 10 == 0:
        print(f'epoch {epoch+1}: w = {w.item():.3f}, loss = {l.item():.8f}')
print(f'Prediction after training: f(5) = {forward(5).item():.3f}')
Prediction before training: f(5) = 0.000
epoch 1: w = 0.300, loss = 30.00000000
epoch 11: w = 1.665, loss = 1.16278565
epoch 21: w = 1.934, loss = 0.04506890
epoch 31: w = 1.987, loss = 0.00174685
epoch 41: w = 1.997, loss = 0.00006770
epoch 51: w = 1.999, loss = 0.00000262
epoch 61: w = 2.000, loss = 0.00000010
epoch 71: w = 2.000, loss = 0.00000000
epoch 81: w = 2.000, loss = 0.00000000
epoch 91: w = 2.000, loss = 0.00000000
Prediction after training: f(5) = 10.000
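As an optional follow-up sketch (not part of the original walkthrough), the same regression can be written with torch.nn and torch.optim, letting the library supply the model, the MSE loss, and the update step:
import torch
import torch.nn as nn

X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
Y = torch.tensor([[2.0], [4.0], [6.0], [8.0]])

model = nn.Linear(in_features=1, out_features=1)   # learns a weight and a bias
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    y_pred = model(X)           # forward pass
    l = criterion(y_pred, Y)    # MSE loss
    l.backward()                # backward pass
    optimizer.step()            # update the parameters
    optimizer.zero_grad()       # clear gradients for the next iteration

print(f'Prediction after training: f(5) = {model(torch.tensor([[5.0]])).item():.3f}')  # approaches 10 as training progresses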