PyTorch 01: Automatic Differentiation Mechanism
2022-07-07 05:42:00 【Magnetoelectricity】
One of the most powerful things a deep learning framework does is this: you define only the forward pass that needs differentiation by hand, and the framework computes the entire backward pass for you.
import torch
# Method 1: set the flag at construction time
x = torch.randn(3, 4, requires_grad=True)  # build a 3x4 matrix; requires_grad=True marks x for gradient computation (the default is False)
x

# Method 2: flip the flag after construction
x = torch.randn(3, 4)
x.requires_grad = True
x
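There is also an in-place setter, requires_grad_(), which does the same thing as Method 2 in a single call:

# Method 3: the in-place setter (equivalent to Method 2)
x = torch.randn(3, 4)
x.requires_grad_()  # the trailing underscore marks an in-place operation
x.requires_grad     # True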

b = torch.randn(3, 4, requires_grad=True)
t = x + b
y = t.sum()
y  # y acts as the loss; backpropagation differentiates layer by layer, starting from the loss
out: tensor(2.1753, grad_fn=<SumBackward0>)
y.backward()
b.grad
out: tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.]])
Even though we never set requires_grad on t explicitly, it is needed in the backward pass, so PyTorch enables it automatically:
x.requires_grad, b.requires_grad, t.requires_grad
out: (True, True, True)
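One caveat worth knowing: only leaf tensors (the ones we created ourselves, like x and b) have their .grad populated by backward(). t is an intermediate result, so its gradient is discarded unless we ask for it. A quick check:

x.is_leaf, b.is_leaf, t.is_leaf
out: (True, True, False)
t.grad  # None: intermediate gradients are freed by default; call t.retain_grad() before backward() to keep them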

# The computation process
# initialize x, w, b with random values
x = torch.rand(1)
b = torch.rand(1, requires_grad=True)
w = torch.rand(1, requires_grad=True)
y = w * x
z = y + b
# backward pass
z.backward(retain_graph=True)  # in PyTorch, gradients accumulate across backward() calls unless you zero them
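Because retain_graph=True keeps the computation graph alive, we can call backward() a second time and watch the accumulation happen. A minimal sketch (note z = w*x + b, so dz/dw = x):

w.grad  # equals x after the first backward pass
z.backward(retain_graph=True)
w.grad  # now 2*x: the second backward pass added onto the first
w.grad.zero_()  # zero the gradient in place before the next backward pass
b.grad.zero_()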
Linear regression: getting our feet wet
Construct a set of input data x and the corresponding labels y:
import numpy as np
x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32)  # x is currently an ndarray, which PyTorch cannot train on directly; it has to be converted to a tensor
x_train = x_train.reshape(-1, 1)  # reshape into a column matrix to avoid shape errors later
x_train.shape
y_values = [2*i + 1 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
y_train.shape
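Both shape calls report (11, 1): eleven samples, one column each, which matches the (batch, feature) layout that nn.Linear expects.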
import torch
import torch.nn as nn

A linear regression model is really just a fully connected layer without an activation function.
class LinearRegressionModel(nn.Module):  # however complex the model, start by defining a model class that inherits from nn.Module
    def __init__(self, input_dim, output_dim):  # declare the layers in the constructor
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)  # nn's fully connected layer; pass the input and output dimensions
    def forward(self, x):  # specify which layers the forward pass uses
        out = self.linear(x)
        return out
input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)
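A quick sanity check on the freshly built model; printing it lists the single fully connected layer and its dimensions:

print(model)
# LinearRegressionModel(
#   (linear): Linear(in_features=1, out_features=1, bias=True)
# )
for name, param in model.named_parameters():
    print(name, param.shape)  # linear.weight: torch.Size([1, 1]); linear.bias: torch.Size([1])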
Specify the training parameters and the loss function
epochs = 1000  # number of training epochs
learning_rate = 0.01  # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)  # SGD optimizer (parameters to optimize, learning rate)
criterion = nn.MSELoss()  # use the MSE loss function
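With its default reduction='mean', nn.MSELoss averages the squared errors over all elements; a hand-rolled equivalent, just to make the formula explicit:

def mse(outputs, labels):
    # same result as nn.MSELoss() with the default reduction='mean'
    return ((outputs - labels) ** 2).mean()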
Train the model
for epoch in range(epochs):
    epoch += 1
    # remember to convert to tensors
    inputs = torch.from_numpy(x_train)
    labels = torch.from_numpy(y_train)
    # zero the gradients at every iteration
    optimizer.zero_grad()
    # forward pass
    outputs = model(inputs)
    # compute the loss
    loss = criterion(outputs, labels)
    # backward pass
    loss.backward()
    # update the weight parameters
    optimizer.step()
    if epoch % 50 == 0:
        print('epoch {}, loss {}'.format(epoch, loss.item()))
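Since the labels were generated by y = 2x + 1, the trained weight should end up close to 2 and the bias close to 1; we can read them straight off the layer:

print(model.linear.weight.item(), model.linear.bias.item())  # should approach 2.0 and 1.0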

Test the model's predictions
predicted = model(torch.from_numpy(x_train).requires_grad_()).data.numpy()  # run a forward pass to predict, then convert the result to an ndarray, the format that plotting and pandas expect
predicted
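The comment above mentions plotting; a minimal sketch with matplotlib (assuming it is installed) to compare the predictions against the true labels:

import matplotlib.pyplot as plt
plt.plot(x_train, y_train, 'go', label='true data')
plt.plot(x_train, predicted, '--', label='predictions')
plt.legend()
plt.show()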

Save and load the model
torch.save(model.state_dict(), 'model.pkl')  # save as a dict of the weight and bias parameters
model.load_state_dict(torch.load('model.pkl'))
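To confirm the round trip, load the saved weights into a fresh instance and check that it reproduces the predictions:

model2 = LinearRegressionModel(input_dim, output_dim)
model2.load_state_dict(torch.load('model.pkl'))
print(np.allclose(predicted, model2(torch.from_numpy(x_train)).detach().numpy()))  # True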
Training on the GPU
Just move the data and the model onto the CUDA device:
import torch
import torch.nn as nn
import numpy as np
class LinearRegressionModel(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)
    def forward(self, x):
        out = self.linear(x)
        return out
input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # use the GPU if one is available
model.to(device)  # move the model onto the CUDA device
criterion = nn.MSELoss()
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
epochs = 1000
for epoch in range(epochs):
    epoch += 1
    inputs = torch.from_numpy(x_train).to(device)  # move the training data onto the CUDA device
    labels = torch.from_numpy(y_train).to(device)
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    if epoch % 50 == 0:
        print('epoch {}, loss {}'.format(epoch, loss.item()))
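One GPU-specific gotcha: a tensor living on the GPU cannot be converted to NumPy directly, so move the predictions back to the CPU first:

predicted = model(torch.from_numpy(x_train).to(device)).detach().cpu().numpy()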


