[Deep Learning]: PyTorch from Introduction to Project Practice: Implementing a Linear Neural Network with Concise Code (with code)
2022-07-28 16:57:00 【JoJo's Data Analysis Adventure】
【Deep Learning】:《PyTorch: From Introduction to Project Practice》 Day 3: Implementing a Linear Neural Network with Concise Code (with code)
- This article is part of the 【Deep Learning】:《PyTorch: From Introduction to Project Practice》 column, which records my notes on implementing deep learning with PyTorch. I try to update weekly; you are welcome to subscribe!
- Personal homepage: JoJo's Data Analysis Adventure
- About me: a senior majoring in statistics, recommended for postgraduate study in statistics at a top-3 program.
- If this article helps you, please follow, like, bookmark, and subscribe to the column.
Reference material: this column uses Mu Li's Dive into Deep Learning (《动手学深度学习》) as its main learning material and records my study notes. My ability is limited, so corrections are welcome if you find mistakes. Mu Li has also uploaded teaching videos and materials that you can study:
- Video: Dive into Deep Learning
- Textbook: Dive into Deep Learning

Contents
- 【Deep Learning】:《PyTorch: From Introduction to Project Practice》 Day 3: Implementing a Linear Neural Network with Concise Code (with code)
- 1. Generating the dataset
- 2. Reading the dataset
- 3. Building the linear model
- 4. Initializing the parameters
- 5. Defining the loss function
- 6. Choosing the optimization method
- 7. Complete code
- Recommended reading
In the previous section we learned how to implement a linear regression model from scratch with PyTorch, including generating the dataset, building the loss function, and solving for the parameters with gradient descent. Like many other machine learning frameworks, PyTorch provides packages that automate much of this machinery. This chapter shows how to implement a linear regression model concisely with nn.
1. Generating the dataset
import numpy as np
import torch
from torch.utils import data
Next we define the true w and b and generate the simulated dataset; this step is the same as before.
def synthetic_data(w, b, num_examples):
    X = torch.normal(0, 1, (num_examples, len(w)))  # draw features from a standard normal
    y = torch.matmul(X, w) + b
    y += torch.normal(0, 0.01, y.shape)  # add Gaussian noise
    return X, y.reshape((-1, 1))
true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = synthetic_data(true_w, true_b, 1000)
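Before moving on, it can help to sanity-check the generated tensors (a minimal optional sketch; the printed values will differ from run to run):
# Quick sanity check on the simulated data
print(features.shape)          # torch.Size([1000, 2])
print(labels.shape)            # torch.Size([1000, 1])
print(features[0], labels[0])  # one (feature, label) pair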
2. Reading the dataset
The first difference appears here: we use data.TensorDataset to wrap the tensors into a dataset, and batch_size to set how many samples are drawn at a time.
def load_array(data_arrays, batch_size, is_train=True):
    # TensorDataset: wrap the tensors into a dataset of (feature, label) pairs
    data_set = data.TensorDataset(*data_arrays)
    return data.DataLoader(data_set, batch_size, shuffle=is_train)  # read the data in mini-batches of size batch_size
batch_size = 10
data_iter = load_array((features,labels),batch_size)
next(iter(data_iter))
[tensor([[-0.0384, 1.1566],
[-0.9023, -0.6922],
[-0.0652, 1.1757],
[-0.8569, -1.0172],
[ 1.3489, -0.6855],
[ 0.1463, 0.1577],
[ 0.1615, -2.1549],
[-0.0533, -0.3301],
[-0.9913, 0.2226],
[ 0.1432, -0.9537]]),
tensor([[ 0.1836],
[ 4.7540],
[ 0.0802],
[ 5.9541],
[ 9.2256],
[ 3.9620],
[11.8700],
[ 5.2242],
[ 1.4718],
[ 7.7181]])]
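With 1000 samples and batch_size = 10, one pass over data_iter yields 100 mini-batches. A minimal optional check:
# Each epoch iterates over 1000 / 10 = 100 mini-batches
print(len(data_iter))  # 100
for X, y in data_iter:
    assert X.shape == (10, 2) and y.shape == (10, 1)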
3. Building the linear model
Here we build a linear neural network with nn.
from torch import nn
Here we use the Sequential class to hold the linear layer. Since we have only a single linear layer, Sequential is not strictly necessary, but the algorithms we introduce later usually stack multiple layers, so it is worth treating this as standard practice.
Sequential chains different layers together: the data is passed to the first layer, the first layer's output is passed to the second layer as its input, and so on.
net = nn.Sequential(nn.Linear(2, 1))  # first argument: input feature dimension; second argument: output dimension
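To make the chaining concrete, here is a hypothetical deeper network (not used in this section; the layer sizes are arbitrary and only for illustration):
# A hypothetical two-layer network, just to show how Sequential chains layers;
# our regression model above uses a single nn.Linear(2, 1)
deeper_net = nn.Sequential(
    nn.Linear(2, 8),  # layer 1: 2 input features -> 8 hidden units
    nn.ReLU(),        # nonlinearity between the layers
    nn.Linear(8, 1),  # layer 2: 8 hidden units -> 1 output
)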
4. Initializing the parameters
After defining net, we need to initialize the parameters we want to estimate. As before, there are two: weight, corresponding to the earlier w, and bias, corresponding to the earlier b.
net[0].weight.data.normal_(0, 0.01)  # initialize the first layer's weight
net[0].bias.data.fill_(0)  # initialize the first layer's bias
tensor([0.])
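As a side note, the same initialization can be written with the torch.nn.init helpers, which many codebases prefer over mutating .data directly (an equivalent alternative, not what the code above uses):
# Equivalent initialization via torch.nn.init
nn.init.normal_(net[0].weight, mean=0, std=0.01)
nn.init.zeros_(net[0].bias)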
5. Defining the loss function
Here we use the mean squared error (MSE) as our loss function.
loss = nn.MSELoss()
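Concretely, nn.MSELoss with its default reduction='mean' returns the average of the squared errors. A tiny worked example:
# MSE averages the squared differences between predictions and targets
demo_loss = nn.MSELoss()
y_hat = torch.tensor([1.0, 2.0])
y_true = torch.tensor([0.0, 0.0])
print(demo_loss(y_hat, y_true))  # ((1-0)^2 + (2-0)^2) / 2 = 2.5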
6. Choosing the optimization method
Here we optimize with mini-batch stochastic gradient descent (SGD) to obtain our parameters. The optimizer takes two arguments: the parameters to be estimated and the learning rate, which we set to 0.03.
trainer = torch.optim.SGD(net.parameters(), lr=0.03)
num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()
        l.backward()
        trainer.step()  # update the parameters
    l = loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l:f}')
epoch 1, loss 0.000102
epoch 2, loss 0.000103
epoch 3, loss 0.000104
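Since we generated the data ourselves, we can check the fit by comparing the learned parameters with true_w and true_b (a sanity-check sketch; the exact errors vary from run to run):
# Compare the estimated parameters with the true ones
w = net[0].weight.data
b = net[0].bias.data
print('error in w:', true_w - w.reshape(true_w.shape))
print('error in b:', true_b - b)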
7. Complete code
# Import the required libraries
import numpy as np
import torch
from torch.utils import data
from torch import nn

'''Define the function that simulates the dataset'''
def synthetic_data(w, b, num_examples):
    X = torch.normal(0, 1, (num_examples, len(w)))  # draw features from a standard normal
    y = torch.matmul(X, w) + b  # compute y
    y += torch.normal(0, 0.01, y.shape)  # add Gaussian noise
    return X, y.reshape((-1, 1))

'''Generate the dataset'''
true_w = torch.tensor([2, -3.4])  # define w
true_b = 4.2  # define b
features, labels = synthetic_data(true_w, true_b, 1000)  # generate the simulated dataset

'''Load the dataset'''
def load_array(data_arrays, batch_size, is_train=True):
    # TensorDataset: wrap the tensors into a dataset of (feature, label) pairs
    data_set = data.TensorDataset(*data_arrays)
    return data.DataLoader(data_set, batch_size, shuffle=is_train)  # read the data in mini-batches of size batch_size

batch_size = 10
data_iter = load_array((features, labels), batch_size)

'''Create the linear neural network'''
net = nn.Sequential(nn.Linear(2, 1))  # input feature dimension 2, output dimension 1
net[0].weight.data.normal_(0, 0.01)  # initialize the first layer's weight
net[0].bias.data.fill_(0)  # initialize the first layer's bias

'''Define the MSE loss function'''
loss = nn.MSELoss()

'''Create the SGD optimizer'''
trainer = torch.optim.SGD(net.parameters(), lr=0.03)  # SGD optimizer

'''Train'''
num_epochs = 3  # number of epochs
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)  # compute the mini-batch loss
        trainer.zero_grad()
        l.backward()
        trainer.step()  # update the parameters
    l = loss(net(features), labels)  # compute the loss on the full dataset
    print(f'epoch {epoch + 1}, loss {l:f}')
epoch 1, loss 0.000240
epoch 2, loss 0.000099
epoch 3, loss 0.000100
Recommended reading
- 《Learn PyTorch Together in 100 Days》 Day 1: Data manipulation and automatic differentiation
- 《Learn PyTorch Together in 100 Days》 Day 2: Linear regression from scratch (with detailed code)
That is all for this chapter. If it helped you, please like, bookmark, comment, and follow!