【Deep Learning】:《PyTorch: From Introduction to Project Practice》 Day 3: A Concise Implementation of a Linear Neural Network (with code)
2022-07-28 16:57:00 【JOJO's Data Analysis Adventure】
- This article is part of the 【Deep Learning】:《PyTorch: From Introduction to Project Practice》 column, which records my notes on implementing deep learning with PyTorch. I try to update every week; you are welcome to subscribe!
- Personal homepage: JoJo's Data Analysis Adventure
- About me: a senior undergraduate majoring in statistics, admitted by recommendation to a top-3 statistics graduate program
- If this article helps you, please follow, like, bookmark, and subscribe to the column
Reference: this column follows Mu Li's 《Dive into Deep Learning》 as its main study material and records my study notes along the way. My ability is limited, so if you find mistakes, corrections are welcome. Mu Li has also uploaded teaching videos and materials, which you can study as well.
- Video: Dive into Deep Learning
- Textbook: Dive into Deep Learning

Table of contents
- 【Deep Learning】:《PyTorch: From Introduction to Project Practice》 Day 3: A Concise Implementation of a Linear Neural Network (with code)
- 1. Generating the dataset
- 2. Reading the dataset
- 3. Building the linear model
- 4. Initializing the parameters
- 5. Defining the loss function
- 6. Choosing the optimization method
- 7. Complete code
- Recommended reading
In the previous section, we learned how to implement a linear regression model from scratch with PyTorch, including generating the dataset, building the loss function, and solving for the parameters with gradient descent. Like many other machine learning frameworks, PyTorch contains many packages that automate much of this workflow. This chapter describes how to implement a linear regression model concisely with the nn module.
1. Generating the dataset
import numpy as np
import torch
from torch.utils import data
Next, we define the true w and b and generate the simulated dataset. This step is the same as before:
def synthetic_data(w, b, num_examples):
    """Generate y = Xw + b + noise."""
    X = torch.normal(0, 1, (num_examples, len(w)))  # features drawn from a standard normal
    y = torch.matmul(X, w) + b
    y += torch.normal(0, 0.01, y.shape)  # add a small amount of Gaussian noise
    return X, y.reshape((-1, 1))
true_w = torch.tensor([2, -3.4])  # true weight vector
true_b = 4.2                      # true bias
features, labels = synthetic_data(true_w, true_b, 1000)  # 1000 simulated examples
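As a quick sanity check (an addition of mine, not part of the original walkthrough), we can verify the shapes of the generated tensors: with 1000 examples and 2 features, features should have shape (1000, 2) and labels shape (1000, 1).

# Illustrative sanity check on the simulated data
print(features.shape)  # expected: torch.Size([1000, 2])
print(labels.shape)    # expected: torch.Size([1000, 1])
print(features[0], labels[0])  # one (feature, label) pair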
2. Reading the dataset
The first difference from before appears here: we wrap the data tensors in a data.TensorDataset, and use batch_size to set how many samples are drawn at a time:
def load_array(data_arrays, batch_size, is_train=True):
    """Construct a PyTorch data iterator."""
    # TensorDataset: wrap the tensors into a dataset of (feature, label) pairs
    data_set = data.TensorDataset(*data_arrays)
    # DataLoader reads the data in minibatches of batch_size, shuffling when training
    return data.DataLoader(data_set, batch_size, shuffle=is_train)
batch_size = 10
data_iter = load_array((features, labels), batch_size)
next(iter(data_iter))  # peek at the first minibatch
[tensor([[-0.0384, 1.1566],
[-0.9023, -0.6922],
[-0.0652, 1.1757],
[-0.8569, -1.0172],
[ 1.3489, -0.6855],
[ 0.1463, 0.1577],
[ 0.1615, -2.1549],
[-0.0533, -0.3301],
[-0.9913, 0.2226],
[ 0.1432, -0.9537]]),
tensor([[ 0.1836],
[ 4.7540],
[ 0.0802],
[ 5.9541],
[ 9.2256],
[ 3.9620],
[11.8700],
[ 5.2242],
[ 1.4718],
[ 7.7181]])]
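Each call to the iterator returns one minibatch: 10 rows of features and the 10 matching labels. Since we pass shuffle=is_train, samples are drawn in random order during training. As a small illustration (my addition, not from the original), one full pass over the loader yields 1000 / 10 = 100 minibatches:

# Counting the minibatches in one epoch (illustrative)
num_batches = sum(1 for _ in data_iter)
print(num_batches)  # 100 = 1000 examples / batch_size of 10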
3. Building the linear model
Now we build a linear neural network with the nn module:
from torch import nn
We use the Sequential class to hold the linear layer. Since we have only one linear layer here, Sequential is strictly speaking unnecessary, but the algorithms introduced later usually stack multiple layers, so we can treat this as standard practice.
Sequential chains layers together: the data is passed into the first layer, the output of the first layer is passed into the second layer as its input, and so on.
net = nn.Sequential(nn.Linear(2, 1))  # first argument: input feature dimension; second: output dimension
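As a side note (my addition, not from the original), you can print the network or index into the Sequential container to inspect the layer and its parameter shapes:

# Inspecting the network (illustrative)
print(net)                  # shows the Linear(in_features=2, out_features=1, bias=True) layer
print(net[0].weight.shape)  # torch.Size([1, 2])
print(net[0].bias.shape)    # torch.Size([1])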
4. Initializing the parameters
After defining net, we need to initialize the parameters to be estimated. This is still similar to before: we initialize two parameters, the weight (the w from before) and the bias (the b from before).
net[0].weight.data.normal_(0, 0.01)  # initialize the first layer's weight from N(0, 0.01^2)
net[0].bias.data.fill_(0)            # initialize the first layer's bias to 0
tensor([0.])
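Equivalently (a sketch I am adding here, not part of the original text), the helpers in torch.nn.init perform the same in-place initialization without touching .data directly:

# Equivalent initialization via torch.nn.init (illustrative)
nn.init.normal_(net[0].weight, mean=0, std=0.01)  # same effect as .data.normal_(0, 0.01)
nn.init.zeros_(net[0].bias)                       # same effect as .data.fill_(0)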
5. Defining the loss function
Here we use the mean squared error (MSE) as our loss function:
loss = nn.MSELoss()
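By default, nn.MSELoss averages the squared errors over all elements. As a quick check of that behavior (my addition), it matches computing the mean squared difference by hand:

# Illustrative check: nn.MSELoss is the mean of the squared differences
y_hat = torch.tensor([[1.0], [2.0]])
y_true = torch.tensor([[1.5], [1.0]])
print(loss(y_hat, y_true))             # tensor(0.6250)
print(((y_hat - y_true) ** 2).mean())  # tensor(0.6250)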
6. Choosing the optimization method
Here we use minibatch stochastic gradient descent (SGD) to solve for the parameters. The optimizer takes two arguments: the parameters to be estimated and the learning rate, which we set to 0.03 here.
trainer = torch.optim.SGD(net.parameters(), lr=0.03)
num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)  # forward pass: minibatch loss
        trainer.zero_grad()  # clear the accumulated gradients
        l.backward()         # backpropagate
        trainer.step()       # update the parameters
    l = loss(net(features), labels)  # loss on the full dataset after this epoch
    print(f'epoch {epoch + 1}, loss {l:f}')
epoch 1, loss 0.000102
epoch 2, loss 0.000103
epoch 3, loss 0.000104
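Since the loss has converged, the estimated parameters should be close to the true ones. A short check along these lines (my addition, in the spirit of 《Dive into Deep Learning》):

# Comparing the learned parameters with the true ones (illustrative)
w = net[0].weight.data
b = net[0].bias.data
print('error in estimating w:', true_w - w.reshape(true_w.shape))
print('error in estimating b:', true_b - b)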
7. Complete code
# Import the relevant libraries
import numpy as np
import torch
from torch.utils import data
from torch import nn

'''Define the function that simulates the dataset'''
def synthetic_data(w, b, num_examples):
    X = torch.normal(0, 1, (num_examples, len(w)))  # features drawn from a standard normal
    y = torch.matmul(X, w) + b                      # compute y
    y += torch.normal(0, 0.01, y.shape)             # add Gaussian noise
    return X, y.reshape((-1, 1))

'''Generate the dataset'''
true_w = torch.tensor([2, -3.4])  # define w
true_b = 4.2                      # define b
features, labels = synthetic_data(true_w, true_b, 1000)  # 1000 simulated examples

'''Load the dataset'''
def load_array(data_arrays, batch_size, is_train=True):
    # TensorDataset: wrap the tensors into a dataset of (feature, label) pairs
    data_set = data.TensorDataset(*data_arrays)
    # DataLoader reads the data in minibatches of batch_size, shuffling when training
    return data.DataLoader(data_set, batch_size, shuffle=is_train)

batch_size = 10
data_iter = load_array((features, labels), batch_size)

'''Build the linear neural network'''
net = nn.Sequential(nn.Linear(2, 1))  # input feature dimension 2, output dimension 1
net[0].weight.data.normal_(0, 0.01)   # initialize the first layer's weight
net[0].bias.data.fill_(0)             # initialize the first layer's bias

'''Define the loss function (MSE)'''
loss = nn.MSELoss()

'''Create the SGD optimizer'''
trainer = torch.optim.SGD(net.parameters(), lr=0.03)

'''Train'''
num_epochs = 3  # number of epochs
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)  # minibatch loss
        trainer.zero_grad()
        l.backward()
        trainer.step()       # update the parameters
    l = loss(net(features), labels)  # loss on the full dataset
    print(f'epoch {epoch + 1}, loss {l:f}')
epoch 1, loss 0.000240
epoch 2, loss 0.000099
epoch 3, loss 0.000100
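Note that the reported losses vary slightly from run to run because both the simulated data and the weight initialization are random. If you want reproducible numbers (my suggestion, not in the original), seed PyTorch's random number generator before generating the data:

torch.manual_seed(42)  # illustrative seed; any fixed integer gives reproducible runs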
Recommended reading
- 《Learning PyTorch Together in 100 Days》 Day 1: Data manipulation and automatic differentiation
- 《Learning PyTorch Together in 100 Days》 Day 2: Linear regression from scratch (with detailed code)
That is all for this chapter. If it helped you, please like, bookmark, comment, and follow!