PyTorch framework to train linear regression model (CPU and GPU environment)
2022-08-03 13:15:00 【csp_】
Activity: CSDN 21-Day Learning Challenge
Constructing the data
The code below is run in Jupyter. First, construct a set of input data x and its corresponding output values y:
import numpy as np
import torch
import torch.nn as nn
x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
y_values = [9*i + 9 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
print(x_values)
print(y_values)
Result (the data follow y = 9x + 9, so the true weight and the true bias are both 9):
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99]
CPU
Linear regression is essentially a fully connected layer without an activation function. First, define a class LinearRegressionModel for the linear regression model:
class LinearRegressionModel(nn.Module):
    # Constructor
    def __init__(self, input_dim, output_dim):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)

    # Forward pass, overriding the method inherited from nn.Module
    def forward(self, x):
        out = self.linear(x)
        return out
Initialize the model:
input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)
model
Output (bias=True, i.e. the bias term is included):
LinearRegressionModel(
  (linear): Linear(in_features=1, out_features=1, bias=True)
)
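To confirm that the layer really carries a bias term, the layer's parameters can be printed directly (a minimal sketch; the values are still random at this point):

print(model.linear.weight)   # weight tensor of shape (1, 1)
print(model.linear.bias)     # bias tensor of shape (1,)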
Specify the hyperparameters and the loss function:
epochs = 1000
learning_rate = 0.01
# Optimizer: stochastic gradient descent
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Loss function: mean squared error
criterion = nn.MSELoss()
Train the model:
A note on outputs = model(inputs): this call is equivalent to outputs = model.__call__(inputs), because __call__ allows an instance of a class to be called like a function, and nn.Module's __call__ in turn invokes the forward method. See also: why can outputs = model(inputs) be written directly in Python, and which language feature makes this work?
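As a minimal, standalone illustration of the __call__ mechanism (not tied to PyTorch itself):

class Greeter:
    def __call__(self, name):
        # Defining __call__ lets an instance be invoked like a function
        return 'hello, ' + name

g = Greeter()
print(g('pytorch'))   # prints: hello, pytorch

nn.Module works the same way: its __call__ does some bookkeeping (hooks, etc.) and then dispatches to forward, which is why a subclass only needs to implement forward.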
for epoch in range(epochs):
    epoch += 1
    # Convert the NumPy arrays to tensors
    inputs = torch.from_numpy(x_train)
    labels = torch.from_numpy(y_train)
    # The gradients must be zeroed at every iteration
    optimizer.zero_grad()
    # Forward pass
    outputs = model(inputs)
    # Compute the loss
    loss = criterion(outputs, labels)
    # Backward pass
    loss.backward()
    # Update the weights
    optimizer.step()
    if epoch % 50 == 0:
        print('epoch {}, loss {}'.format(epoch, loss.item()))
Result:
epoch 50, loss 7.823598861694336
epoch 100, loss 4.462289333343506
epoch 150, loss 2.545119524002075
epoch 200, loss 1.451640009880066
epoch 250, loss 0.8279607892036438
epoch 300, loss 0.47223785519599915
epoch 350, loss 0.269347220659256
epoch 400, loss 0.1536264419555664
epoch 450, loss 0.08762145042419434
epoch 500, loss 0.04997712001204491
epoch 550, loss 0.028505485504865646
epoch 600, loss 0.016258591786026955
epoch 650, loss 0.009272975847125053
epoch 700, loss 0.005288919433951378
epoch 750, loss 0.003016551025211811
epoch 800, loss 0.0017205380136147141
epoch 850, loss 0.0009813279611989856
epoch 900, loss 0.0005597395356744528
epoch 950, loss 0.0003192391886841506
epoch 1000, loss 0.00018208501569461077
Test the model's predictions:
predicted = model(torch.from_numpy(x_train).requires_grad_()).data.numpy()
predicted
Output:
array([[ 8.974897],
       [17.978512],
       [26.982128],
       [35.98574 ],
       [44.989357],
       [53.992973],
       [62.99659 ],
       [72.000206],
       [81.00382 ],
       [90.00744 ],
       [99.011055]], dtype=float32)
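Since the data were generated from y = 9x + 9, the learned parameters should be close to weight = 9 and bias = 9. A minimal check (exact values vary slightly between runs):

print(model.linear.weight.item())   # roughly 9.0
print(model.linear.bias.item())     # roughly 9.0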
Saving and loading the model:
torch.save(model.state_dict(), 'model.pkl')
model.load_state_dict(torch.load('model.pkl'))
Output:
<All keys matched successfully>
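In practice the weights are usually restored into a freshly constructed model rather than the one that was just trained; a minimal sketch:

# Rebuild an untrained model and restore the saved weights
new_model = LinearRegressionModel(input_dim, output_dim)
new_model.load_state_dict(torch.load('model.pkl'))
new_model.eval()   # switch to inference mode
print(new_model(torch.from_numpy(x_train)).data.numpy())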
GPU
First make sure a GPU environment is available, then initialize the device variable:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Check the value of device:
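A minimal sketch for inspecting the selected device (the printed value depends on the machine):

print(device)                      # cuda:0 if a GPU is available, otherwise cpu
print(torch.cuda.is_available())   # True when CUDA can be used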
After that, simply move the data (inputs, labels) and the model to the CUDA device; the rest of the training code stays the same.
Code:
import torch
import torch.nn as nn
import numpy as np

class LinearRegressionModel(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        out = self.linear(x)
        return out

input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)

# Select the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = nn.MSELoss()
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
epochs = 1000

# x_train and y_train are constructed the same way as in the CPU example
for epoch in range(epochs):
    epoch += 1
    # Move the input tensors to the same device as the model
    inputs = torch.from_numpy(x_train).to(device)
    labels = torch.from_numpy(y_train).to(device)
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    if epoch % 50 == 0:
        print('epoch {}, loss {}'.format(epoch, loss.item()))
Result:
epoch 50, loss 8.292747497558594
epoch 100, loss 4.729876518249512
epoch 150, loss 2.6977455615997314
epoch 200, loss 1.5386940240859985
epoch 250, loss 0.8776141405105591
epoch 300, loss 0.5005567669868469
epoch 350, loss 0.28549838066101074
epoch 400, loss 0.16283737123012543
epoch 450, loss 0.09287738054990768
epoch 500, loss 0.052974116057157516
epoch 550, loss 0.0302141010761261
epoch 600, loss 0.017233209684491158
epoch 650, loss 0.009829038754105568
epoch 700, loss 0.0056058201007544994
epoch 750, loss 0.0031971274875104427
epoch 800, loss 0.0018234187737107277
epoch 850, loss 0.0010400479659438133
epoch 900, loss 0.0005932282656431198
epoch 950, loss 0.0003383254224900156
epoch 1000, loss 0.000192900508409366
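One detail worth noting: tensors that live on the GPU cannot be converted to NumPy directly, so predictions have to be moved back to the CPU first. A minimal sketch:

# Predictions are computed on the GPU; move them back to the CPU before converting to NumPy
inputs = torch.from_numpy(x_train).to(device)
predicted = model(inputs).detach().cpu().numpy()
print(predicted)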
Definitions of CPU and GPU:
Adapted from: 详解gpu是什么和cpu的区别
- CPU (Central Processing Unit): a very-large-scale integrated circuit that serves as the computing core and control core (Control Unit) of a computer. Its main job is to interpret computer instructions and process the data used by software.
- GPU (Graphics Processing Unit): a microprocessor dedicated to graphics computation in personal computers, workstations, game consoles, and some mobile devices (such as tablets and smartphones).
Differences between CPU and GPU:
1. Caching
- A CPU has a large, complex cache hierarchy; mainstream CPU chips can carry as many as four levels of cache. These caches consume a large number of transistors and a lot of power at run time.
- A GPU's caches are simple; mainstream GPU chips have at most two levels of cache. The transistor budget and power saved on caching can be spent on ALU units instead, which makes the GPU more efficient than the CPU for this kind of work.
2. Response model
- A CPU must respond in real time, and the speed requirement for a single task is very high, so many levels of cache are needed to guarantee single-task latency.
- A GPU queues up all of its tasks and then processes them in batches, so its demands on the cache are relatively low.
3. Floating-point arithmetic
- Besides floating-point and integer arithmetic, a CPU supports many other instruction sets, such as multimedia decoding and hardware decoding, so it is a generalist. A CPU focuses on single-thread performance and keeping the instruction stream uninterrupted, which requires spending more transistors and power on control logic, leaving less of the budget for floating-point computation.
- A GPU does essentially nothing but floating-point arithmetic; its design is simple, which also lets it run faster. A GPU focuses on throughput: a single instruction drives many computations, less energy goes into control logic, and the resources saved can be devoted to floating-point computation.
4. Application domains
- CPUs excel at workloads such as operating systems, which demand rapid responses to real-time events and must be optimized for latency, so transistors and power go into branch prediction, out-of-order execution, low-latency caches, and other control structures.
- GPUs suit highly predictable workloads made up of large numbers of similar operations, with a high-latency, high-throughput architecture (see the timing sketch below).
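The throughput difference is easy to see with a rough timing sketch (a toy benchmark under the assumption that a CUDA device is available; absolute numbers depend entirely on the hardware):

import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# Matrix multiplication on the CPU
t0 = time.time()
c_cpu = a @ b
print('CPU:', time.time() - t0, 'seconds')

# Matrix multiplication on the GPU (only meaningful if CUDA is available)
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()   # make sure the copies have finished
    t0 = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()   # wait for the kernel to finish before stopping the timer
    print('GPU:', time.time() - t0, 'seconds')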