[Deep Learning]: Day 4 of "PyTorch: Introduction to Project Practice": Implementing Logistic Regression from 0 to 1 (with source code)
2022-07-28 16:57:00 【JoJo's Data Analysis Adventure】
【Deep Learning】: 《PyTorch: Introduction to Project Practice》: Implementing Logistic Regression from 0 to 1
- This article belongs to the 【Deep Learning】: 《PyTorch: Introduction to Project Practice》 column, which records my notes on implementing deep learning with PyTorch. I try to update every week; you are welcome to subscribe!
- Personal homepage: JoJo's Data Analysis Adventure
- About me: I am a senior majoring in statistics, recommended for graduate study in statistics at a top-3 program
- If this helps you, please follow, like, bookmark, and subscribe to the column
Reference material: this column mainly uses Mu Li's "Dive into Deep Learning" as its study material and records my study notes. My ability is limited, so if there are mistakes, corrections are welcome. Mu Li has also uploaded teaching videos and materials, which you can study as well.
- Video: Dive into Deep Learning
- Textbook: Dive into Deep Learning

Table of Contents
- 【Deep Learning】: 《PyTorch: Introduction to Project Practice》: Implementing Logistic Regression from 0 to 1
- 1. Create a dataset
- 2. Initialize parameters
- 3. Compute the model
- 4. Define the loss function
- 5. Solve with gradient descent
- 6. Complete model training code
- 7. Evaluate the model
- 🥤8. Implement logistic regression with nn.Module
Although logistic regression has "regression" in its name, it is actually a classification algorithm, mainly used for binary classification problems. For the theory, see my article: Machine Learning Algorithms: Classification Algorithms in Detail.
The basic model is as follows:
$$H(x) = \frac{1}{1+e^{-W^TX}}$$
Loss function:
$$cost(W) = -\frac{1}{m}\sum\left[\,y\log(H(x)) + (1-y)\log(1-H(x))\,\right]$$
where y = 1 or 0. As the loss function shows, the closer H(x) is to y, the smaller the loss. Now let's see how to implement logistic regression with PyTorch.
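Before coding the full model, a quick numerical sanity check (my own illustrative sketch, not part of the original walkthrough) shows the behavior just described: the loss is small when the prediction is close to the label and large when it is far away.
import torch

# Illustrative check: cross-entropy rewards predictions close to the labels
y_demo = torch.tensor([1.0, 0.0])    # true labels
h_close = torch.tensor([0.9, 0.1])   # predictions close to the labels
h_far = torch.tensor([0.1, 0.9])     # predictions far from the labels

def demo_loss(y, h):
    return (-(y * torch.log(h) + (1 - y) * torch.log(1 - h))).mean()

print(demo_loss(y_demo, h_close))  # tensor(0.1054), small loss
print(demo_loss(y_demo, h_far))    # tensor(2.3026), large loss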
1. Create a dataset
import torch
import torch.nn as nn
import torch.nn.functional as F  # built-in neural network functions
import torch.optim as optim
# Set the random seed so the results are reproducible
torch.manual_seed(1)
<torch._C.Generator at 0x23225f75f78>
x = torch.tensor([[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]],dtype=torch.float)
y = torch.tensor([[0], [0], [0], [1], [1], [1]],dtype=torch.float)
Consider the following classification problem: given the number of hours each student spent watching lectures and working in the code lab, predict whether the student passed the course. For example, the first student (index 0) watched lectures for one hour and spent two hours in the lab ([1, 2]), and failed the course ([0]).
2. Initialize parameters
As before, we initialize the two parameters W and b to zero:
W = torch.zeros((2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
3. Compute the model
h = 1 / (1 + torch.exp(-(torch.matmul(x,W) + b)))
tensor([[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000]], grad_fn=<SigmoidBackward0>)
In PyTorch, we can also use the torch.sigmoid() function to get the same result:
h = torch.sigmoid(torch.matmul(x,W)+b)
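To confirm that the two formulations agree, here is a quick check (my addition, not in the original post):
# The manual sigmoid and torch.sigmoid should give identical results
h_manual = 1 / (1 + torch.exp(-(torch.matmul(x, W) + b)))
h_builtin = torch.sigmoid(torch.matmul(x, W) + b)
print(torch.allclose(h_manual, h_builtin))  # True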
4. Define the loss function
def loss_fun(y, h):
    return (-(y * torch.log(h) +
              (1 - y) * torch.log(1 - h))).mean()
The nn module provides many built-in functions, including the cross-entropy function F.binary_cross_entropy, which gives the same result as the code above:
F.binary_cross_entropy(h, y)
tensor(0.6931, grad_fn=<BinaryCrossEntropyBackward0>)
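Likewise, we can verify (my own check, not from the original) that the hand-written loss_fun matches the built-in function:
# Both loss computations should agree up to floating-point precision
print(torch.allclose(loss_fun(y, h), F.binary_cross_entropy(h, y)))  # True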
5. Solve with gradient descent
optim contains common optimization algorithms, including Adam, SGD, and others. Here we again use stochastic gradient descent as before; other optimizers will be introduced later.
optimizer = optim.SGD([W, b], lr=0.05)
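As a brief sketch of what switching optimizers looks like (Adam is covered later; the learning rate 0.01 below is just an illustrative guess, and this optimizer is not used in the training code that follows):
# Alternative optimizer (illustrative sketch): Adam adapts the learning rate
# per parameter and typically starts with a smaller lr than SGD
optimizer_adam = optim.Adam([W, b], lr=0.01)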
6. Complete model training code
'''Generate the dataset'''
x = torch.tensor([[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]], dtype=torch.float)
y = torch.tensor([[0], [0], [0], [1], [1], [1]], dtype=torch.float)
'''Initialize the parameters'''
W = torch.zeros((2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
'''Train the model'''
optimizer = optim.SGD([W, b], lr=0.5)
nb_epochs = 1000
for epoch in range(nb_epochs + 1):
    # Compute h
    h = torch.sigmoid(x.matmul(W) + b)
    # Compute the loss
    cost = -(y * torch.log(h) +
             (1 - y) * torch.log(1 - h)).mean()
    # Gradient descent step
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()
    if epoch % 100 == 0:
        print('Epoch {:4d}/{} Cost: {:.6f}'.format(
            epoch, nb_epochs, cost.item()
        ))
Epoch 0/1000 Cost: 0.693147
Epoch 100/1000 Cost: 0.232941
Epoch 200/1000 Cost: 0.147042
Epoch 300/1000 Cost: 0.107431
Epoch 400/1000 Cost: 0.084848
Epoch 500/1000 Cost: 0.070247
Epoch 600/1000 Cost: 0.060012
Epoch 700/1000 Cost: 0.052428
Epoch 800/1000 Cost: 0.046575
Epoch 900/1000 Cost: 0.041916
Epoch 1000/1000 Cost: 0.038117
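After training, it can be instructive to look at the fitted parameters (my addition, not in the original post):
# Inspect the learned parameters
print(W)  # weights for [lecture hours, lab hours]
print(b)  # bias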
7. Evaluate the model
After training the model, we want to check how well it fits the training set.
# First, compute h using the estimated parameters
h = torch.sigmoid(x.matmul(W) + b)
print(h)
tensor([[0.0033],
[0.0791],
[0.1106],
[0.8929],
[0.9880],
[0.9968]], grad_fn=<SigmoidBackward0>)
# Values >= 0.5 are True, values < 0.5 are False
prediction = h >= torch.FloatTensor([0.5])
print(prediction)
tensor([[False],
[False],
[False],
[ True],
[ True],
[ True]])
# Note that in Python, False == 0 and True == 1
print(prediction)
print(y)
tensor([[False],
[False],
[False],
[ True],
[ True],
[ True]])
tensor([[0.],
[0.],
[0.],
[1.],
[1.],
[1.]])
# Check where the predictions match the true labels
correct_prediction = prediction.float() == y
print(correct_prediction)
tensor([[True],
[True],
[True],
[True],
[True],
[True]])
# Accuracy: proportion of correct predictions among all samples
accuracy = correct_prediction.sum().item() / len(correct_prediction)
print('The model has an accuracy of {:2.2f}% for the training set.'.format(accuracy * 100))
The model has an accuracy of 100.00% for the training set.
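With the trained W and b we can also score an unseen student. The sketch below is my addition (the hours are made up for illustration); it predicts the pass probability for a hypothetical student with 4 lecture hours and 2 lab hours:
# Predict for a new, hypothetical student: 4 lecture hours, 2 lab hours
with torch.no_grad():  # no gradients needed at inference time
    x_new = torch.tensor([[4.0, 2.0]])
    p_new = torch.sigmoid(x_new.matmul(W) + b)
    print(p_new, p_new >= 0.5)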
🥤8. Implement logistic regression with nn.Module
The above demonstrates the underlying principles of logistic regression, implemented step by step. In practice, however, we usually use nn.Module or the nn functional API. Below is the simplified implementation of logistic regression.
'''Define a binary classifier'''
class BinaryClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(self.linear(x))
model = BinaryClassifier()
'''Define the stochastic gradient descent optimizer'''
optimizer = optim.SGD(model.parameters(), lr=0.7)
'''Train the model'''
nb_epochs = 100
for epoch in range(nb_epochs + 1):
    # Compute h
    hypothesis = model(x)
    # Compute the loss
    cost = F.binary_cross_entropy(hypothesis, y)
    # Gradient descent step
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()
    # Print progress
    if epoch % 10 == 0:
        prediction = hypothesis >= torch.FloatTensor([0.5])
        correct_prediction = prediction.float() == y
        accuracy = correct_prediction.sum().item() / len(correct_prediction)
        print('Epoch {:4d}/{} Cost: {:.6f} Accuracy {:2.2f}%'.format(
            epoch, nb_epochs, cost.item(), accuracy * 100,
        ))
Epoch 0/100 Cost: 0.734527 Accuracy 50.00%
Epoch 10/100 Cost: 0.446570 Accuracy 66.67%
Epoch 20/100 Cost: 0.448868 Accuracy 66.67%
Epoch 30/100 Cost: 0.375859 Accuracy 83.33%
Epoch 40/100 Cost: 0.318583 Accuracy 83.33%
Epoch 50/100 Cost: 0.268096 Accuracy 83.33%
Epoch 60/100 Cost: 0.222295 Accuracy 100.00%
Epoch 70/100 Cost: 0.183465 Accuracy 100.00%
Epoch 80/100 Cost: 0.158036 Accuracy 100.00%
Epoch 90/100 Cost: 0.144541 Accuracy 100.00%
Epoch 100/100 Cost: 0.134652 Accuracy 100.00%
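A common variant worth knowing (my own sketch, not from the original post): drop the Sigmoid from the model and apply F.binary_cross_entropy_with_logits to the raw linear output, which fuses the sigmoid and the cross-entropy for better numerical stability.
# Numerically stable variant (sketch): train on logits, apply sigmoid only at inference
logit_model = nn.Linear(2, 1)
optimizer = optim.SGD(logit_model.parameters(), lr=0.7)
for epoch in range(101):
    logits = logit_model(x)
    cost = F.binary_cross_entropy_with_logits(logits, y)  # sigmoid + BCE in one step
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()
prediction = torch.sigmoid(logit_model(x)) >= 0.5  # sigmoid applied at inference time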
That's all for this chapter. If it helped you, please like, bookmark, comment, and follow!