Pytorch linear regression
2022-07-05 11:42:00 【My abyss, my abyss】
1. Import tabular data

```python
import pandas as pd

filename = "./data.csv"
data = pd.read_csv(filename)
features = data.iloc[:, 1:]   # every column after the first holds a feature
labels = data.iloc[:, 0]      # the first column holds the label
```
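The snippet above assumes a `data.csv` whose first column is the label and whose remaining columns are features. The file is not included with the post, so a minimal synthetic stand-in can be generated like this (the column names and the `y = 2*x1 - 3*x2 + 1` relationship are placeholders of my own, not from the original data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))            # two feature columns
y = 2 * x[:, 0] - 3 * x[:, 1] + 1        # placeholder linear target
df = pd.DataFrame({"label": y, "x1": x[:, 0], "x2": x[:, 1]})
df.to_csv("data.csv", index=False)       # label first, features after
```

Any CSV with the same column layout works; only the label-first convention matters to the rest of the code.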
2. Convert to Tensor

```python
import torch

# DataFrame ----> Tensor
features = torch.tensor(features.values, dtype=torch.float32)
labels = torch.tensor(labels.values, dtype=torch.float32)
labels = torch.reshape(labels, (-1, 1))   # column vector: (n,) -> (n, 1)
```
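The reshape matters: `MSELoss` broadcasts silently if the labels stay one-dimensional while the model output is `(n, 1)`, which inflates the loss. A quick shape check with dummy values (the sizes here are illustrative):

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0])   # shape (3,): one dimension
t = torch.reshape(t, (-1, 1))       # shape (3, 1): one label per row
print(t.shape)
```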
3. Build an iterator to read the data in batches

```python
from torch.utils import data

def load_array(data_arrays, batch_size, is_train=True):
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)

batch_size = 10
data_iter = load_array((features, labels), batch_size)
next(iter(data_iter))   # peek at one mini-batch
```
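`load_array` wraps the tensors in a `TensorDataset` and hands them to a `DataLoader`, so each iteration yields one shuffled mini-batch of `(features, labels)`. A self-contained sketch with random tensors (the sizes are illustrative, not from the original data):

```python
import torch
from torch.utils import data

def load_array(data_arrays, batch_size, is_train=True):
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)

features = torch.randn(50, 2)
labels = torch.randn(50, 1)
X, y = next(iter(load_array((features, labels), batch_size=10)))
print(X.shape, y.shape)   # each mini-batch holds 10 rows
```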
4. Define the network

```python
from torch import nn

class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.layer1 = nn.Linear(2, 1)   # 2 input features -> 1 output

    def forward(self, x):
        return self.layer1(x)
```
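Before training, the module can be sanity-checked with a dummy batch: even an untrained `nn.Linear(2, 1)` must map a `(batch, 2)` input to a `(batch, 1)` output. A minimal check (the batch size of 4 is arbitrary):

```python
import torch
from torch import nn

class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.layer1 = nn.Linear(2, 1)

    def forward(self, x):
        return self.layer1(x)

net = LinearRegression()
out = net(torch.randn(4, 2))   # dummy batch of 4 samples
print(out.shape)
```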
5. Train the network

```python
import matplotlib.pyplot as plt
from torch import optim

model = LinearRegression()
gpu = torch.device('cuda')   # defined but unused; everything here stays on the CPU
mse_loss = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
Loss = []
epochs = 1000

def train():
    for i in range(epochs):
        for X, y in data_iter:
            y_hat = model(X)            # forward pass
            loss = mse_loss(y_hat, y)   # compute the loss
            Loss.append(loss.item())    # record the per-batch loss
            optimizer.zero_grad()       # clear accumulated gradients
            loss.backward()             # backpropagate to compute gradients
            optimizer.step()            # update the weights
        print(i, loss.item(), sep='\t')

train()   # run training

for parameter in model.parameters():
    print(parameter)

plt.plot(Loss)
```
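For linear regression there is a closed-form answer to check the trained weights against: the least-squares solution. A hedged sketch using `torch.linalg.lstsq` on synthetic data (the true weights `[2, -3]` and bias `1` are placeholders of my own):

```python
import torch

torch.manual_seed(0)
X = torch.randn(200, 2)
true_w = torch.tensor([[2.0], [-3.0]])
y = X @ true_w + 1.0                           # linear target with bias 1.0

# append a column of ones so lstsq also recovers the bias term
X_aug = torch.cat([X, torch.ones(200, 1)], dim=1)
sol = torch.linalg.lstsq(X_aug, y).solution    # shape (3, 1): w1, w2, b
print(sol.squeeze())
```

After enough epochs, the weight and bias printed by `model.parameters()` should land close to whatever this solver returns for the same data.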
6. Complete code

```python
import matplotlib.pyplot as plt
import pandas as pd
import torch
from torch import nn, optim
from torch.utils import data


class LinearRegression(nn.Module):
    def __init__(self, feature_nums):
        super(LinearRegression, self).__init__()
        self.layer1 = nn.Linear(feature_nums, 1)

    def forward(self, x):
        return self.layer1(x)


def load_array(data_arrays, batch_size, is_train=True):
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)


def Linear_Regression_pytorch(filepath, feature_nums, batch_size, learning_rate, epochs):
    df = pd.read_csv(filepath)   # named `df`, not `data`, to avoid shadowing torch.utils.data
    features = torch.tensor(df.iloc[:, 1:].values, dtype=torch.float32)
    labels = torch.tensor(df.iloc[:, 0].values, dtype=torch.float32)
    labels = torch.reshape(labels, (-1, 1))

    data_iter = load_array((features, labels), batch_size)

    model = LinearRegression(feature_nums)
    mse_loss = nn.MSELoss()
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    Loss = []

    for i in range(epochs):
        for X, y in data_iter:
            y_hat = model(X)            # forward pass
            loss = mse_loss(y_hat, y)   # compute the loss
            optimizer.zero_grad()       # clear accumulated gradients
            loss.backward()             # backpropagate to compute gradients
            optimizer.step()            # update the weights
        print(i, loss.item(), sep='\t')
        Loss.append(loss.item())        # record the last batch loss of each epoch

    for parameter in model.parameters():
        print(parameter)

    plt.plot(Loss)
    plt.title("Loss")
    plt.show()


if __name__ == "__main__":
    Linear_Regression_pytorch(filepath="data.csv",
                              feature_nums=2,
                              batch_size=10,
                              learning_rate=0.05,
                              epochs=1000)
```