Deep Learning Framework PyTorch: Rapid Development and Practice, Chapter 4
2022-08-02 14:18:00 【weixin_50862344】
Errors encountered

First error

This error was most likely related to my network connection being interrupted earlier; after deleting the data folder and running the script again, it worked.

Second error

Still the same old problem: change data[0] to item().
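For reference, a minimal sketch of that change, assuming the error came from reading the loss value during training (the tensor here is only a stand-in):

import torch

loss = torch.tensor(0.1234)   # stand-in for the training loss (a 0-dim tensor)
# old: print(loss.data[0])    # indexing a 0-dim tensor fails on recent PyTorch
print(loss.item())            # .item() returns the plain Python number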
Feedforward neural network
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable
import torch.utils.data as Data
import matplotlib.pyplot as plt

# Hyper Parameters
input_size = 784
hidden_size = 500
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# MNIST Dataset
train_dataset = dsets.MNIST(root='./data',
                            train=True,
                            transform=transforms.ToTensor(),
                            download=True)
test_dataset = dsets.MNIST(root='./data',
                           train=False,
                           transform=transforms.ToTensor())

# Data Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)
test_y = test_dataset.test_labels

# Neural Network Model (1 hidden layer)
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

net = Net(input_size, hidden_size, num_classes)

# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)

# Train the Model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Convert torch tensor to Variable
        images = Variable(images.view(-1, 28*28))
        labels = Variable(labels)
        # Forward + Backward + Optimize
        optimizer.zero_grad()  # zero the gradient buffer
        outputs = net(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        if (i+1) % 100 == 0:
            print('Epoch [%d/%d], Step [%d/%d], Loss: %.4f'
                  % (epoch+1, num_epochs, i+1, len(train_dataset)//batch_size, loss.item()))

# Test the Model
correct = 0
total = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28*28))
    outputs = net(images)
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))

# Show a few training samples
for i in range(1, 4):
    plt.imshow(train_dataset.train_data[i].numpy(), cmap='gray')
    plt.title('%i' % train_dataset.train_labels[i])
    plt.show()

# Save the Model
torch.save(net.state_dict(), 'model.pkl')

# Predict the first 20 test images and compare with the ground-truth labels in test_y
sample_x = Variable(test_dataset.test_data[:20].view(-1, 28*28).float() / 255.0)
test_output = net(sample_x)
pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
print('prediction number', pred_y)
print('real number', test_y[:20].numpy())
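As a follow-up, a short sketch (mine, not the book's) of how the saved state_dict could be loaded back for inference; it assumes the Net class and hyperparameters from the script above:

# Rebuild the architecture, then load the saved parameters from model.pkl
net2 = Net(input_size, hidden_size, num_classes)
net2.load_state_dict(torch.load('model.pkl'))
net2.eval()  # switch to evaluation mode before running inference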
- torch.max returns both the maximum values and their indices.
- To use torch.optim you must first construct an Optimizer object:
(1) It must be given an iterable containing the parameters to optimize (these must be Variable objects), e.g.
net.parameters()
(2) Optimizer options can be specified globally, or set individually per parameter group (see the optimizer sketch after this list).
- torchvision
(1) torchvision.datasets contains common datasets (p78).
(2) torchvision.models contains pretrained model architectures:
from torchvision import models
# load a pretrained model
resnet18 = models.resnet18(pretrained=True)
# or one with random weights
resnet18 = models.resnet18()
(3) Image transformations (torchvision.transforms); see the transforms sketch after this list.
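A minimal sketch of point (2) above, not taken from the book; the two-layer model here is only a stand-in to show global options versus per-parameter-group options:

import torch
import torch.nn as nn

# A tiny stand-in model with two named sub-modules
model = nn.Sequential()
model.add_module('fc1', nn.Linear(784, 500))
model.add_module('fc2', nn.Linear(500, 10))

# Global options: one learning rate for all parameters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Per-parameter-group options: each dict is one group; unspecified options fall back to the defaults
optimizer = torch.optim.Adam([
    {'params': model.fc1.parameters()},              # uses the default lr=1e-3
    {'params': model.fc2.parameters(), 'lr': 1e-4},  # overrides lr for this group
], lr=1e-3)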
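For point (3), a small sketch (mine, not the book's) of how torchvision.transforms is typically composed and passed to a dataset; the Normalize values are illustrative, not taken from the book:

import torchvision.transforms as transforms
import torchvision.datasets as dsets

# Chain several image transforms into one callable
transform = transforms.Compose([
    transforms.ToTensor(),                       # PIL image -> FloatTensor in [0, 1]
    transforms.Normalize((0.1307,), (0.3081,)),  # per-channel mean and std (illustrative)
])
train_dataset = dsets.MNIST(root='./data', train=True, transform=transform, download=True)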
Custom ConvNet
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jan 1 22:03:51 2018
@author: pc
"""
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class MNISTConvNet(nn.Module):
    def __init__(self):
        super(MNISTConvNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(10, 20, 5)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, input):
        x = self.pool1(F.relu(self.conv1(input)))
        x = self.pool2(F.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)  # flatten the 20x4x4 feature maps to 320 features
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

net = MNISTConvNet()
print(net)
input = Variable(torch.randn(1, 1, 28, 28))
out = net(input)
print(out.size())
(2) torch.nn
Layer structures:
(1) Convolution
(2) Pooling
Functions:
Located in the torch.nn.functional package (p86); see the sketch below.
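A small sketch (mine, not the book's) contrasting the two styles: stateful layer modules from torch.nn versus the stateless functions in torch.nn.functional.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(1, 1, 28, 28))

# Module style: layers are objects that hold their own parameters
conv = nn.Conv2d(1, 10, kernel_size=5)
pool = nn.MaxPool2d(2, 2)
y1 = pool(conv(x))

# Functional style: plain functions; relu and max_pool2d carry no parameters of their own
y2 = F.max_pool2d(F.relu(conv(x)), 2)
print(y1.size(), y2.size())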