Bilibili Liu Er (刘二大人): Softmax Classifier and MNIST Implementation - Lecture 9
2022-07-06 05:42:00 【Ning Ranye】
Series articles:
Table of contents
Softmax classifier
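As a quick recap: softmax maps the network's K raw output scores (logits) z_1, ..., z_K to a probability distribution over the K classes (here K = 10, one per digit):

P(y = i \mid x) = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}, \qquad i = 1, \dots, K

Every output is positive and the outputs sum to 1, so the largest logit receives the largest probability.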


Loss function: cross entropy
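For a one-hot target y and predicted distribution \hat{y} = \mathrm{softmax}(z), the cross-entropy loss of a single sample is

\text{loss} = -\sum_{i=1}^{K} y_i \log \hat{y}_i

PyTorch's torch.nn.CrossEntropyLoss fuses LogSoftmax and NLLLoss, so the network should output raw logits, with no softmax (and no other activation) on the final layer.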

NumPy implementation of the cross-entropy loss
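A minimal NumPy sketch of this computation for a single three-class sample (the numbers are purely illustrative):

import numpy as np

y = np.array([1, 0, 0])                 # one-hot target: the true class is 0
z = np.array([0.2, 0.1, -0.1])          # raw scores (logits) from the model
y_pred = np.exp(z) / np.exp(z).sum()    # softmax
loss = (-y * np.log(y_pred)).sum()      # cross-entropy
print(loss)                             # about 0.9729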

PyTorch implementation of the cross-entropy loss
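The same value can be reproduced with PyTorch's built-in loss, which expects raw logits and an integer class index (not a one-hot vector) as the target:

import torch

criterion = torch.nn.CrossEntropyLoss()
z = torch.tensor([[0.2, 0.1, -0.1]])    # logits, shape (batch=1, num_classes=3)
y = torch.tensor([0])                   # target class index
loss = criterion(z, y)
print(loss.item())                      # about 0.9729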

MNIST implementation




Imports
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
# Use relu()
import torch.nn.functional as F
# Construct optimizer
import torch.optim as optim
1- Prepare the data
# 1- Prepare the data
batch_size = 64
# Convert each PIL Image to a Tensor and normalize with the MNIST mean/std
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST(root='./datasets/mnist', train=True,
                               transform=transform,
                               download=False)
test_dataset = datasets.MNIST(root='./datasets/mnist', train=False,
                              transform=transform,
                              download=False)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size,
                          shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size,
                         shuffle=False)
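A quick optional sanity check (it assumes the MNIST files already exist under ./datasets/mnist, since download=False): each batch yielded by train_loader is a pair of tensors, a batch of images and a batch of integer labels.

images, labels = next(iter(train_loader))
print(images.shape)    # torch.Size([64, 1, 28, 28])
print(labels.shape)    # torch.Size([64])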
2- Design the network model
# 2- Design the network model
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.lay1 = torch.nn.Linear(784, 512)
        self.lay2 = torch.nn.Linear(512, 256)
        self.lay3 = torch.nn.Linear(256, 128)
        self.lay4 = torch.nn.Linear(128, 64)
        self.lay5 = torch.nn.Linear(64, 10)

    def forward(self, x):
        # Flatten each 1x28x28 image into a 784-dimensional vector
        x = x.view(-1, 784)
        x = F.relu(self.lay1(x))
        x = F.relu(self.lay2(x))
        x = F.relu(self.lay3(x))
        x = F.relu(self.lay4(x))
        # No activation on the last layer: CrossEntropyLoss expects raw logits
        return self.lay5(x)
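A quick shape check on a random dummy batch (illustrative only) confirms that forward returns one 10-dimensional logit vector per image:

dummy = torch.randn(4, 1, 28, 28)
logits = Net()(dummy)
print(logits.shape)    # torch.Size([4, 10])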
3- Build the model, loss function, and optimizer
# 3- Construct the model, loss function and optimizer
model = Net()
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.005, momentum=0.5)
4- Training and testing
# 4- Training and testing
def train(epoch):
    running_loss = 0.0
    # enumerate(train_loader, 0): batch_idx counts from 0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()
        # forward + backward + update
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # Report the average loss every 300 mini-batches
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0

def test():
    correct = 0
    total = 0
    # Testing builds no computation graph: no gradients, no back-propagation
    with torch.no_grad():
        # data is a list of length 2:
        # the inputs are data[0], the targets are data[1]
        for data in test_loader:
            images, label = data
            outputs = model(images)
            # _ is the maximum value, predicted is the index of that maximum
            _, predicted = torch.max(outputs.data, dim=1)
            total += label.size(0)
            correct += (predicted == label).sum().item()
    print('Accuracy on test set: %d %%' % (100 * correct / total))

if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()
About (predicted == label).sum():
This compares each element of predicted with the label at the same position; equal positions give True and different positions give False. .sum() then counts the number of True values, i.e. the number of correct predictions in the batch.
About inputs, target = data:
Each item yielded by the DataLoader is a list of two tensors, so this is ordinary sequence unpacking: the batch of images goes to inputs and the batch of class labels goes to target.
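A tiny example with made-up tensors (torch is already imported above) illustrating both points:

data = [torch.randn(2, 1, 28, 28), torch.tensor([3, 7])]   # what one DataLoader batch looks like
inputs, target = data                                       # inputs: image batch, target: label batch

predicted = torch.tensor([2, 0, 9, 1])
label = torch.tensor([2, 3, 9, 1])
print(predicted == label)                  # tensor([ True, False,  True,  True])
print((predicted == label).sum().item())   # 3 correct predictions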
Complete code
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
# Use relu()
import torch.nn.functional as F
# Construct optimizer
import torch.optim as optim

# 1- Prepare the data
batch_size = 64
# Convert each PIL Image to a Tensor and normalize with the MNIST mean/std
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST(root='./datasets/mnist', train=True,
                               transform=transform,
                               download=False)
test_dataset = datasets.MNIST(root='./datasets/mnist', train=False,
                              transform=transform,
                              download=False)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size,
                          shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size,
                         shuffle=False)

# 2- Design the network model
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.lay1 = torch.nn.Linear(784, 512)
        self.lay2 = torch.nn.Linear(512, 256)
        self.lay3 = torch.nn.Linear(256, 128)
        self.lay4 = torch.nn.Linear(128, 64)
        self.lay5 = torch.nn.Linear(64, 10)

    def forward(self, x):
        # Flatten each 1x28x28 image into a 784-dimensional vector
        x = x.view(-1, 784)
        x = F.relu(self.lay1(x))
        x = F.relu(self.lay2(x))
        x = F.relu(self.lay3(x))
        x = F.relu(self.lay4(x))
        # No activation on the last layer: CrossEntropyLoss expects raw logits
        return self.lay5(x)

# 3- Construct the model, loss function and optimizer
model = Net()
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.005, momentum=0.5)

# 4- Training and testing
def train(epoch):
    running_loss = 0.0
    # enumerate(train_loader, 0): batch_idx counts from 0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()
        # forward + backward + update
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # Report the average loss every 300 mini-batches
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0

def test():
    correct = 0
    total = 0
    # Testing builds no computation graph: no gradients, no back-propagation
    with torch.no_grad():
        for data in test_loader:
            images, label = data
            outputs = model(images)
            # _ is the maximum value, predicted is the index of that maximum
            _, predicted = torch.max(outputs.data, dim=1)
            total += label.size(0)
            correct += (predicted == label).sum().item()
    print('Accuracy on test set: %d %%' % (100 * correct / total))

if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()