Deep Learning with Pytorch-Train A Classifier
Deep Learning with Pytorch: A 60 Minute Blitz
Training a classifier
We have seen how to define a neural network, compute the loss, and update the network's weights. Now you might be thinking:
What about data?
Generally, when you have to deal with image, text, audio, or video data, you can use standard Python packages to load the data into a numpy array, and then convert that array into a torch.*Tensor.
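A minimal sketch of that conversion (the array below is random data standing in for something loaded from a file):

import numpy as np
import torch

# stand-in for data loaded by a standard Python package (e.g. an image as H x W x C)
arr = np.random.rand(32, 32, 3).astype(np.float32)
tensor = torch.from_numpy(arr)     # shares memory with the numpy array
print(tensor.shape, tensor.dtype)  # torch.Size([32, 32, 3]) torch.float32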
Specifically for vision, there is a package called torchvision. It can directly load common datasets such as ImageNet, CIFAR10, and MNIST, and it also provides common image transformers. This is a great convenience and avoids writing boilerplate loading code.
In this tutorial we will use the CIFAR10 dataset. It has ten classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. The images in CIFAR10 have 3 channels and are 32x32 pixels in size.
Training an image classifier
We will do the following steps in order:
- 1. Load and normalize the CIFAR10 training and test datasets using torchvision
- 2. Define a convolutional neural network (CNN)
- 3. Define a loss function
- 4. Train the network on the training data
- 5. Test the network on the test data
1. Load and normalize CIFAR10
Using torchvision, loading CIFAR10 is extremely easy.
import torch
import torchvision
import torchvision.transforms as tfs

# The output of torchvision datasets are PILImage images in the range [0, 1].
# We transform them to Tensors normalized to the range [-1, 1].
transform = tfs.Compose([tfs.ToTensor(),
                         tfs.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
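A quick sanity check (a sketch, not part of the original tutorial): assuming the download above succeeded, each batch should have shape [4, 3, 32, 32], and the normalized values should lie in [-1, 1], because Normalize computes (x - 0.5) / 0.5 on inputs in [0, 1].

images, labels = next(iter(trainloader))
print(images.shape)                              # torch.Size([4, 3, 32, 32])
print(images.min().item(), images.max().item())  # both within [-1.0, 1.0]
print(' '.join(classes[labels[j]] for j in range(4)))  # four random class names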
Let's show some of the training images.
import matplotlib.pyplot as plt
import numpy as np

# function to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

2. Define a convolutional neural network
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)   # 3 input image channels, 6 output channels, 5x5 convolution kernel
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # an affine operation: y = Wx + b
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))  # max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)       # if the size is a square you can specify a single number
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
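As a quick shape check (a sketch, not from the original post), you can feed one random CIFAR10-sized input through the untrained network. This confirms the 16 * 5 * 5 flatten size, since the spatial size goes 32 -> 28 -> 14 -> 10 -> 5 under the two 5x5 convolutions and 2x2 poolings, and the output has one score per class.

dummy = torch.randn(1, 3, 32, 32)  # one fake 3-channel 32x32 image
out = net(dummy)
print(out.shape)                   # torch.Size([1, 10])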
3. Define a loss function and optimizer
We use a classification cross-entropy loss and SGD with momentum.
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
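A small aside (a hedged sketch, not in the original post): nn.CrossEntropyLoss expects raw, unnormalized scores of shape [batch, num_classes] together with integer class labels, and applies LogSoftmax and NLLLoss internally, which is why the network's last layer has no softmax.

dummy_logits = torch.randn(4, 10)             # raw scores for a batch of 4
dummy_labels = torch.tensor([3, 8, 8, 0])     # integer class indices
print(criterion(dummy_logits, dummy_labels))  # a scalar loss tensor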
4. Train the network
This is where things start to get interesting. We simply loop over our data iterator, feed the inputs to the network, and optimize.
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training.')
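Optionally (not part of the original post), the trained weights can be saved so the network does not need to be retrained every time; the file name here is just an example.

PATH = './cifar_net.pth'            # example file name
torch.save(net.state_dict(), PATH)  # save only the learned parameters
# later: net.load_state_dict(torch.load(PATH))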
5. Test the network on the test data
We have trained the network for two passes over the training dataset, but we need to check whether the network has actually learned anything.
We will check this by comparing the class labels that the network predicts against the ground truth. If a prediction is correct, we add the sample to the list of correct predictions.
First, let's display some images from the test set.
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
# GroundTruth: cat ship ship plane

Now let's see what the neural network thinks these examples are.
outputs = net(images)
# The outputs are energies for the 10 classes.
# The higher the energy for a class, the more the network
# thinks that the image is of that particular class.
# So, let's get the index of the highest energy:
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
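If probabilities are easier to read than raw energies (a small sketch, not in the original post), a softmax over the class dimension turns each row of scores into values that sum to 1.

import torch.nn.functional as F
probs = F.softmax(outputs, dim=1)  # shape [4, 10], each row sums to 1
print(probs.max(dim=1))            # top probability and its class index per image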
# Training output from the two epochs
[1, 2000] loss: 2.195
[1, 4000] loss: 1.789
[1, 6000] loss: 1.633
[1, 8000] loss: 1.534
[1, 10000] loss: 1.511
[1, 12000] loss: 1.433
[2, 2000] loss: 1.387
[2, 4000] loss: 1.368
[2, 6000] loss: 1.338
[2, 8000] loss: 1.307
[2, 10000] loss: 1.273
[2, 12000] loss: 1.281
Finished Training.
Predicted: horse bird plane truck
The training results seem pretty good.
Let's look at how the network performs on the whole test dataset.

correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%'
      % (100 * correct / total))

That looks better than chance: picking one of the ten classes at random would give only 10% accuracy.
So which classes did the network perform well on, and which did it perform poorly on?
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' %
          (classes[i], 100 * class_correct[i] / class_total[i]))