Image recognition and detection -- Notes
2022-07-03 07:27:00 【Deer holding grass】
1. Variable
In Torch, a Variable is a container that holds values that will change during computation (the basket). The value inside keeps changing, and that value is a Tensor (the egg).
import torch
from torch.autograd import Variable  # Variable lives in torch.autograd
# prepare the egg (a tensor)
tensor = torch.FloatTensor([[1,2],[3,4]])
# put the egg into the basket; requires_grad controls whether this Variable
# participates in error backpropagation, i.e. whether gradients are computed for it
variable = Variable(tensor, requires_grad=True)
print("Tensor:\n" + str(tensor))
print("\n")
print("Variable:\n" + str(variable))
2. Variable computation and gradients
t_out = torch.mean(tensor*tensor) # x^2
v_out = torch.mean(variable*variable) # x^2
print(t_out)
print(v_out)
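For reference, both expressions compute the mean of the element-wise squares: mean(1, 4, 9, 16) = 7.5, so t_out and v_out print the same value.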
Up to now we cannot see any difference. But always remember: when a Variable computes, it silently builds up a huge system step by step behind the scenes, called a computational graph. What is this graph for? It connects all the computation steps (nodes), so that when the error is finally propagated backwards, the modification amounts (gradients) of all Variables are computed in one pass. A plain tensor does not have this ability.
v_out = torch.mean(variable*variable) adds one step to the computational graph, and this step plays its part when the error is propagated backwards. Let's take an example:
v_out.backward()  # simulate backpropagation of the error from v_out
# v_out = torch.mean(variable*variable)   is the definition above
# v_out = 1/4 * sum(variable*variable)    is the actual mathematical formula of v_out
# the gradient with respect to variable is therefore:
# d(v_out)/d(variable) = 1/4 * 2 * variable = variable/2
print("variable.grad:\n")
print(variable.grad)  # gradient of the initial Variable
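With the tensor above, variable/2 works out to [[0.5, 1.0], [1.5, 2.0]], which is what variable.grad should print. (Calling .backward() on a plain tensor like t_out, created without requires_grad, would raise an error instead.)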
3. Getting the data out of a Variable
Printing variable directly only outputs the data in Variable form, which in many cases is not usable (for example, when plotting with plt), so we need to convert it, either into tensor form (.data) or numpy form (.data.numpy()).
print(variable) # Variable form
print(variable.data) # tensor form
print(variable.data.numpy()) # numpy form
4. Activation functions
Common activation functions: ReLU, Sigmoid, Tanh, Softplus.
import torch
import torch.nn.functional as F  # the activation functions live here
from torch.autograd import Variable
x = torch.linspace(-5, 5, 200)  # x data (tensor), shape=(200,)
x = Variable(x)
print(x.shape)
# compute the activation values to plot (convert to numpy for matplotlib)
x_np = x.data.numpy()
y_relu = torch.relu(x).data.numpy()
y_sigmoid = torch.sigmoid(x).data.numpy()
y_tanh = torch.tanh(x).data.numpy()
y_softplus = F.softplus(x).data.numpy()
import matplotlib.pyplot as plt
plt.figure(1, figsize=(8, 6))
plt.subplot(221)
plt.plot(x_np, y_relu, c='red', label='relu')
plt.ylim((-1, 5))
plt.legend(loc='best')
plt.subplot(222)
plt.plot(x_np, y_sigmoid, c='red', label='sigmoid')
plt.ylim((-0.2, 1.2))
plt.legend(loc='best')
plt.subplot(223)
plt.plot(x_np, y_tanh, c='red', label='tanh')
plt.ylim((-1.2, 1.2))
plt.legend(loc='best')
plt.subplot(224)
plt.plot(x_np, y_softplus, c='red', label='softplus')
plt.ylim((-0.2, 6))
plt.legend(loc='best')
plt.show()
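To make the curves concrete, here is a quick numeric spot check of the same four functions on a few sample points (a minimal sketch using the tensor-level equivalents):
import torch
import torch.nn.functional as F
t = torch.tensor([-2.0, 0.0, 2.0])
print(torch.relu(t))     # tensor([0., 0., 2.])
print(torch.sigmoid(t))  # tensor([0.1192, 0.5000, 0.8808])
print(torch.tanh(t))     # tensor([-0.9640, 0.0000, 0.9640])
print(F.softplus(t))     # softplus(x) = log(1 + e^x): tensor([0.1269, 0.6931, 2.1269])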
5. Regression
import torch
import torch.nn.functional as F  # the activation functions live here
import matplotlib.pyplot as plt
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1) # x data (tensor), shape=(100, 1)
y = x.pow(2) + 0.2*torch.rand(x.size()) # noisy y data (tensor), shape=(100, 1)
# drawing
plt.scatter(x.data.numpy(), y.data.numpy())
plt.show()
torch.unsqueeze(input, dim, out=None) expands dimensions: it returns a new tensor with a dimension of size 1 inserted at the given position in the input. Note: the returned tensor shares memory with the input tensor, so changing the contents of one changes the other.
torch.linspace(start, end, steps=100, out=None) returns a 1-D tensor of steps points evenly spaced over the interval from start to end. The length of the output tensor is determined by steps.
Parameters: start (float) - the starting point of the interval; end (float) - the end point of the interval; steps (int) - the number of samples generated between start and end; out (Tensor, optional) - the result tensor.
torch.rand(*sizes, out=None) → Tensor returns a tensor filled with random numbers drawn from the uniform distribution on [0, 1); the shape is defined by the variable-argument sizes.
Parameters: sizes (int...) - a sequence of integers defining the output shape; out (Tensor, optional) - the result tensor.
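A short sketch tying the three calls together (shapes noted in the comments; the specific sizes are just for illustration):
import torch
x = torch.linspace(-1, 1, 5)    # shape (5,): tensor([-1.0, -0.5, 0.0, 0.5, 1.0])
x2 = torch.unsqueeze(x, dim=1)  # shape (5, 1): inserts a size-1 dimension at dim=1
noise = torch.rand(x2.size())   # shape (5, 1): uniform samples from [0, 1)
print(x.shape, x2.shape, noise.shape)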
6. Building neural networks
class Net(torch.nn.Module):  # inherit from torch.nn.Module
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()  # call the parent's __init__
        # define the form of each layer
        self.hidden = torch.nn.Linear(n_feature, n_hidden)   # hidden layer, linear output
        self.predict = torch.nn.Linear(n_hidden, n_output)   # output layer, linear output

    def forward(self, x):  # overrides the forward function from Module
        # forward-propagate the input; the network computes the output value
        x = F.relu(self.hidden(x))  # activation applied to the hidden layer's linear output
        x = self.predict(x)         # output value
        return x
net = Net(n_feature=1, n_hidden=10, n_output=1)
print(net)
net2 = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1)
)
print(net2)
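Both definitions describe the same architecture, Linear(1, 10) → ReLU → Linear(10, 1); only the printed module names differ. A quick sanity check that the two nets accept the same input (a minimal sketch; the outputs differ numerically because the weights are randomly initialized):
sample = torch.unsqueeze(torch.linspace(-1, 1, 5), dim=1)  # shape (5, 1)
print(net(sample).shape)   # torch.Size([5, 1])
print(net2(sample).shape)  # torch.Size([5, 1])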
7. Training the network
# the optimizer is the training tool
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)  # pass in all of net's parameters and the learning rate
loss_func = torch.nn.MSELoss()  # error between predicted and true values: mean squared error, mean((prediction - y)^2)
epochs = 200
plt.ion()  # turn on interactive mode so the plot updates during training
plt.show()
for t in range(epochs):
    prediction = net(x)  # feed the training data x to net; it outputs the predicted values
    loss = loss_func(prediction, y)  # compute the error between prediction and target
    optimizer.zero_grad()  # clear the gradients left over from the previous step
    loss.backward()        # backpropagate the error and compute the parameter gradients
    optimizer.step()       # apply the gradients to net's parameters
    # plot and show the learning process every 5 steps
    if t % 5 == 0:
        plt.cla()
        plt.scatter(x.data.numpy(), y.data.numpy())
        plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=5)
        plt.text(0.5, 0, 'Loss=%.4f' % loss.data.numpy(),
                 fontdict={'size': 20, 'color': 'red'})
        plt.pause(0.1)
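After the loop finishes, interactive mode can be switched off and the trained network tried on a new input (a minimal sketch; test_x is a made-up point for illustration):
plt.ioff()  # turn interactive mode back off
plt.show()
with torch.no_grad():  # no gradients needed for evaluation
    test_x = torch.tensor([[0.5]])
    print(net(test_x))  # should be close to 0.5**2 = 0.25 after training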