Image recognition and detection -- Notes
1. Variable
In Torch, a Variable is a container for values that change during computation (think of it as a basket). The value inside keeps changing, and that value is a Tensor (the egg in the basket).
import torch
from torch.autograd import Variable  # the Variable module in torch
# prepare the eggs
tensor = torch.FloatTensor([[1,2],[3,4]])
# put the eggs in the basket; requires_grad controls whether this Variable
# participates in error backpropagation, i.e. whether its gradient is computed
variable = Variable(tensor, requires_grad=True)
print("Tensor:\n" + str(tensor))
print("\n")
print("Variable:\n" + str(variable))
2. Variable computation and gradients
t_out = torch.mean(tensor*tensor) # x^2
v_out = torch.mean(variable*variable) # x^2
print(t_out)
print(v_out)
So far we see no difference between the two, but remember: whenever a Variable takes part in a computation, it quietly builds up a record in the background, step by step, called a computational graph. What is this graph for? It connects all the computation steps (nodes), so that when the error is propagated backward, the gradients of all Variables are worked out in one pass; a plain Tensor has no such ability.
v_out = torch.mean(variable*variable) adds one computation step to the graph, and this step takes part in the backward propagation of the error. Let's look at an example:
v_out.backward()  # backpropagate the error from v_out
# v_out = torch.mean(variable*variable)   is the definition above
# v_out = 1/4 * sum(variable*variable)    is its actual mathematical form
# so the gradient with respect to variable is:
# d(v_out)/d(variable) = 1/4 * 2 * variable = variable/2
print("variable.grad:\n")
print(variable.grad)  # the gradient of variable
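A quick sanity check (a minimal sketch, continuing from the code above): the gradient that autograd computes should match the hand-derived formula variable/2.

print(variable.grad)  # gradient computed by autograd
print(variable / 2)   # the hand-derived d(v_out)/d(variable); the values match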
3. Getting the data out of a Variable
Calling print(variable) directly only outputs the data in Variable form, which in many cases is not directly usable (for example, when plotting with plt), so we need to convert it to tensor form first.
print(variable) # Variable form
print(variable.data) # tensor form
print(variable.data.numpy()) # numpy form
4. Activation functions
Common activation functions: ReLU, Sigmoid, Tanh, Softplus.
import torch
import torch.nn.functional as F  # the activation functions live here
from torch.autograd import Variable
x = torch.linspace(-5, 5, 200)  # x data (tensor), shape=(200,)
x = Variable(x)
print(x.shape)
x_np = x.data.numpy()  # numpy form, for plotting
# compute the four activations used in the plots below
y_relu = F.relu(x).data.numpy()
y_sigmoid = torch.sigmoid(x).data.numpy()
y_tanh = torch.tanh(x).data.numpy()
y_softplus = F.softplus(x).data.numpy()
import matplotlib.pyplot as plt
plt.figure(1, figsize=(8, 6))
plt.subplot(221)
plt.plot(x_np, y_relu, c='red', label='relu')
plt.ylim((-1, 5))
plt.legend(loc='best')
plt.subplot(222)
plt.plot(x_np, y_sigmoid, c='red', label='sigmoid')
plt.ylim((-0.2, 1.2))
plt.legend(loc='best')
plt.subplot(223)
plt.plot(x_np, y_tanh, c='red', label='tanh')
plt.ylim((-1.2, 1.2))
plt.legend(loc='best')
plt.subplot(224)
plt.plot(x_np, y_softplus, c='red', label='softplus')
plt.ylim((-0.2, 6))
plt.legend(loc='best')
plt.show()
5. Regression
import torch
import torch.nn.functional as F  # the activation functions live here
import matplotlib.pyplot as plt
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1) # x data (tensor), shape=(100, 1)
y = x.pow(2) + 0.2*torch.rand(x.size()) # noisy y data (tensor), shape=(100, 1)
# plot the noisy training data
plt.scatter(x.data.numpy(), y.data.numpy())
plt.show()
torch.unsqueeze(input, dim, out=None) expands dimensions: it returns a new tensor with a dimension of size 1 inserted at the given position. Note: the returned tensor shares memory with the input tensor, so changing the contents of one changes the other.
torch.linspace(start, end, steps=100, out=None) returns a 1-D tensor of steps points evenly spaced over the interval from start to end. The length of the output tensor is determined by steps. Parameters: start (float) – the starting point of the interval; end (float) – the end point of the interval; steps (int) – the number of samples generated between start and end; out (Tensor, optional) – the result tensor.
torch.rand(*sizes, out=None) → Tensor returns a tensor filled with random numbers drawn from the uniform distribution on [0, 1); the shape is defined by the variable argument sizes. Parameters: sizes (int...) – a sequence of integers defining the shape of the output; out (Tensor, optional) – the result tensor.
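A minimal sketch that exercises all three calls together (the shapes here are chosen arbitrarily for illustration):

import torch
v = torch.linspace(-1, 1, 5)      # 1-D tensor: 5 evenly spaced points from -1 to 1
col = torch.unsqueeze(v, dim=1)   # insert a size-1 dimension: shape (5,) -> (5, 1)
print(v.shape, col.shape)         # torch.Size([5]) torch.Size([5, 1])
noise = torch.rand(5, 1)          # uniform samples from [0, 1), shape (5, 1)
print((noise >= 0).all(), (noise < 1).all())  # both True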
6. Building neural networks
class Net(torch.nn.Module):  # inherit from torch's Module
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()  # call Module's __init__
        # define the form of each layer
        self.hidden = torch.nn.Linear(n_feature, n_hidden)  # hidden layer, linear output
        self.predict = torch.nn.Linear(n_hidden, n_output)  # output layer, linear output

    def forward(self, x):  # this overrides Module's forward function
        # forward-propagate the input; the network computes the output value
        x = F.relu(self.hidden(x))  # activation applied to the hidden layer's linear output
        x = self.predict(x)  # output value
        return x
net = Net(n_feature=1, n_hidden=10, n_output=1)
print(net)
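# The same network can also be assembled more quickly with torch.nn.Sequential: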
net2 = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1)
)
print(net2)
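The two networks compute the same function; they only print differently: Sequential auto-names its layers (0, 1, 2) and lists the ReLU as a layer of its own, while Net applies relu functionally inside forward(), so it does not show up in print(net).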
7. Training the network
# an optimizer is the tool that trains the network
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)  # pass in all of net's parameters and the learning rate
loss_func = torch.nn.MSELoss()  # error between prediction and ground truth (mean squared error)
epochs = 200
plt.ion()  # turn on interactive mode so the plot updates during training
plt.show()
for t in range(epochs):
    prediction = net(x)  # feed the training data x to net; get the predicted values
    loss = loss_func(prediction, y)  # compute the error between prediction and y
    optimizer.zero_grad()  # clear the gradients left over from the previous step
    loss.backward()  # backpropagate the error; compute the parameter updates
    optimizer.step()  # apply the updates to net's parameters
    # plot and show the learning process
    if t % 5 == 0:
        plt.cla()
        plt.scatter(x.data.numpy(), y.data.numpy())
        plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=5)
        plt.text(0.5, 0, 'Loss=%.4f' % loss.data.numpy(),
                 fontdict={'size': 20, 'color': 'red'})
        plt.pause(0.1)
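For reference, MSELoss with its default settings is simply the mean of the squared element-wise differences. A minimal check, reusing loss_func and the Variable import from above (the tensors here are made up for illustration):

p = Variable(torch.FloatTensor([1.0, 2.0]))
t = Variable(torch.FloatTensor([0.0, 0.0]))
print(loss_func(p, t))         # ((1-0)^2 + (2-0)^2) / 2 = 2.5
print(((p - t) ** 2).mean())   # same value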