
Activation functions and their gradients

2022-07-05 22:43:00 dying_star

sigmoid Activation function
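
A minimal, illustrative sketch (the input range is an arbitrary choice) of torch.sigmoid and its gradient, which is sigmoid(x) * (1 - sigmoid(x)) and vanishes where the function saturates:

import torch

a = torch.linspace(-100, 100, 10, requires_grad=True)
s = torch.sigmoid(a)        # squashes inputs into (0, 1)
s.sum().backward()          # populate a.grad
print(s)                    # saturates to 0 or 1 at the extremes
print(a.grad)               # equals s * (1 - s); nearly 0 where s saturates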

tanh Activation function
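
A minimal, illustrative sketch: tanh maps inputs into (-1, 1), and its derivative is 1 - tanh(x)^2:

import torch

a = torch.linspace(-1, 1, 10, requires_grad=True)
t = torch.tanh(a)           # output lies in (-1, 1)
t.sum().backward()
print(t)
print(a.grad)               # equals 1 - t**2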

relu Activation function  
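
A minimal, illustrative sketch: ReLU passes positive inputs through unchanged and zeros out negative ones, so its gradient is 1 for x > 0 and 0 for x < 0:

import torch
from torch.nn import functional as F

a = torch.linspace(-1, 1, 10, requires_grad=True)
r = F.relu(a)               # torch.relu(a) is equivalent
r.sum().backward()
print(r)
print(a.grad)               # 1 where a > 0, 0 elsewhere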

softmax Activation function

Here p_i = exp(z_i) / (exp(z_1) + ... + exp(z_C)), where z_i is the independent variable (the logit of class i). Each output satisfies 0 <= p_i <= 1, and p_1 + p_2 + ... + p_i + ... + p_C = 1.
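
A minimal, illustrative sketch (the logit values are arbitrary) showing that F.softmax turns logits into probabilities that sum to 1:

import torch
from torch.nn import functional as F

z = torch.tensor([2.0, 1.0, 0.1])      # logits z_i
p = F.softmax(z, dim=0)                # p_i = exp(z_i) / sum_k exp(z_k)
print(p)                               # roughly tensor([0.6590, 0.2424, 0.0986])
print(p.sum())                         # 1.0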

Functions for computing gradients

autograd.grad() returns the gradient information directly
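
A minimal, illustrative sketch of torch.autograd.grad (the random input is arbitrary), here used to get one row of the softmax Jacobian, dp_1/dz_j = p_1 * (1 - p_1) for j = 1 and -p_1 * p_j otherwise:

import torch
from torch.nn import functional as F

z = torch.rand(3, requires_grad=True)
p = F.softmax(z, dim=0)
# returns the gradient directly as a tensor; retain_graph=True keeps the graph
# alive so grad could be called again on another output element
(dp1_dz,) = torch.autograd.grad(p[1], [z], retain_graph=True)
print(dp1_dz)               # entry 1 is positive, the others are negative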

.backward() computes the gradients and attaches them to each variable's .grad attribute
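
A minimal, illustrative sketch of .backward(): after the call, the gradient is stored in the .grad attribute of every leaf tensor created with requires_grad=True:

import torch

x = torch.ones(3, requires_grad=True)
loss = (x ** 2).sum()       # a scalar loss
loss.backward()             # fills x.grad instead of returning the gradient
print(x.grad)               # d(sum(x^2))/dx = 2x -> tensor([2., 2., 2.])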

Perceptron gradient derivation
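
As a short sketch of the derivation (assuming the setup of the code below: o_i = sigmoid(sum_j w_ij * x_j), targets t_i = 1, and MSE loss L = (1/C) * sum_i (o_i - t_i)^2 with C = 2 outputs), the chain rule gives dL/dw_ij = (2/C) * (o_i - t_i) * o_i * (1 - o_i) * x_j, which is what .backward() computes numerically here.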

import torch
from torch.nn import functional as F

x = torch.randn(1, 10)                      # one sample with 10 features
w = torch.randn(2, 10, requires_grad=True)  # mark w as a tensor that needs gradient information
o = torch.sigmoid(x @ w.t())                # weighted sum, then sigmoid activation
loss = F.mse_loss(torch.ones(1, 2), o)      # mean-squared-error loss against all-ones targets
loss.backward()                             # compute the gradient of loss w.r.t. w
print(w.grad)

The output gradient of w:

tensor([[ 0.0066, -0.0056, -0.0027,  0.0118, -0.0050,  0.0314,  0.0100, -0.0274,
         -0.0006, -0.0448],
        [ 0.0182, -0.0155, -0.0075,  0.0326, -0.0138,  0.0871,  0.0277, -0.0760,
         -0.0017, -0.1241]])


Copyright notice
This article was written by [dying_star]. Please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/02/202202140406278635.html