
Pytorch MLP

2022-07-05 11:43:00 My abyss, my abyss

1、 Hidden layer

The input layer and the hidden layer are fully connected.
The hidden layer and the output layer are fully connected.
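As a concrete picture of these two fully connected layers, the forward pass of a single-hidden-layer MLP can be written out by hand. A minimal sketch, assuming a 784-dimensional input, 256 hidden units, and 10 output classes (the same sizes used by the model later in this post):

import torch

X = torch.randn(2, 784)                      # a toy batch of two flattened 28x28 images
W1, b1 = torch.randn(784, 256) * 0.01, torch.zeros(256)
W2, b2 = torch.randn(256, 10) * 0.01, torch.zeros(10)

H = torch.relu(X @ W1 + b1)                  # hidden layer: affine transform + nonlinearity
O = H @ W2 + b2                              # output layer: affine transform to 10 logits
print(H.shape, O.shape)                      # torch.Size([2, 256]) torch.Size([2, 10])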

2、 Activation function

An activation function (activation function) decides whether a neuron should be activated by computing a weighted sum of its inputs and adding a bias; it maps the input signal to an output through a differentiable operation.
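PyTorch exposes the common activation functions as elementwise tensor operations. A minimal sketch comparing them on a small range of inputs:

import torch

x = torch.linspace(-3.0, 3.0, 7)
print(torch.relu(x))      # max(0, x): zeroes out negative inputs
print(torch.sigmoid(x))   # 1 / (1 + exp(-x)): squashes inputs into (0, 1)
print(torch.tanh(x))      # hyperbolic tangent: squashes inputs into (-1, 1)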

3、 Summary

The multilayer perceptron adds one or more fully connected hidden layers between the input layer and the output layer, and transforms the output of each hidden layer with an activation function. This is what allows the multilayer perceptron to fit nonlinear functions.
Common activation functions include the ReLU, sigmoid, and tanh functions.

import torch
from torch import nn
from d2l import torch as d2l

# MLP: flatten 28x28 Fashion-MNIST images into 784 features, one hidden layer
# of 256 units with ReLU, and a 10-class output layer
net = nn.Sequential(nn.Flatten(),
                    nn.Linear(784, 256),
                    nn.ReLU(),
                    nn.Linear(256, 10))

# Initialize every linear layer's weights from a normal distribution (std 0.01)
def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, std=0.01)

net.apply(init_weights);

batch_size, lr, num_epochs = 256, 0.1, 10
# Per-example cross-entropy loss (reduction='none'); the d2l training loop averages it
loss = nn.CrossEntropyLoss(reduction='none')
trainer = torch.optim.SGD(net.parameters(), lr=lr)

# Load Fashion-MNIST and train for 10 epochs with the d2l helper
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
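After training, predictions on a few test images can be visualized as well; a minimal sketch, assuming the same d2l release that provides train_ch3 also includes the book's predict_ch3 helper:

d2l.predict_ch3(net, test_iter)   # show a handful of test images with true and predicted labels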


Original article

Copyright notice
This article was written by [My abyss, my abyss]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/186/202207051134019214.html