Sequence model
2022-07-04 08:41:00 【Doraemon AI dream】
- In a time-series model, the current observation depends on previously observed data.
- Autoregressive models predict the future from the model's own past observations.
- A Markov model assumes the current value depends only on the few most recent observations, which simplifies the model.
- Latent-variable models use a hidden state to summarize the history (see the formulas after this list).
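A rough sketch of the three formulations in standard notation (not taken verbatim from the original post): the autoregressive model conditions on the whole history, a Markov model of order τ truncates it, and the latent autoregressive model summarizes the history in a hidden state h_t:

P(x_t \mid x_{t-1}, \dots, x_1)            % autoregressive: condition on the whole history
P(x_t \mid x_{t-1}, \dots, x_{t-\tau})     % Markov assumption of order \tau
h_t = g(h_{t-1}, x_{t-1}), \quad x_t \sim P(x_t \mid h_t)   % latent autoregressive model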
Import related packages
import torch
from torch import nn
from d2l import torch as d2l
import matplotlib.pyplot as plt
Use a sine function with additive noise to generate the sequence data
T = 1000  # total number of time steps
time = torch.arange(1, T + 1, dtype=torch.float32)        # time axis, reused for plotting below
x = torch.sin(0.01 * time) + torch.normal(0, 0.2, (T,))   # series values: sine plus Gaussian noise
plt.figure(figsize=(8, 4))
plt.xlim((0, 1000))
plt.ylim((-1.5, 1.5))
plt.xlabel('time')
plt.ylabel('x')
plt.plot(time, x)
plt.show()

Train the model to predict the next time step (one-step-ahead prediction)
# PyTorch data iterator (a local equivalent of d2l.load_array)
from torch.utils.data import TensorDataset, DataLoader

def load_array(data_arrays, batch_size, is_train=True):
    dataset = TensorDataset(*data_arrays)
    return DataLoader(dataset, batch_size, shuffle=is_train)
# Turn the sequence into "feature-label" pairs: each label x_t is predicted
# from the tau previous values [x_{t-tau}, ..., x_{t-1}]
tau = 4
features = torch.zeros((T - tau, tau))
for i in range(tau):
    features[:, i] = x[i:T - tau + i]
labels = x[tau:].reshape((-1, 1))

batch_size, n_train = 16, 600
# Only the first n_train examples are used for training
train_iter = d2l.load_array((features[:n_train], labels[:n_train]),
                            batch_size, is_train=True)
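As a quick sanity check (illustrative only, not part of the original code), a hypothetical toy sequence makes the windowing explicit: with tau = 4, each row of features holds four consecutive values and the label is the value that follows them.

# Illustrative only: a toy sequence showing how the windowing pairs features with labels
toy = torch.arange(1.0, 8.0)  # [1, 2, 3, 4, 5, 6, 7]
toy_features = torch.stack([toy[i:len(toy) - tau + i] for i in range(tau)], dim=1)
toy_labels = toy[tau:]
# toy_features[0] is [1, 2, 3, 4] and its label toy_labels[0] is 5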
# Initialize network weights (Xavier initialization for linear layers)
def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_uniform_(m.weight)
# MLP: a simple two-layer fully connected network
def get_net():
    net = nn.Sequential(
        nn.Linear(4, 10),
        nn.ReLU(),
        nn.Linear(10, 1)
    )
    net.apply(init_weights)
    return net
# Evaluate the average loss of the model on the given dataset
def evaluate_loss(net, data_iter, loss):
    metric = d2l.Accumulator(2)  # sum of losses, number of examples
    for X, y in data_iter:
        out = net(X)
        y = y.reshape(out.shape)
        l = loss(out, y)
        metric.add(l.sum(), l.numel())
    return metric[0] / metric[1]
# Training loop
def train(net, train_iter, loss, epochs, lr):
    optimizer = torch.optim.Adam(net.parameters(), lr)
    for epoch in range(epochs):
        for X, y in train_iter:
            optimizer.zero_grad()
            l = loss(net(X), y)
            l.sum().backward()
            optimizer.step()
        print(f'epoch {epoch + 1}, '
              f'loss: {evaluate_loss(net, train_iter, loss):f}')
loss = nn.MSELoss(reduction='none')
net = get_net()
train(net, train_iter, loss, 10, 0.01)

# One-step-ahead predictions over the whole sequence
onestep_preds = net(features)
d2l.plot([time, time[tau:]],
         [x.detach().numpy(), onestep_preds.detach().numpy()], 'time', 'x',
         legend=['data', '1-step preds'], xlim=[1, 1000], figsize=(6, 3))
d2l.plt.show()

Multi-step prediction
To predict more than one step ahead, the model's own earlier predictions are fed back in as inputs, as sketched below.
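With tau = 4 as above, the recursion looks roughly like this (standard notation, a sketch rather than the post's own derivation):

\hat{x}_{t+1} = f(x_{t-3}, x_{t-2}, x_{t-1}, x_t)
\hat{x}_{t+2} = f(x_{t-2}, x_{t-1}, x_t, \hat{x}_{t+1})
\hat{x}_{t+3} = f(x_{t-1}, x_t, \hat{x}_{t+1}, \hat{x}_{t+2})
\dots

After tau steps, the inputs consist entirely of earlier predictions.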
max_steps = 64
features = torch.zeros((T - tau - max_steps + 1, tau + max_steps))
# Columns 0..tau-1 hold the observed values
for i in range(tau):
    features[:, i] = x[i:i + T - tau - max_steps + 1]
# Columns tau..tau+max_steps-1 hold the (i - tau + 1)-step-ahead predictions
for i in range(tau, tau + max_steps):
    features[:, i] = net(features[:, i - tau:i]).reshape(-1)

steps = (1, 4, 16, 64)
d2l.plot([time[tau + i - 1:T - max_steps + i] for i in steps],
         [features[:, (tau + i - 1)].detach().numpy() for i in steps], 'time',
         'x', legend=[f'{i}-step preds' for i in steps], xlim=[5, 1000],
         figsize=(6, 3))
d2l.plt.show()

Summary:
- For causal models, where time moves forward, estimating in the forward direction is usually much easier than estimating in reverse.
- For a sequence observed up to time step t, its predicted output at time step t + k is the "k-step-ahead prediction". As the prediction horizon k grows, errors accumulate rapidly and prediction quality degrades quickly (a rough sketch of why is given below).
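A rough, hand-wavy sketch of the error build-up (assuming, for illustration only, that the model makes a per-step error of about \bar{\epsilon} and amplifies perturbations of its inputs by a factor L):

\epsilon_1 \approx \bar{\epsilon}, \qquad
\epsilon_{k+1} \approx \bar{\epsilon} + L\,\epsilon_k
\;\Rightarrow\;
\epsilon_k \approx \bar{\epsilon}\,\frac{L^k - 1}{L - 1},

which grows quickly with k whenever L > 1.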