
Problems in loading and saving PyTorch trained models

2022-07-06 08:33:00 MAR-Sky

Train on the GPU, load on the CPU

torch.save(model.state_dict(), PATH)  # save after finishing training on the GPU

# load the model on the CPU
model.load_state_dict(torch.load(PATH, map_location='cpu'))

Train on the CPU, load on the GPU

torch.save(model.state_dict(), PATH)  # save after finishing training on the CPU

# load the model on the GPU
model.load_state_dict(torch.load(PATH, map_location='cuda:0'))
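Note that map_location='cuda:0' only tells torch.load where to place the saved tensors; after loading the state dict you still move the model itself to the GPU before using it. A minimal sketch, assuming a placeholder model class MyModel and a checkpoint path PATH:

import torch

device = torch.device('cuda:0')

model = MyModel()                                    # MyModel is a placeholder for your own model class
state_dict = torch.load(PATH, map_location=device)   # map the saved tensors onto the GPU
model.load_state_dict(state_dict)
model.to(device)                                     # move the model's parameters and buffers to the GPU as well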

Things to pay attention to when using the loaded model

When the data is moved to the GPU, the model being trained must be moved to the GPU as well, as in the snippet below.

''' data_loader: a PyTorch DataLoader '''
for i, sample in enumerate(data_loader):   # iterate over the data batch by batch
    image, target = sample                 # each batch returns the inputs and the labels
    if CUDA:
        image = image.cuda()               # move the inputs to the GPU
        target = target.cuda()             # move the labels to the GPU
    # print(target.size)
    optimizer.zero_grad()                  # clear the gradients from the previous step
    output = mymodel(image)

mymodel.to(torch.device("cuda"))  # the model itself must also be moved to the GPU
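The same idea is often written in a device-agnostic way with a single torch.device object; a minimal sketch, assuming mymodel, optimizer and data_loader are defined as above:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
mymodel.to(device)                        # move the model once, before the training loop

for i, (image, target) in enumerate(data_loader):
    image = image.to(device)              # move each batch to the same device as the model
    target = target.to(device)
    optimizer.zero_grad()
    output = mymodel(image)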


Loading when training on multiple GPUs

Reference: https://blog.csdn.net/weixin_43794311/article/details/120940090

import torch.nn as nn
mymodel = nn.DataParallel(mymodel)  # wrap the model so it runs on multiple GPUs

PyTorch's nn module uses nn.DataParallel to run the model on multiple GPUs. Be aware that weights saved from a model wrapped in nn.DataParallel have an extra "module." prefix in front of every parameter key, compared with weights saved from a model that was not wrapped in nn.DataParallel. Whichever way the weights were saved, this mismatch can cause key errors the next time the model is loaded.
When the parameter keys in the saved weights carry the extra "module." prefix, the easiest fix is to wrap the model with nn.DataParallel before calling load_state_dict; the sketch below also shows the alternative of stripping the prefix from the keys.
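A minimal sketch of both approaches, assuming a placeholder model class MyModel and a checkpoint path PATH:

import torch
import torch.nn as nn

# Option 1: wrap the model first, so its parameter keys also start with "module."
mymodel = nn.DataParallel(MyModel())       # MyModel is a placeholder for your own model class
mymodel.load_state_dict(torch.load(PATH))

# Option 2: strip the "module." prefix from the saved keys and load into a plain model
state_dict = torch.load(PATH)
state_dict = {k.replace('module.', '', 1): v for k, v in state_dict.items()}
plain_model = MyModel()
plain_model.load_state_dict(state_dict)

A common way to avoid the problem in the first place is to save mymodel.module.state_dict() instead of mymodel.state_dict() when the model is wrapped in nn.DataParallel, so the saved keys carry no "module." prefix.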
