
Solving: RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB

2022-07-08 02:20:00 Programming newbird

Three methods commonly suggested online, plus the one that finally worked for me

Method 1:

Simply reduce the batch size.

In the cfg configuration file, set batchsize=1. In general, search the cfg file for batch or batchsize, lower the value, and run again, similar to the change shown in the screenshot below.

 

[Screenshot: the batch/batchsize setting lowered in the cfg file]
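The exact cfg file and key name depend on the training framework (the screenshot above is from a repo where the key is batch or batchsize). As a generic PyTorch sketch of the same idea, the batch size is just the batch_size argument of the DataLoader; the dataset shapes and names below are made up for illustration:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for the real one; shapes are illustrative only.
images = torch.randn(16, 3, 640, 640)
labels = torch.zeros(16, dtype=torch.long)
dataset = TensorDataset(images, labels)

# Peak GPU memory per training step scales with the batch size, so lowering
# it is the simplest fix for CUDA out of memory; batch_size=1 is the safest
# starting point while debugging.
loader = DataLoader(dataset, batch_size=1, shuffle=True)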

 

Method 2:

If the method above does not solve the problem and you do not want to change the batch size, consider the following approach instead.

Do not compute gradients:

PS: wrap the line of code that raises the error in the statement below, so that no gradients are tracked:

with torch.no_grad():

 

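For a fuller picture of where that line goes, here is a minimal, self-contained sketch; the tiny model and batch are stand-ins for your own code. Gradients should only be skipped for code that never calls backward(), such as validation or inference:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in model and batch; replace with your own network and data.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).to(device)
batch = torch.randn(4, 3, 640, 640, device=device)

# torch.no_grad() stops autograd from storing intermediate activations, which
# are usually the bulk of GPU memory in a forward pass. Use it only for
# inference or validation: a loss computed under no_grad() cannot be
# backpropagated, so do not wrap the training forward pass in it.
with torch.no_grad():
    outputs = model(batch)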

 

Method 3:

Free cached GPU memory:

PS: directly above the line of code that raises the error, add the following two lines to release memory that is no longer needed:

if hasattr(torch.cuda, 'empty_cache'):
    torch.cuda.empty_cache()
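For context, empty_cache() can only hand back blocks in PyTorch's caching allocator that no live tensor still references, so it helps most when you also drop references you no longer need. Here is a small sketch of how the two lines fit into a script, with optional diagnostics; the variable names in the first comment are hypothetical:

import torch

# First delete tensors you no longer need, e.g. del loss, del big_feature_map
# (hypothetical names), so their memory becomes releasable.

if hasattr(torch.cuda, 'empty_cache'):
    # Returns cached but unused blocks to the CUDA driver. It cannot free
    # memory that live tensors still occupy, so on its own it mainly reduces
    # fragmentation rather than total usage.
    torch.cuda.empty_cache()

# Optional: check how much memory tensors still occupy versus what PyTorch
# has reserved from the driver.
if torch.cuda.is_available():
    print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
    print(f"reserved:  {torch.cuda.memory_reserved() / 1024**2:.1f} MiB")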

Method 4: My solution

I did not end up using any of the methods above. The main reason is that, as a beginner, I did not know where to make those changes, so after reading a lot of solutions online I tried the method below. Training ran, and the GPU out-of-memory error was gone.

Solution: make img-size smaller

[Screenshot: the img-size setting, originally [640, 640], lowered in the training configuration]

I reduced the original [640, 640] setting as shown in the figure above, and the problem was solved.
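Why this helps: activation memory grows roughly with image height times width, so halving img-size cuts it to about a quarter. The exact option name depends on the training script you use, so the sketch below only illustrates the scaling and an on-the-fly downscale in plain PyTorch; all numbers are examples:

import torch
import torch.nn.functional as F

def input_batch_mib(batch_size, channels, height, width, bytes_per_elem=4):
    # Rough float32 memory of the input batch alone, ignoring activations.
    return batch_size * channels * height * width * bytes_per_elem / 1024 ** 2

print(input_batch_mib(16, 3, 640, 640))  # 75.0 MiB
print(input_batch_mib(16, 3, 320, 320))  # 18.75 MiB, a quarter of the above

# Downscaling a batch on the fly (illustrative; normally you would just set a
# smaller img-size in the training configuration, as in the screenshot above).
batch = torch.randn(16, 3, 640, 640)
smaller = F.interpolate(batch, size=(320, 320), mode="bilinear", align_corners=False)
print(smaller.shape)  # torch.Size([16, 3, 320, 320])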

Original site

Copyright notice
This article was written by [Programming newbird]. Please keep the original link when reposting. Thank you.
https://yzsam.com/2022/02/202202130540018329.html