
PyTorch machine learning GPU usage (moving from CPU to GPU)

2022-06-11 04:43:00 saya1009

torch.cuda.is_available() returns whether a GPU is available; a return value of True means a GPU can be used.
torch.cuda.device_count() returns the number of GPUs available for use.
nvidia-smi (run from the command line) shows the GPU configuration information.
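As a quick check, the two Python calls above can be used like this (a minimal sketch; the printed values depend on the machine):

import torch

# True if at least one CUDA-capable GPU is visible to PyTorch
print(torch.cuda.is_available())
# Number of GPUs PyTorch can use
print(torch.cuda.device_count())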

Moving data from memory to the GPU generally applies to tensors (the data we need) and to models. For tensors (of type FloatTensor, LongTensor, etc.), simply call .to(device) or .cuda():
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# or device = torch.device("cuda:0")
device1 = torch.device("cuda:1")
for batch_idx, (img, label) in enumerate(train_loader):
    img = img.to(device)
    label = label.to(device)

For a model, the approach is the same: use .to(device) or .cuda() to put the network on the GPU. Note that for a tensor, .to(device) returns a new tensor and must be assigned back (as above), whereas for a model it moves the parameters in place.
# Instantiate the network
model = Net()
model.to(device)       # use the GPU with index 0
# or model.to(device1) # use the GPU with index 1
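Putting the two together, a minimal training-step sketch might look like this (Net, train_loader, and the loss/optimizer choices are assumed placeholders, not part of the original code):

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = Net()     # Net is assumed to be defined elsewhere
model.to(device)  # moves the parameters in place

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for batch_idx, (img, label) in enumerate(train_loader):
    # tensor .to() returns a copy, so reassign
    img = img.to(device)
    label = label.to(device)
    optimizer.zero_grad()
    output = model(img)  # forward pass runs on the GPU
    loss = criterion(output, label)
    loss.backward()
    optimizer.step()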

Multi-GPU acceleration
This section covers the single-host, multi-GPU case. Single-machine multi-GPU training mainly uses DataParallel rather than DistributedDataParallel; the latter is generally intended for multi-host, multi-GPU setups, although it can also be used on a single machine with multiple GPUs.
There are several ways to run multi-GPU training; the prerequisite, of course, is having two or more GPUs.
The simplest is to pass the model directly to torch.nn.DataParallel, as in the following code:

# Wrap the model
net = torch.nn.DataParallel(model)

By default, this uses all of the machine's GPUs.
If the machine has several GPUs but you only want to use some of them, say only the four GPUs numbered 0, 1, 2, and 3, you can do it as follows:
# Suppose there are 4 GPUs, with the following ids
device_ids = [0, 1, 2, 3]
# For the data: put the inputs on the first listed device
input_data = input_data.to(device_ids[0])
# For the model: pass device_ids so only these GPUs are used
net = torch.nn.DataParallel(model, device_ids=device_ids)
net.to(device)

Alternatively:

import os

# Must be set before CUDA is initialized, i.e. before any CUDA call
os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(map(str, [0, 1, 2, 3]))
net = torch.nn.DataParallel(model)
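For reference, here is a minimal self-contained sketch of DataParallel in action (the toy model and random input are assumptions for illustration; it expects at least two visible GPUs):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # a toy model; any nn.Module works

device = torch.device("cuda:0")
net = torch.nn.DataParallel(model, device_ids=[0, 1])
net.to(device)

# DataParallel splits the batch across the GPUs, runs a model
# replica on each, and gathers the outputs on device_ids[0]
x = torch.randn(64, 10, device=device)
y = net(x)
print(y.shape)  # torch.Size([64, 2])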