PyTorch machine learning GPU usage (conversion from CPU to GPU)
2022-06-11 04:43:00 【saya1009】
torch.cuda.is_available() reports whether a GPU is available; a return value of True means the GPU can be used.
torch.cuda.device_count() returns the number of usable GPUs.
The nvidia-smi command shows the GPU configuration information.
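A minimal sketch that queries these from Python (the printed values depend on your machine):

import torch

print(torch.cuda.is_available())    # True if a usable GPU is present
print(torch.cuda.device_count())    # number of visible GPUs
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))    # name string of GPU 0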
What we transfer from memory to the GPU is generally tensors (the data we need) and models. For tensors (of type FloatTensor, LongTensor, and so on), simply calling .to(device) or .cuda() is enough.
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# or: device = torch.device("cuda:0")
device1 = torch.device("cuda:1")

for batch_idx, (img, label) in enumerate(train_loader):
    img = img.to(device)
    label = label.to(device)
For the model, in the same way, use .to(device) or .cuda() to put the network into GPU memory.
# Instantiate the network
model = Net()
model.to(device)    # use the GPU with index 0
# or: model.to(device1)    # use the GPU with index 1
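Putting the two halves together, here is a minimal end-to-end training sketch; Net and train_loader stand in for your own model and DataLoader, and the loss and optimizer choices are purely illustrative:

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = Net().to(device)    # Net: hypothetical model class
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for batch_idx, (img, label) in enumerate(train_loader):    # train_loader: hypothetical DataLoader
    img, label = img.to(device), label.to(device)    # inputs must be on the same device as the model
    optimizer.zero_grad()
    loss = criterion(model(img), label)
    loss.backward()
    optimizer.step()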
Multi-GPU acceleration
Here we cover the single-host, multi-GPU case. With multiple GPUs on a single machine, DataParallel is mainly used rather than DistributedDataParallel; the latter is generally meant for GPUs spread across multiple hosts, although it can also be used on a single machine with multiple GPUs.
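For reference, a minimal single-machine DistributedDataParallel sketch, assuming the script is launched with torchrun and Net is a hypothetical model class:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# assumes launching with: torchrun --nproc_per_node=<num_gpus> train.py
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = Net().to(local_rank)    # Net: hypothetical model class
ddp_model = DDP(model, device_ids=[local_rank])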
There are several ways to run multi-card training; the prerequisite, of course, is having two or more GPUs.
The simplest is to pass the model directly into torch.nn.DataParallel, as in the following code:
# wrap the model
net = torch.nn.DataParallel(model)
In this case, all available graphics cards are used by default.
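A brief usage sketch of this default mode (img stands for an input batch): in the forward pass, DataParallel splits the batch along dimension 0, runs a replica on each GPU, and gathers the outputs on the first device.

net = torch.nn.DataParallel(model)    # replicates the model across all visible GPUs
net.to("cuda:0")    # the parameters must live on the first device
output = net(img)    # the batch is split along dim 0; outputs are gathered on cuda:0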
If your machine has many graphics cards but you only want to use some of them, say only the four GPUs with indices 0, 1, 2, and 3, you can do it as follows:
# suppose there are 4 GPUs, with the ids set as follows
device_ids = [0, 1, 2, 3]
# for the data: move the inputs to the first GPU in the list
input_data = input_data.to(device=device_ids[0])
# for the model: wrap it and restrict it to those devices
net = torch.nn.DataParallel(model, device_ids=device_ids)
net.to(device_ids[0])
Alternatively:
import os

os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(map(str, [0, 1, 2, 3]))
net = torch.nn.DataParallel(model)
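One caveat with this approach: CUDA_VISIBLE_DEVICES only takes effect if it is set before CUDA is first initialized in the process, and the visible GPUs are renumbered starting from 0. A minimal sketch:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"    # set before any CUDA call

import torch

model = Net()    # Net: hypothetical model class
net = torch.nn.DataParallel(model)    # now sees only the four listed GPUs,
net.to("cuda:0")                      # remapped to cuda:0 .. cuda:3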