PyTorch distributed parallel processing
2022-08-05 06:48:00 【ProfSnail】
In the official documentation for PyTorch 1.9, it is clearly stated that nn.DataParallel and hand-rolled multiprocessing are no longer recommended; nn.parallel.DistributedDataParallel is recommended instead. Even with only a single GPU, nn.parallel.DistributedDataParallel is still the recommended choice. The reason given in the official documentation is:
The difference between DistributedDataParallel and DataParallel is: DistributedDataParallel uses multiprocessing where a process is created for each GPU, while DataParallel uses multithreading. By using multiprocessing, each GPU has its dedicated process, this avoids the performance overhead caused by GIL of Python interpreter.
The general idea is that DistributedDataParallel is better because it assigns a dedicated process to each GPU, whereas DataParallel uses multithreading within a single process and can therefore incur performance overhead from the GIL of the Python interpreter, which is why it is not recommended.
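To make the "one process per GPU" idea concrete, the following is a minimal sketch of a per-process DistributedDataParallel worker on a single machine with the NCCL backend. The function name run_worker, the placeholder model, batch, and hyperparameters are illustrative assumptions, not code from the original article or the official documentation.

```python
# A minimal sketch of DistributedDataParallel: one process per GPU, NCCL backend.
# run_worker, the model, and the data below are placeholders for illustration.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def run_worker(rank: int, world_size: int) -> None:
    # Every process joins the same process group, identified by its rank.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    # Each process owns exactly one GPU, as the quoted documentation describes.
    torch.cuda.set_device(rank)
    model = nn.Linear(10, 1).to(rank)          # placeholder model
    ddp_model = DDP(model, device_ids=[rank])  # gradients are synchronized across processes

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    inputs = torch.randn(32, 10, device=rank)  # placeholder batch
    targets = torch.randn(32, 1, device=rank)

    loss = nn.functional.mse_loss(ddp_model(inputs), targets)
    loss.backward()   # gradient all-reduce happens during backward
    optimizer.step()

    dist.destroy_process_group()
```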
Another basic tutorial document mentions that with torch.multiprocessing or torch.nn.DataParallel, the user must explicitly create an independent copy of the main training script for each process, which is not convenient.
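Concretely, explicit launching with torch.multiprocessing looks like the sketch below: the user spawns one process per visible GPU, each running its own copy of the worker. This reuses the illustrative run_worker function from the previous sketch and is only an assumption about how such a launcher might look; utilities such as torchrun can perform this launching step instead.

```python
# A hedged sketch of explicitly spawning one worker process per GPU.
# run_worker is the illustrative function from the previous sketch.
import torch
import torch.multiprocessing as mp

if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # one process per visible GPU
    mp.spawn(run_worker, args=(world_size,), nprocs=world_size, join=True)
```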