PyTorch distributed parallel processing
2022-08-05 06:48:00 【ProfSnail】
The official documentation for PyTorch 1.9 clearly states that nn.DataParallel (and hand-rolled multiprocessing) is no longer recommended; nn.parallel.DistributedDataParallel should be used instead. Even when there is only a single GPU, nn.parallel.DistributedDataParallel is still the recommended choice. The reason given in the official documentation is:
"The difference between DistributedDataParallel and DataParallel is: DistributedDataParallel uses multiprocessing where a process is created for each GPU, while DataParallel uses multithreading. By using multiprocessing, each GPU has its dedicated process, this avoids the performance overhead caused by GIL of Python interpreter."
The gist is that DistributedDataParallel is preferred because it dedicates one process to each GPU, whereas DataParallel is discouraged because it is multi-threaded and can therefore incur performance overhead from the Python interpreter's GIL.
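To make the process-per-GPU model concrete, here is a minimal single-machine DDP sketch. It is an illustration under assumptions, not code from the original post: the toy nn.Linear model, the random batch, and the port number are all placeholders.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # Each process joins the same communication group; one process per GPU.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"  # placeholder port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    # Build the model on this process's GPU and wrap it in DDP; DDP
    # all-reduces gradients across processes during backward().
    model = nn.Linear(10, 1).to(rank)  # toy model, stands in for a real one
    ddp_model = DDP(model, device_ids=[rank])

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    inputs = torch.randn(32, 10).to(rank)   # dummy batch
    targets = torch.randn(32, 1).to(rank)

    loss = nn.functional.mse_loss(ddp_model(inputs), targets)
    loss.backward()   # gradients are synchronized across all GPUs here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # one process per available GPU
    mp.spawn(train, args=(world_size,), nprocs=world_size)

Note that each process runs the whole train() function independently; DDP only steps in to synchronize gradients, so the training loop itself looks like ordinary single-GPU code.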
Another basics document mentions that with torch.multiprocessing or torch.nn.DataParallel, the user must explicitly create an independent copy of the main training script for each process, which is inconvenient. The launcher shown below removes that burden.
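PyTorch ships a launcher that creates those per-process copies automatically. A hedged example, assuming a hypothetical train.py written to read its rank from the launcher rather than calling mp.spawn itself:

# Spawns one process per GPU for you; each copy receives its rank from the
# launcher. In PyTorch 1.9 the launcher is torch.distributed.launch; newer
# releases provide torchrun as its successor.
python -m torch.distributed.launch --nproc_per_node=4 train.py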