[Deep Learning] PyTorch 1.12 released: official support for GPU acceleration on Apple M1 chips and many bug fixes
2022-07-06 21:11:00 【Demeanor 78】
Report from Almost Human

PyTorch 1.12 has been officially released; anyone who has not yet updated can do so now.

Just a few months after the launch of PyTorch 1.11, PyTorch 1.12 is here. The release is made up of 3124 commits from 433 contributors since version 1.11, bringing significant improvements and many bug fixes.

The most talked-about change in the new release is probably PyTorch 1.12's support for the Apple M1 chip.

As early as May this year, the PyTorch team officially announced support for GPU-accelerated PyTorch model training on M1 Macs. Previously, PyTorch training on a Mac could only use the CPU, but with the release of PyTorch 1.12, developers and researchers can take advantage of the Apple GPU to greatly speed up model training.
Accelerated PyTorch training on Mac
PyTorch GPU training acceleration is implemented using Apple's Metal Performance Shaders (MPS) as a backend. The MPS backend extends the PyTorch framework, providing scripts and capabilities for setting up and running operations on Mac. MPS fine-tunes its kernels using the unique characteristics of each Metal GPU family to optimize compute performance. The new device maps machine learning computation graphs and primitives onto the MPS Graph framework and the tuned kernels provided by MPS.

Every Mac equipped with Apple silicon has a unified memory architecture, giving the GPU direct access to the full memory store. According to the PyTorch team, this makes the Mac an excellent platform for machine learning, enabling users to train larger networks or batch sizes locally. It reduces the costs associated with cloud-based development or the need for additional local GPUs. The unified memory architecture also lowers data-retrieval latency, improving end-to-end performance.
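Using the new backend from Python is a one-line device change. A minimal sketch (the "mps" device string and the torch.backends.mps availability check ship with 1.12); the code falls back to the CPU on machines without an Apple GPU:

```python
import torch

# Select the Apple-GPU ("mps") device when available, otherwise fall back to CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# Move a small model and a batch of inputs onto the selected device,
# exactly as one would with a CUDA device.
model = torch.nn.Linear(8, 2).to(device)
x = torch.randn(4, 8, device=device)
y = model(x)
print(y.shape)  # torch.Size([4, 2])
```

The rest of a training script is unchanged; only the device selection differs from the CUDA workflow.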
As shown below, GPU acceleration doubles training performance compared with the CPU baseline:

With the GPU, training and evaluation are faster than on the CPU

The chart above shows Apple's April 2022 test results on a Mac Studio system equipped with an Apple M1 Ultra (20-core CPU, 64-core GPU), 128GB of RAM, and a 2TB SSD. The models tested were ResNet50 (batch size = 128), HuggingFace BERT (batch size = 64), and VGG16 (batch size = 64). The performance tests were run on a specific computer system and reflect the approximate performance of the Mac Studio.
PyTorch 1.12 Other new features
Frontend API: TorchArrow

PyTorch has released a new beta library for users to try: TorchArrow, a machine-learning preprocessing library for batch data processing. It combines high performance with a Pandas-style, easy-to-use API to speed up users' preprocessing workflows and development.
(Beta) Complex32 and complex convolutions in PyTorch

PyTorch already natively supports complex numbers, complex autograd, complex modules, and a large number of complex operations (including linear algebra and the fast Fourier transform). Many libraries, including torchaudio and ESPNet, already make use of complex numbers, and PyTorch 1.12 extends the complex functionality further with complex convolutions and the experimental complex32 data type, which enables half-precision FFT operations. Because of a bug in the CUDA 11.3 package, users who work with complex numbers are officially advised to use the CUDA 11.6 package.
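A small sketch of the two additions mentioned above, the experimental torch.chalf (complex32) dtype and a convolution over complex-valued tensors, both runnable on CPU (the half-precision FFT path is the CUDA use case referred to above):

```python
import torch

# The experimental half-precision complex dtype: torch.complex32, alias torch.chalf.
z = torch.tensor([1 + 2j, 3 - 1j], dtype=torch.chalf)
print(z.dtype)  # torch.complex32

# Complex convolution: convolution layers accept complex dtypes in 1.12.
conv = torch.nn.Conv1d(1, 1, kernel_size=3, dtype=torch.cfloat)
x = torch.randn(1, 1, 8, dtype=torch.cfloat)
y = conv(x)
print(y.dtype, y.shape)  # torch.complex64 torch.Size([1, 1, 6])
```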
(Beta) Forward-mode automatic differentiation

Forward-mode AD allows directional derivatives (or, equivalently, Jacobian-vector products) to be computed eagerly during the forward pass. PyTorch 1.12 significantly improves the operator coverage of forward-mode AD.
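A minimal example of forward-mode AD via the torch.autograd.forward_ad API, computing a Jacobian-vector product in a single forward pass and checking it against the analytic derivative:

```python
import torch
import torch.autograd.forward_ad as fwAD

# Directional derivative of f(x) = sum(sin(x)) at `primal`, in direction `tangent`.
primal = torch.randn(3)
tangent = torch.randn(3)

with fwAD.dual_level():
    # Attach the tangent to the input; the tangent is carried through the forward pass.
    dual = fwAD.make_dual(primal, tangent)
    out = dual.sin().sum()
    jvp = fwAD.unpack_dual(out).tangent

# Analytic JVP for comparison: cos(x) · v
expected = (primal.cos() * tangent).sum()
print(torch.allclose(jvp, expected))  # True
```

Unlike reverse-mode backward(), no graph needs to be retained: the derivative is available as soon as the forward pass finishes.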
BetterTransformer
PyTorch now supports a CPU and GPU fastpath implementation ("BetterTransformer") of the Transformer encoder modules, namely TransformerEncoder, TransformerEncoderLayer, and MultiHeadAttention (MHA). In the new release, BetterTransformer is up to 2x faster in many common scenarios, depending on the model and input characteristics. The new API is compatible with the previous PyTorch Transformer API: existing models are accelerated if they meet the fastpath execution requirements, and models trained with previous PyTorch versions can still be loaded.
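A short sketch of an encoder configured to be eligible for fastpath execution: eval mode, inference mode, and batch_first=True (the full eligibility conditions are documented by PyTorch; this is just one illustrative configuration):

```python
import torch

# Standard encoder modules; no API change is needed to benefit from BetterTransformer.
layer = torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = torch.nn.TransformerEncoder(layer, num_layers=2)
encoder.eval()  # fastpath applies to inference

src = torch.randn(8, 10, 64)  # (batch, seq, d_model)
with torch.inference_mode():
    out = encoder(src)
print(out.shape)  # torch.Size([8, 10, 64])
```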
In addition, the new release includes the following updates:

Modules: a new beta feature for module computation is the functional API. The new functional_call() API gives users full control over the parameters used in a module's computation;
TorchData: improved compatibility between DataPipe and DataLoader. PyTorch now supports AWSSDK-based DataPipes. DataLoader2 has been introduced as a way to manage the interaction between DataPipes and other APIs and backends;
nvFuser: nvFuser is the new, faster default fuser for compiling to CUDA devices;
Matrix multiplication precision: by default, matrix multiplication on float32 data types now runs in full-precision mode, which is slower but produces more consistent results;
Bfloat16: the lower-precision data type offers faster computation, and 1.12 brings new improvements for the bfloat16 data type;
FSDP API: released as a prototype in 1.11, the FSDP API reaches beta in 1.12, with a number of improvements added.
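The functional_call() API mentioned above lives under torch.nn.utils.stateless in 1.12. A minimal sketch that runs a module with substitute parameters, without mutating the module's registered ones:

```python
import torch
from torch.nn.utils.stateless import functional_call

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

# Build an override dict: run the module as if all parameters were zero.
override = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
out = functional_call(model, override, (x,))
print(out)  # tensor([[0., 0.]])

# The module's own parameters are untouched by the functional call.
```

This pattern is useful for techniques like hypernetworks or per-sample gradient computation, where parameters are supplied externally rather than stored on the module.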
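The float32 matmul precision default can be inspected and overridden with torch.set_float32_matmul_precision, added in 1.12; a short sketch:

```python
import torch

# The 1.12 default: full-precision ("highest") float32 matrix multiplication.
print(torch.get_float32_matmul_precision())  # highest

# Opt back into faster, lower-precision matmuls (e.g. TF32 on supported GPUs).
torch.set_float32_matmul_precision("high")
print(torch.get_float32_matmul_precision())  # high

# Restore the default.
torch.set_float32_matmul_precision("highest")
```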
See more :https://pytorch.org/blog/pytorch-1.12-released/