Spark AQE
2022-07-06 00:28:00 【The south wind knows what I mean】
Shortcomings of CBO
We all know that the earlier CBO optimizes the execution plan based on static statistics, and static statistics are not necessarily accurate. For example, the statistics recorded in the Hive catalog can hardly be trusted, and an execution plan built on top of inaccurate statistics is not necessarily optimal. AQE was born to solve this problem, and Spark keeps improving it with each release. The following user scenarios show how AQE works.
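To make the scenarios below concrete, here is a minimal Scala sketch of turning AQE on for a session (it is off by default in Spark 3.0 and on by default since 3.2); the app name is just a placeholder, and the later snippets assume this `spark` session:

```scala
import org.apache.spark.sql.SparkSession

// Build (or reuse) a session; "aqe-demo" is an illustrative name.
val spark = SparkSession.builder()
  .appName("aqe-demo")
  .getOrCreate()

// Master switch for Adaptive Query Execution.
spark.conf.set("spark.sql.adaptive.enabled", "true")
```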
Optimizing the Shuffle Process
Spark shuffles are arguably the biggest factor affecting query performance, and deciding how many reducers to configure for a shuffle has always been a headache for Spark users. Anyone who has set spark.sql.shuffle.partitions knows the dilemma: set it too high and you get a crowd of tiny tasks that hurt performance; set it too low and you get few tasks, each pulling a large amount of data, which brings GC pressure, spilling to disk, and even OOM. Many of us have run into executor lost, fetch failure and similar errors for exactly this reason. The underlying problem is that we do not know how much data there really is, and even if we did, the parameter is global: within one application, different queries, and even different stages of the same job, read very different amounts of shuffle data, so no single fixed value can fit them all.
AQE now provides a mechanism to adjust the number of shuffle partitions dynamically: when the different stages of different queries run, it looks at the actual amount of data written on the map side to decide how many reducers should process it. This way, no matter how the data volume changes, the data is balanced across an appropriate number of reducers, so no single reducer pulls too much data.
Note that AQE is not a silver bullet: it does not know in advance into how many pieces the map side should split the data, so in practice you can set spark.sql.shuffle.partitions to a larger value and let AQE coalesce the small partitions.
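A minimal sketch of the partition-coalescing knobs involved, assuming Spark 3.0+; the numeric values are illustrative, not recommendations:

```scala
// Let AQE merge small post-shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

// Start with a deliberately large number of shuffle partitions;
// AQE coalesces the small ones based on the real map-side output sizes.
spark.conf.set("spark.sql.shuffle.partitions", "2000")

// Advisory target size of each partition after coalescing.
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64MB")
```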
Adjusting the Join Strategy
Choosing the join type is an important part of cost-based optimization, because picking a broadcast join at the right moment avoids a shuffle entirely and greatly improves execution efficiency. But if the static statistics are wrong and a relation that is actually large (but looks small in the statistics) gets broadcast, the driver's memory can simply blow up.
In AQE, the decision is made from the real data at runtime: if one side of the join turns out to be smaller than the configured broadcast join threshold, the shuffle join in the execution plan is dynamically rewritten into a broadcast join.
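A sketch of the related settings, assuming Spark 3.0+; the threshold value is only shown for illustration:

```scala
// The static planner broadcasts a relation whose estimated size is below this threshold.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "10MB")

// With AQE on, a shuffle (sort-merge) join can still be rewritten into a
// broadcast hash join at runtime once the real size of one side is known.
spark.conf.set("spark.sql.adaptive.enabled", "true")

// Optional: after the rewrite, read the already-written shuffle files locally
// instead of performing another network shuffle.
spark.conf.set("spark.sql.adaptive.localShuffleReader.enabled", "true")

// Spark 3.2+ adds a separate runtime threshold,
// spark.sql.adaptive.autoBroadcastJoinThreshold, if the two need independent tuning.
```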
Handling Data Skew in Joins
Data skew has always been a hard problem. As the name suggests, it means that a few keys carry a huge amount of data, so when the data is partitioned by hash, some partitions end up far larger than the others. This kind of distribution causes severe performance degradation, especially for a sort-merge join. In the Spark UI you can see that some tasks pull much more data than the others and run much longer, and these stragglers drag down the overall running time. Because such tasks pull most of the data, they spill to disk, which makes them even slower, and in the worst case they blow up the executor's memory.
Because it is hard to know the characteristics of the data in advance, join skew is difficult to avoid with static statistics, even with hints. In AQE, runtime statistics are collected so that skewed partitions can be detected dynamically; each skewed partition is then split into sub-partitions, each handled by its own reducer, which mitigates the impact of the skew on performance.
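A sketch of the skew-join settings, assuming Spark 3.0+; the values are illustrative:

```scala
// Let AQE detect and split skewed partitions in sort-merge joins.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

// A partition counts as skewed when it is larger than
// skewedPartitionFactor * the median partition size AND larger than the byte threshold below.
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionFactor", "5")
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes", "256MB")
```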
Observing AQE from the Spark UI
Understanding AQE Query Plans
An AQE execution plan changes dynamically while the query runs. Spark 3.0 introduces several plan nodes specific to AQE, and the Spark UI shows both the initial plan and the final optimized plan. The sections below walk through these nodes.
The AdaptiveSparkPlan Node
With AQE enabled, one or more AdaptiveSparkPlan nodes are added as the root node of the query or of its subqueries. Before and during execution, isFinalPlan is marked false; once the query finishes, isFinalPlan becomes true, and from that point the plan under the AdaptiveSparkPlan node no longer changes.
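A small sketch of watching the flag flip, using hypothetical table names t1 and t2 assumed to be registered in the session:

```scala
// Hypothetical tables; substitute whatever is registered in your catalog.
val df = spark.sql(
  "SELECT t1.key, count(*) AS cnt FROM t1 JOIN t2 ON t1.key = t2.key GROUP BY t1.key")

df.explain()   // root node: AdaptiveSparkPlan isFinalPlan=false
df.collect()   // execute the query, letting AQE re-optimize at runtime
df.explain()   // the same root node now reports isFinalPlan=true
```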
The CustomShuffleReader Node
CustomShuffleReader is the key operator in AQE's optimizations. This node uses the real statistics produced after the previous stage has run to dynamically adjust the number of partitions of the following stage. In the Spark UI, hover over the node: if you see the coalesced marker, AQE has detected a large number of small partitions and merged them according to the configured target partition size; open details to see the original number of partitions and the number after coalescing.
When the skewed marker appears, AQE has detected skewed partitions during the sort-merge join; details shows how many partitions were skewed and how many sub-partitions they were split into.
Of course, the two optimizations above can also be applied together.
Detecting Join Strategy Change
Comparing the execution plans shows the difference before and after AQE optimization: the plan contains both the initial plan and the final plan, and an initial SortMergeJoin may end up as a BroadcastHashJoin in the final plan.
The optimization effect is even clearer in the Spark UI. Note that the UI only shows the plan graph as it currently stands, so you can compare the plan captured when the query starts with the plan after it finishes to see the difference.
Detecting Skew Join
Whether the engine applied the data-skew optimization can be judged from the skew=true marker on the join node in the plan.
AQE is quite powerful: because it works from statistics of the real data, it can choose an appropriate number of reducers accurately, switch the join strategy, and handle data skew.