
Spark independent cluster dynamic online and offline worker node

2022-07-06 16:37:00 Ruo Miaoshen

(1) Taking a Worker node offline

My usual approach: just power the machine off…
The work will fail over to the other nodes anyway.

The proper way, of course, is this. Log in to the host you want to take offline and run the script that stops the local Worker:

$> $SPARK_HOME/sbin/stop-worker.sh

Note that there is also a stop-workers.sh, which stops all Workers — don't mix them up.
Watch out for that extra s!!!

If you accidentally run the multi-s script on the Master (the Master must already have passwordless SSH key login configured to every Worker for it to work), then every Worker gets shut down…
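As a small precaution, you can check whether a Worker process is actually running on the current host before calling the stop script. This is just a sketch, not part of Spark; it assumes the JDK's jps tool is on the PATH (the Worker's JVM process shows up under the name Worker). It needs a live Spark installation, so treat it as an illustrative command fragment:

```shell
# Sketch: only stop the local Worker if one is actually running here.
# Assumes the JDK's jps is on PATH; "Worker" is the Spark Worker JVM name.
if jps | grep -q 'Worker'; then
  "$SPARK_HOME/sbin/stop-worker.sh"
else
  echo "No Worker process found on $(hostname); nothing to stop."
fi
```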

Before shutdown: (screenshot: Master web UI, the Worker listed as ALIVE)
After shutdown: (screenshot: Master web UI, the Worker listed as DEAD)
After a while, the DEAD Worker disappears from the list (the cluster waits a bit to see whether it reconnects).
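If you'd rather not watch the web UI, the Master also serves its status page as JSON. The sketch below assumes the Master web UI is on its default port 8080 on host vm00 and that the response contains a workers array with a state field per worker; it needs a live cluster, so it is shown only as a command fragment:

```shell
# Query the Master's JSON status endpoint and print each worker's state
# (flips from ALIVE to DEAD after the Worker is stopped).
curl -s http://vm00:8080/json/ | python3 -c \
  'import json, sys
for w in json.load(sys.stdin)["workers"]:
    print(w["host"], w["state"])'
```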


(2) Bringing a Worker node online

Log in to the host that should join the cluster, then run the script that starts the local Worker and connects it to the Spark Master:

$> $SPARK_HOME/sbin/start-worker.sh spark://vm00:7077
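The same script also accepts resource flags, which is handy when the new node shouldn't offer all of its cores or memory to Spark. The values below are only examples; run the script with --help to confirm the options on your Spark version:

```shell
# Start the local Worker with explicit resource limits (example values).
$SPARK_HOME/sbin/start-worker.sh spark://vm00:7077 --cores 4 --memory 8g
```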

Note that there is also a start-workers.sh script; the distinction is the same as above.

Finally, if this is a brand-new Worker, don't forget to edit the workers file (called slaves in older releases) under $SPARK_CONF_DIR and add the new Worker's host name, so the whole cluster can be started together next time.
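The edit itself can be scripted so the host is only appended once. This is a hedged sketch: the file name workers.example and the host names vm01–vm03 are stand-ins for your actual $SPARK_CONF_DIR/workers file and hosts:

```shell
# Append the new host to the workers file only if it is not already listed.
# File name and host names are illustrative stand-ins.
WORKERS_FILE=workers.example
printf 'vm01\nvm02\n' > "$WORKERS_FILE"   # pretend these are the existing entries
NEW_HOST=vm03
grep -qx "$NEW_HOST" "$WORKERS_FILE" || echo "$NEW_HOST" >> "$WORKERS_FILE"
cat "$WORKERS_FILE"
```

Running it a second time leaves the file unchanged, since grep -qx matches the whole line exactly.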

By the way, a freshly started Worker gets a new ID.
Uh… that's it.

Copyright notice
This article was created by [Ruo Miaoshen]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/187/202207060920282944.html