08 Spark Cluster Setup
2022-08-01 16:20:00 【蓝风9】
Preface
Heh, a series of environment-setup tasks has come up recently.
Recording this one here.
Three Spark nodes: 192.168.110.150, 192.168.110.151, 192.168.110.152
150 is the master, 151 is slave01, 152 is slave02
Passwordless SSH (a trusted shell) is configured between all three machines
The Spark version is spark-3.2.1-bin-hadoop2.7
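The post doesn't show how the trusted shell was set up; a typical passwordless-SSH sequence, run once on the master (root user and hostnames are assumptions based on the node list above), looks roughly like:

```shell
# Run on master (192.168.110.150); assumes master/slave01/slave02 resolve.
# Generate a key pair, then push the public key to every node -- including
# master itself -- so that start-all.sh can ssh to each worker unattended.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for host in master slave01 slave02; do
    ssh-copy-id root@"$host"
done
```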
Spark Cluster Setup
1. Basic environment preparation
Install a JDK on 192.168.110.150, 192.168.110.151 and 192.168.110.152, and upload the Spark tarball to each node.
The package comes from Downloads | Apache Spark
2. Adjusting the Spark configuration
Copy the three template files below, adjust them, then scp the results to slave01 and slave02:
root@master:/usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7# cp conf/spark-defaults.conf.template conf/spark-defaults.conf
root@master:/usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7# cp conf/spark-env.sh.template conf/spark-env.sh
root@master:/usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7# cp conf/workers.template conf/workers
Update conf/workers:
# A Spark Worker will be started on each of the machines listed below.
slave01
slave02
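Since workers (and SPARK_MASTER_HOST below) use hostnames rather than IPs, every node also needs an /etc/hosts mapping along these lines (inferred from the IP/role list in the preface):

```
192.168.110.150 master
192.168.110.151 slave01
192.168.110.152 slave02
```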
Update conf/spark-defaults.conf:
spark.master spark://master:7077
# spark.eventLog.enabled true
# spark.eventLog.dir hdfs://namenode:8021/directory
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.driver.memory 1g
# spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Update conf/spark-env.sh:
export JAVA_HOME=/usr/local/ProgramFiles/jdk1.8.0_291
export HADOOP_HOME=/usr/local/ProgramFiles/hadoop-2.10.1
export HADOOP_CONF_DIR=/usr/local/ProgramFiles/hadoop-2.10.1/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/usr/local/ProgramFiles/hadoop-2.10.1/bin/hadoop classpath)
export SPARK_MASTER_HOST=master
export SPARK_MASTER_PORT=7077
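Once the three files are edited, one way to push them to the workers (assuming Spark is unpacked at the same path on every node, which the identical prompts above imply) is:

```shell
# Run from the Spark directory on master; relies on the trusted shell.
for host in slave01 slave02; do
    scp conf/spark-defaults.conf conf/spark-env.sh conf/workers \
        root@"$host":/usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7/conf/
done
```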
3. Starting the cluster
Run start-all.sh on the machine hosting the master:
root@master:/usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7# ./sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out
slave01: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave01.out
slave02: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave02.out
root@master:/usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7#
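A quick sanity check before submitting anything is jps on each node; the daemons below are what one would expect to see (not output captured from the original environment):

```shell
jps    # on master: should list a Master process
jps    # on slave01/slave02: should list a Worker process
```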
Testing the cluster
Submit the SparkPi example with spark-submit; the argument 1000 is the number of slices the sampling is split into. No --master flag is needed because spark.master is already set in spark-defaults.conf:
spark-submit --class org.apache.spark.examples.SparkPi /usr/local/ProgramFiles/spark-3.2.1-bin-hadoop2.7/examples/jars/spark-examples_2.12-3.2.1.jar 1000
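Each SparkPi task throws random points at the unit square and counts those inside the quarter circle; four times the hit ratio approximates Pi. The same estimate can be sketched locally with plain awk (sample count here is arbitrary):

```shell
# Monte-Carlo Pi estimate -- the same arithmetic each SparkPi task performs,
# just single-threaded: fraction of random points inside the unit
# quarter-circle, times 4.
awk 'BEGIN {
    srand(42); n = 100000; hits = 0
    for (i = 0; i < n; i++) {
        x = rand(); y = rand()
        if (x*x + y*y <= 1) hits++
    }
    printf "Pi is roughly %.2f\n", 4 * hits / n
}'
```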
Submitting a Spark job from a Java driver
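The original post doesn't include the driver source; a minimal Java driver doing the same Pi estimate against this standalone cluster might look like the sketch below (class name, sample count, partition count, and having spark-core on the classpath are all assumptions):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class JavaSparkPi {
    public static void main(String[] args) {
        // Point the driver at the standalone master configured above.
        SparkConf conf = new SparkConf()
                .setAppName("JavaSparkPi")
                .setMaster("spark://master:7077");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            int n = 100_000;                        // total sample count (arbitrary)
            List<Integer> samples = new ArrayList<>(n);
            for (int i = 0; i < n; i++) samples.add(i);

            // Scatter the samples over 10 partitions and count the points
            // that land inside the unit circle.
            long hits = sc.parallelize(samples, 10).filter(i -> {
                double x = Math.random() * 2 - 1;
                double y = Math.random() * 2 - 1;
                return x * x + y * y <= 1;
            }).count();

            System.out.println("Pi is roughly " + 4.0 * hits / n);
        }
    }
}
```

Package it into a jar and run it with spark-submit --class JavaSparkPi, or launch it straight from java with the Spark jars on the classpath; either way the running job should show up on the master's Web UI.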
Spark Web UI monitoring page (for a standalone master this is served at http://master:8080 by default)
Done.