
4. Install and deploy Spark (Spark on YARN mode)

2022-07-06 11:30:00 @Little snail


4.1 Use the following commands to extract the Spark installation package into the user's home directory:

[zkpk@master ~]$ cd /home/zkpk/tgz/spark/
[zkpk@master spark]$ tar -xzvf spark-2.1.1-bin-hadoop2.7.tgz -C /home/zkpk/
[zkpk@master spark]$ cd
[zkpk@master ~]$ cd spark-2.1.1-bin-hadoop2.7/
[zkpk@master spark-2.1.1-bin-hadoop2.7]$ ls -l

Running the ls -l command shows the contents pictured below; these are the files shipped with Spark:

[Figure: listing of the Spark installation directory]

4.2 Configure Hadoop environment variables

4.2.1 Running Spark on YARN requires the HADOOP_CONF_DIR, YARN_CONF_DIR, and HDFS_CONF_DIR environment variables

4.2.1.1 Commands:

[zkpk@master ~]$ cd
[zkpk@master ~]$ gedit ~/.bash_profile

4.2.1.2 Append the following at the end of the file, then save and exit

#SPARK ON YARN
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop

4.2.1.3 Re-source the file so the environment variables take effect

[zkpk@master ~]$ source ~/.bash_profile
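
A quick way to confirm the variables took effect is to print one of them; assuming HADOOP_HOME points at ~/hadoop-2.7.3, as elsewhere in this walkthrough, it should expand to the Hadoop configuration directory:

[zkpk@master ~]$ echo $HADOOP_CONF_DIR
/home/zkpk/hadoop-2.7.3/etc/hadoop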

4.3 Verify the Spark installation

4.3.1 Modify ${HADOOP_HOME}/etc/hadoop/yarn-site.xml

Note: this same change must be made on the master, slave01, and slave02 nodes (see the scp sketch after the figure below).

4.3.2 Add the following two properties, which disable the NodeManager's physical- and virtual-memory checks so that YARN does not kill Spark containers on memory-constrained test machines

[zkpk@master ~]$ vim ~/hadoop-2.7.3/etc/hadoop/yarn-site.xml
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
![yarn-site.xml with the two properties added](https://img-blog.csdnimg.cn/30b25836994545c191442ff18f227621.png)
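
Rather than editing the file by hand on each node, the modified file can be copied out from the master; a sketch, assuming passwordless SSH between the nodes and identical Hadoop paths on each:

[zkpk@master ~]$ scp ~/hadoop-2.7.3/etc/hadoop/yarn-site.xml zkpk@slave01:~/hadoop-2.7.3/etc/hadoop/
[zkpk@master ~]$ scp ~/hadoop-2.7.3/etc/hadoop/yarn-site.xml zkpk@slave02:~/hadoop-2.7.3/etc/hadoop/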

4.4 Restart the Hadoop cluster so the configuration takes effect

[zkpk@master ~]$ stop-all.sh
[zkpk@master ~]$ start-all.sh
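
After the restart, running jps on the master should show the HDFS and YARN master daemons (typically NameNode, SecondaryNameNode, and ResourceManager; DataNode and NodeManager run on the slaves):

[zkpk@master ~]$ jps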

4.5 Enter the Spark installation home directory

[zkpk@master ~]$ cd ~/spark-2.1.1-bin-hadoop2.7

4.5.1 Execute the following command (note that it is a single line):

[zkpk@master spark-2.1.1-bin-hadoop2.7]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --num-executors 3 --driver-memory 1g --executor-memory 1g --executor-cores 1 examples/jars/spark-examples*.jar 10

4.5.2 After the command runs, output like the following appears:

[Figure: SparkPi job output on YARN]
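
Because spark-submit defaults to client deploy mode here, the driver prints the estimate to this console, so a line like "Pi is roughly 3.14..." in the output confirms the run succeeded. To pick it out of the verbose YARN logging, the job can optionally be re-run with the output filtered (a sketch; the resource flags are omitted, so Spark's defaults apply):

[zkpk@master spark-2.1.1-bin-hadoop2.7]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn examples/jars/spark-examples*.jar 10 2>&1 | grep "Pi is roughly"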

4.5.3 Web UI verification

4.5.3.1 Start the spark-shell interactive terminal with the following command:

[zkpk@master spark-2.1.1-bin-hadoop2.7]$ ./bin/spark-shell
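
Optionally, run a small job first so the Web UI has a completed job to display (a minimal example; any small computation will do):

scala> sc.parallelize(1 to 1000).reduce(_ + _)
res0: Int = 500500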

4.5.3.2 Open a browser and go to http://master:4040/ to view the job UI
[Figure: Spark Web UI at http://master:4040/]

4.5.3.3 Exit the interactive terminal by pressing Ctrl+D, or by typing:

scala> :quit

4.6 Install and deploy Spark SQL

4.6.1 Copy the hdfs-site.xml file from the Hadoop installation directory into the conf subdirectory of the Spark installation directory

[zkpk@master spark-2.1.1-bin-hadoop2.7]$ cd
[zkpk@master ~]$ cd hadoop-2.7.3/etc/hadoop/
[zkpk@master hadoop]$ cp hdfs-site.xml /home/zkpk/spark-2.1.1-bin-hadoop2.7/conf

4.6.2 Copy the hive-site.xml file from the conf subdirectory of the Hive installation directory into Spark's conf subdirectory

[zkpk@master hadoop]$ cd
[zkpk@master ~]$ cd apache-hive-2.1.1-bin/conf/
[zkpk@master conf]$ cp hive-site.xml /home/zkpk/spark-2.1.1-bin-hadoop2.7/conf/

4.6.3 Modify the hive-site.xml file in Spark's conf directory

[zkpk@master conf]$ cd
[zkpk@master ~]$ cd spark-2.1.1-bin-hadoop2.7/conf/
[zkpk@master conf]$ vim hive-site.xml

4.6.3.1 Add the following property

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/spark/warehouse</value>
</property>

[Figure: hive-site.xml with the warehouse property added]
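
The warehouse value above is an HDFS path. If it does not exist yet, it can be created ahead of time (optional; Spark normally creates it on first use):

[zkpk@master conf]$ hdfs dfs -mkdir -p /user/spark/warehouse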

4.6.4 Copy the MySQL JDBC driver JAR into the jars subdirectory of the Spark directory

[zkpk@master conf]$ cd
[zkpk@master ~]$ cd apache-hive-2.1.1-bin/lib/
[zkpk@master lib]$ cp mysql-connector-java-5.1.28.jar /home/zkpk/spark-2.1.1-bin-hadoop2.7/jars/
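
A quick check that the driver JAR landed where Spark will look for it:

[zkpk@master lib]$ ls /home/zkpk/spark-2.1.1-bin-hadoop2.7/jars/ | grep mysql
mysql-connector-java-5.1.28.jar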

4.6.5 Restart the Hadoop cluster and verify spark-sql. As shown below, reaching the spark-sql client prompt indicates that Spark SQL is configured correctly:

[zkpk@master lib]$ cd
[zkpk@master ~]$ stop-all.sh
[zkpk@master ~]$ start-all.sh
[zkpk@master ~]$ cd ~/spark-2.1.1-bin-hadoop2.7
[zkpk@master spark-2.1.1-bin-hadoop2.7]$ ./bin/spark-sql --master yarn

[Figure: the spark-sql client prompt]
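
At the spark-sql> prompt, a simple statement confirms the metastore connection; at minimum the default database should be listed:

spark-sql> show databases;
default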

4.6.6 Press Ctrl+D to exit the spark-sql client

4.6.7 If the Hadoop cluster is no longer needed, shut it down:

[zkpk@master spark-2.1.1-bin-hadoop2.7]$ cd
[zkpk@master ~]$ stop-all.sh

Copyright notice

This article was written by @Little snail. Please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/187/202207060913090212.html