Centralized management of clusters
2022-07-28 01:24:00 【Brother Xing plays with the clouds】
One. The current problem
Before reading this article, you should read: Get to know namenode and datanode.
Previously, when we started a Hadoop cluster, we first started the namenode and then started the datanodes. Note: our earlier practice was to run the datanode start command manually on every datanode machine, which is clearly impractical when the cluster is large. Instead, we want to start all nodes with a single start-dfs.sh. To make that work, we need to configure the slaves file on the namenode machine; this file lists all of the datanodes managed by this namenode. The file lives in {hadoop_home}/etc/hadoop, where {hadoop_home} is the Hadoop installation directory.
Two. Configure the slaves file on the namenode machine
1. Run cd /usr/local/hadoop/hadoop-2.7.3/etc/hadoop to enter the directory that contains the slaves file.
2. Run vim slaves, add the slave machine names (one per line), then save and exit.
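For example, if the workers are npfdev2, npfdev3, and npfdev4 (hostnames taken from the SSH step in this article; substitute your own machine names), the slaves file would simply list one hostname per line:

```
npfdev2
npfdev3
npfdev4
```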
Three. Configure passwordless SSH remote login
1. On the namenode machine, enter the /root/.ssh directory and run: ssh-keygen -t rsa
2. Copy the namenode machine's public key to npfdev1 (this machine), npfdev2, npfdev3, and npfdev4 by running the following commands:
ssh-copy-id npfdev1
ssh-copy-id npfdev2
ssh-copy-id npfdev3
ssh-copy-id npfdev4
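The four commands above can also be scripted. The sketch below is a dry run: it only prints each ssh-copy-id command so you can review it first; remove the leading echo to actually distribute the key. The npfdev hostnames are the ones used in this article; adjust them for your cluster.

```shell
# Hostnames from this article (npfdev1 is the namenode itself);
# adjust the list for your own cluster.
hosts="npfdev1 npfdev2 npfdev3 npfdev4"

# Dry run: print each ssh-copy-id command instead of executing it.
# Remove the leading 'echo' to actually copy the public key.
for h in $hosts; do
  echo ssh-copy-id "$h"
done
```

After the key has been copied, `ssh npfdev2` from the namenode should log in without prompting for a password.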
Four. Run start-dfs.sh on the namenode machine to start the cluster
1. Once startup is complete, check the running processes:
Note: start-dfs.sh also starts a secondarynamenode by default.
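A quick way to check is to run jps on the namenode machine. With the setup above you would expect output roughly like the following (the process IDs here are illustrative and will differ on your machine); each datanode machine should additionally show a DataNode process:

```
$ jps
2791 NameNode
3013 SecondaryNameNode
3190 Jps
```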
Five. Run stop-dfs.sh on the namenode machine to stop the cluster
1. After the cluster has stopped, check the processes again:
Six. Summary of the steps to start and stop a Hadoop cluster:
1. Edit the etc/hadoop/slaves file on the master, with one slave per line.
2. Configure passwordless SSH remote login.
3. Run start-dfs.sh to start the cluster.
4. Run stop-dfs.sh to stop the cluster.
Note: if you encounter a similar error, the fix is to add the following two lines to hadoop-env.sh and yarn-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"