
2. Setting up Hadoop HA


title: 2. Setting up Hadoop HA
categories: Big Data learning
tags: [hadoop,HDFS,YARN]

Preparation

  • 1. Prepare 3 virtual machines
  • 2. Set up mutual SSH trust between the 3 VMs
  • 3. Prepare the installation packages
    • jdk-8u161-linux-x64.tar.gz
    • hadoop-2.6.0-cdh5.7.0.tar.gz
    • zookeeper-3.4.12.tar.gz
  • 4. Xshell 5

Create the hadoop user

Create a hadoop user on each of the 3 virtual machines

# useradd hadoop

Create the working directories on all 3 virtual machines

# su - hadoop
$ mkdir app source software data tmp

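Since the prerequisites already include mutual SSH trust between the three machines, the same directories can also be created in one pass from hadoop001. A minimal sketch, assuming the hadoop user can already ssh to each host by name:

$ for h in hadoop001 hadoop002 hadoop003; do ssh hadoop@$h 'mkdir -p app source software data tmp'; done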

Install the JDK

Create the JDK installation directory

# mkdir -p /usr/java
# cd /usr/java

Upload the JDK tarball

# rz

Extract the JDK tarball

# tar -xzvf jdk-8u161-linux-x64.tar.gz
# rm -f jdk-8u161-linux-x64.tar.gz

Copy it to the other two machines

# scp -r jdk1.8.0_161 root@hadoop002:/usr/java
# scp -r jdk1.8.0_161 root@hadoop003:/usr/java

Change the owner of the JDK on all three machines

# chown -R root:root /usr/java

Configure the JDK environment variables

# vi /etc/profile

Append the following lines to the end of the file and save it

#env
export JAVA_HOME=/usr/java/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH

Apply the changes

# source /etc/profile

Verify that the JDK was installed successfully

# java -version

Do the same on the other two machines; with that, the JDK installation is complete!
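
To confirm that all three machines really report the same JDK, a quick check from hadoop001 (a sketch, assuming root SSH trust between the hosts; the absolute path avoids relying on /etc/profile being sourced in a non-interactive shell):

# for h in hadoop001 hadoop002 hadoop003; do ssh root@$h /usr/java/jdk1.8.0_161/bin/java -version; done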

Cluster plan

| IP | HOST | Software | Processes |
| --- | --- | --- | --- |
| 192.168.137.190 | hadoop001 | Hadoop, Zookeeper | NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, JobHistoryServer, NodeManager, QuorumPeerMain |
| 192.168.137.191 | hadoop002 | Hadoop, Zookeeper | NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, NodeManager, QuorumPeerMain |
| 192.168.137.192 | hadoop003 | Hadoop, Zookeeper | JournalNode, DataNode, QuorumPeerMain, NodeManager |

Install ZooKeeper

Upload the tarball

Upload the ZooKeeper tarball to the software directory on hadoop001

$ cd software
$ rz

Copy it to the other two machines

$ scp ~/software/zookeeper-3.4.12.tar.gz hadoop@hadoop002:/home/hadoop/software/
$ scp ~/software/zookeeper-3.4.12.tar.gz hadoop@hadoop003:/home/hadoop/software/

On all machines, extract the ZooKeeper tarball into the app directory

$ cd ~/software
$ tar -xzvf zookeeper-3.4.12.tar.gz -C ../app/

Configure ZK_HOME

$ vi ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
export ZK_HOME=/home/hadoop/app/zookeeper-3.4.12
export PATH=$ZK_HOME/bin:$PATH
$ source ~/.bash_profile

Modify the configuration

$ cd ~/app/zookeeper-3.4.12/conf
$ cp zoo_sample.cfg zoo.cfg

$ vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/home/hadoop/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888

Copy it to the other nodes

$ scp zoo.cfg hadoop@hadoop002:/home/hadoop/app/zookeeper-3.4.12/conf/
$ scp zoo.cfg hadoop@hadoop003:/home/hadoop/app/zookeeper-3.4.12/conf/

On all three nodes

Switch into the data directory and create a zookeeper directory

$ cd ~/data
$ mkdir zookeeper

Inside the zookeeper directory, create a myid file

$ cd zookeeper
$ touch myid

Do this part on each node separately

On hadoop001, write 1 into the myid file

$ echo 1 >myid

On hadoop002, write 2 into the myid file

$ echo 2 >myid

On hadoop003, write 3 into the myid file

$ echo 3 >myid

Start the ZooKeeper cluster

$ zkServer.sh start
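
Run this on each of the three nodes. Once all three are up, every node should report its role; a quick check (which node becomes the leader may differ on your cluster):

$ zkServer.sh status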

Install Hadoop

Upload the Hadoop tarball

As the hadoop user on hadoop001, upload the Hadoop tarball into the software directory

$ cd ~/software
$ rz

Copy it to the other two machines

$ scp hadoop-2.6.0-cdh5.7.0.tar.gz hadoop@hadoop002:/home/hadoop/software/
$ scp hadoop-2.6.0-cdh5.7.0.tar.gz hadoop@hadoop003:/home/hadoop/software/

Extract the tarball

Run this on all three machines

$ cd ~/software
$ tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C ../app

Configure HADOOP_HOME

Configure HADOOP_HOME on hadoop001 (do the same on hadoop002 and hadoop003 so the Hadoop scripts are on their PATH as well)

$ vi ~/.bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
export ZK_HOME=/home/hadoop/app/zookeeper-3.4.12
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin:$PATH
$ source ~/.bash_profile
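
To confirm the new variables are picked up before going further, a quick sanity check:

$ hadoop version
$ echo $HADOOP_HOME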

Modify hadoop-env.sh

$ vi /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/hadoop-env.sh

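The screenshot is no longer available; the usual edit in this step is to hard-code JAVA_HOME to the JDK path installed earlier, since hadoop-env.sh does not always pick it up from the environment. A sketch of the changed line:

export JAVA_HOME=/usr/java/jdk1.8.0_161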

Modify core-site.xml

$ vi /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- fs.defaultFS specifies the NameNode URI (YARN also relies on it) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://zzfHadoop</value>
    </property>
    <!-- ============================== Trash mechanism ======================================= -->
    <property>
    <!-- How often (in minutes) the checkpointer running on the NameNode creates a checkpoint from the Current directory; default 0, which means it uses the value of fs.trash.interval -->
        <name>fs.trash.checkpoint.interval</name>
        <value>0</value>
    </property>
    <property>
    <!-- How many minutes before checkpoint directories under .Trash are deleted; the server-side setting takes precedence over the client; default 0, never delete -->
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
    <!-- Hadoop temporary directory; hadoop.tmp.dir is the base location that much of the filesystem configuration depends on. If hdfs-site.xml does not configure the NameNode and DataNode storage locations, they default to this path -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
    </property>
    <!-- ZooKeeper session timeout, in milliseconds -->
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>2000</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,
          org.apache.hadoop.io.compress.DefaultCodec,
          org.apache.hadoop.io.compress.BZip2Codec,
          org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>
</configuration>
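
With fs.trash.interval set to 1440 minutes, a delete issued from the shell is first moved into the user's trash instead of being removed right away; only -skipTrash bypasses it. For example (illustrative path):

$ hdfs dfs -rm /user/hadoop/demo.txt              # moved to /user/hadoop/.Trash/Current/...
$ hdfs dfs -rm -skipTrash /user/hadoop/demo.txt   # removed immediately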

Modify hdfs-site.xml

$ mkdir -p /home/hadoop/data/dfs/name /home/hadoop/data/dfs/data
$ vi /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- HDFS superuser group -->
	<property>
		<name>dfs.permissions.superusergroup</name>
		<value>hadoop</value>
	</property>

	<!-- Enable WebHDFS -->
	<property>
		<name>dfs.webhdfs.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>/home/hadoop/data/dfs/name</value>
		<description>Local directory where the NameNode stores the name table (fsimage) (change as needed)</description>
	</property>
	<property>
		<name>dfs.namenode.edits.dir</name>
		<value>${dfs.namenode.name.dir}</value>
		<description>Local directory where the NameNode stores the transaction files (edits) (change as needed)</description>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/home/hadoop/data/dfs/data</value>
		<description>Local directory where the DataNode stores blocks (change as needed)</description>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>3</value>
	</property>
	<!-- Block size 256 MB (default 128 MB) -->
	<property>
		<name>dfs.blocksize</name>
		<value>268435456</value>
	</property>
	<!--======================================================================= -->
	<!-- HDFS high-availability configuration -->
	<!-- The HDFS nameservice is zzfHadoop; it must match the value in core-site.xml -->
	<property>
		<name>dfs.nameservices</name>
		<value>zzfHadoop</value>
	</property>
	<property>
		<!-- NameNode IDs; this version supports at most two NameNodes -->
		<name>dfs.ha.namenodes.zzfHadoop</name>
		<value>nn1,nn2</value>
	</property>

	<!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID], RPC address -->
	<property>
		<name>dfs.namenode.rpc-address.zzfHadoop.nn1</name>
		<value>hadoop001:8020</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.zzfHadoop.nn2</name>
		<value>hadoop002:8020</value>
	</property>

	<!-- HDFS HA: dfs.namenode.http-address.[nameservice ID], HTTP address -->
	<property>
		<name>dfs.namenode.http-address.zzfHadoop.nn1</name>
		<value>hadoop001:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.zzfHadoop.nn2</name>
		<value>hadoop002:50070</value>
	</property>

	<!-- ================== NameNode editlog synchronization ============================================ -->
	<!-- Ensures the edit log can be recovered -->
	<property>
		<name>dfs.journalnode.http-address</name>
		<value>0.0.0.0:8480</value>
	</property>
	<property>
		<name>dfs.journalnode.rpc-address</name>
		<value>0.0.0.0:8485</value>
	</property>
	<property>
		<!-- JournalNode server addresses; the QuorumJournalManager stores the editlog there -->
		<!-- Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>, port matches dfs.journalnode.rpc-address -->
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/zzfHadoop</value>
	</property>

	<property>
		<!-- Directory where the JournalNode stores its data -->
		<name>dfs.journalnode.edits.dir</name>
		<value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/jn</value>
	</property>
	<!-- ================== Client failover ============================================ -->
	<property>
		<!-- Strategy DataNodes and clients use to identify the active NameNode -->
		<!-- Failover proxy implementation -->
		<name>dfs.client.failover.proxy.provider.zzfHadoop</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<!-- ================== NameNode fencing =============================================== -->
	<!-- After a failover, prevent the old NameNode from starting again and creating two active services -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/hadoop/.ssh/id_rsa</value>
	</property>
	<property>
		<!-- Milliseconds after which fencing is considered to have failed -->
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>30000</value>
	</property>

	<!--==================NameNode auto failover base ZKFC and Zookeeper====================== -->
	<!-- Enable ZooKeeper-based automatic failover -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!-- File listing the DataNodes allowed to connect to the NameNode -->
	 <property>
	   <name>dfs.hosts</name>
	   <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/slaves</value>
	 </property>
</configuration>

Modify mapred-site.xml

$ cd ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
$ cp mapred-site.xml.template mapred-site.xml
$ vi mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- MapReduce application framework -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<!-- JobHistory Server ============================================================== -->
	<!-- MapReduce JobHistory Server address, default port 10020 -->
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>hadoop001:10020</value>
	</property>
	<!-- MapReduce JobHistory Server web UI address, default port 19888 -->
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>hadoop001:19888</value>
	</property>

<!-- Compress map output with Snappy -->
  <property>
      <name>mapreduce.map.output.compress</name> 
      <value>true</value>
  </property>
              
  <property>
      <name>mapreduce.map.output.compress.codec</name> 
      <value>org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

</configuration>

Modify yarn-site.xml

$ vi /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/yarn-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- NodeManager configuration ================================================= -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>
	<property>
		<name>yarn.nodemanager.localizer.address</name>
		<value>0.0.0.0:23344</value>
		<description>Address where the localizer IPC is.</description>
	</property>
	<property>
		<name>yarn.nodemanager.webapp.address</name>
		<value>0.0.0.0:23999</value>
		<description>NM Webapp address.</description>
	</property>

	<!-- HA configuration =============================================================== -->
	<!-- Resource Manager Configs -->
	<property>
		<name>yarn.resourcemanager.connect.retry-interval.ms</name>
		<value>2000</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!-- Enable embedded automatic failover; in an HA setup this works together with ZKRMStateStore to handle fencing -->
	<property>
		<name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
		<value>true</value>
	</property>
	<!-- Cluster ID, so HA elections are tied to this cluster -->
	<property>
		<name>yarn.resourcemanager.cluster-id</name>
		<value>yarn-cluster</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.rm-ids</name>
		<value>rm1,rm2</value>
	</property>


    <!-- The active/standby RM id would need to be set per node here if used (optional):
	<property>
		 <name>yarn.resourcemanager.ha.id</name>
		 <value>rm2</value>
	 </property>
	 -->

	<property>
		<name>yarn.resourcemanager.scheduler.class</name>
		<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
	</property>
	<property>
		<name>yarn.resourcemanager.recovery.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
		<value>5000</value>
	</property>
	<!-- ZKRMStateStore configuration -->
	<property>
		<name>yarn.resourcemanager.store.class</name>
		<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
	</property>
	<property>
		<name>yarn.resourcemanager.zk-address</name>
		<value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
	</property>
	<property>
		<name>yarn.resourcemanager.zk.state-store.address</name>
		<value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
	</property>
	<!-- RPC address clients use to reach the RM (applications manager interface) -->
	<property>
		<name>yarn.resourcemanager.address.rm1</name>
		<value>hadoop001:23140</value>
	</property>
	<property>
		<name>yarn.resourcemanager.address.rm2</name>
		<value>hadoop002:23140</value>
	</property>
	<!-- RPC address ApplicationMasters use to reach the RM (scheduler interface) -->
	<property>
		<name>yarn.resourcemanager.scheduler.address.rm1</name>
		<value>hadoop001:23130</value>
	</property>
	<property>
		<name>yarn.resourcemanager.scheduler.address.rm2</name>
		<value>hadoop002:23130</value>
	</property>
	<!-- RM admin interface -->
	<property>
		<name>yarn.resourcemanager.admin.address.rm1</name>
		<value>hadoop001:23141</value>
	</property>
	<property>
		<name>yarn.resourcemanager.admin.address.rm2</name>
		<value>hadoop002:23141</value>
	</property>
	<!-- RPC port NodeManagers use to reach the RM -->
	<property>
		<name>yarn.resourcemanager.resource-tracker.address.rm1</name>
		<value>hadoop001:23125</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address.rm2</name>
		<value>hadoop002:23125</value>
	</property>
	<!-- RM web application addresses -->
	<property>
		<name>yarn.resourcemanager.webapp.address.rm1</name>
		<value>hadoop001:8088</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.address.rm2</name>
		<value>hadoop002:8088</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.https.address.rm1</name>
		<value>hadoop001:23189</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.https.address.rm2</name>
		<value>hadoop002:23189</value>
	</property>

	<property>
	   <name>yarn.log-aggregation-enable</name>
	   <value>true</value>
	</property>
	<property>
		 <name>yarn.log.server.url</name>
		 <value>http://hadoop001:19888/jobhistory/logs</value>
	</property>


	<property>
		<name>yarn.nodemanager.resource.memory-mb</name>
		<value>2048</value>
	</property>
	<property>
		<name>yarn.scheduler.minimum-allocation-mb</name>
		<value>1024</value>
		<description>Minimum memory a single container can request; default 1024 MB</description>
	 </property>

  
  <property>
	<name>yarn.scheduler.maximum-allocation-mb</name>
	<value>2048</value>
	<description>Maximum memory a single container can request; default 8192 MB</description>
  </property>

   <property>
       <name>yarn.nodemanager.resource.cpu-vcores</name>
       <value>2</value>
    </property>
</configuration>

Modify slaves

[hadoop@hadoop001 hadoop]$ vi slaves
hadoop001
hadoop002
hadoop003

Send the configuration files to the other nodes

$ cd ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/
$ scp * hadoop@hadoop002:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/
$ scp * hadoop@hadoop003:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/

Start the JournalNode cluster (run on all three nodes)

$ hadoop-daemon.sh start journalnode
$ jps

Format the NameNode

Run the following command on hadoop001

 $ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
..........
..........
18/11/27 17:33:34 INFO namenode.FSImage: Allocated new BlockPoolId: BP-793957562-192.168.137.190-1543368814407
18/11/27 17:33:34 INFO common.Storage: Storage directory /home/hadoop/data/dfs/name has been successfully formatted.
18/11/27 17:33:34 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/11/27 17:33:34 INFO util.ExitUtil: Exiting with status 0
18/11/27 17:33:34 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop001/192.168.137.190
************************************************************/

Sync the metadata to the second NameNode's directory

$ scp -r /home/hadoop/data/dfs/* hadoop@hadoop002:/home/hadoop/data/dfs/
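
Copying the metadata with scp works fine here. As an aside, the standby NameNode can also be initialized with the built-in bootstrap command run on hadoop002; this is a sketch and requires the freshly formatted NameNode on hadoop001 to be running first (hadoop-daemon.sh start namenode):

$ hdfs namenode -bootstrapStandby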

Initialize the ZKFC

On hadoop001:

$ hdfs zkfc -formatZK
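
Formatting creates a znode for the nameservice under /hadoop-ha in ZooKeeper; it can be inspected from the ZooKeeper CLI (a sketch; the ls command is typed inside the zkCli shell and should list zzfHadoop):

$ zkCli.sh -server hadoop001:2181
ls /hadoop-ha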

Start the HDFS cluster

On hadoop001

$  start-dfs.sh

Check the processes

$ jps| grep -v Jps

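Besides jps, the HA state of the two NameNodes can be queried directly; one should report active and the other standby (using the NameNode IDs configured in hdfs-site.xml):

$ hdfs haadmin -getServiceState nn1
$ hdfs haadmin -getServiceState nn2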

Start YARN

On hadoop001

$ start-yarn.sh

On hadoop002, start the standby ResourceManager

$ yarn-daemon.sh start resourcemanager

Check the processes

$ jps| grep -v Jps

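The ResourceManager HA state can be checked the same way; rm1 on hadoop001 should normally be active and rm2 on hadoop002 standby:

$ yarn rmadmin -getServiceState rm1
$ yarn rmadmin -getServiceState rm2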

At this point the cluster setup is complete.
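
A quick smoke test is worth doing before moving on; for example, write a small file into HDFS and list it back (illustrative paths):

$ hdfs dfs -mkdir -p /user/hadoop
$ hdfs dfs -put /etc/hosts /user/hadoop/
$ hdfs dfs -ls /user/hadoop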

Shut down the cluster

hadoop001

$ stop-yarn.sh

hadoop002

$ yarn-daemon.sh stop resourcemanager

hadoop001

$ stop-dfs.sh

All three nodes

$  zkServer.sh stop

Start the cluster again

All three nodes

$ zkServer.sh start

hadoop001

$ start-dfs.sh

hadoop001

$ start-yarn.sh

hadoop002

$ yarn-daemon.sh start resourcemanager

Monitor the cluster

hadoop001

$ hdfs dfsadmin -report
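
The NodeManagers can be listed from YARN as well, and the web UIs configured above are available at hadoop001:50070 (NameNode) and hadoop001:8088 (ResourceManager):

$ yarn node -list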

Startup order summary

  • ZK
  • NN
  • DN
  • JN
  • ZKFC
  • YARN RM (active) + NM
  • YARN RM (standby)

Shutdown order summary

  • YARN RM (active) + NM
  • YARN RM (standby)
  • namenode
  • datanode
  • journalnode
  • zkfc
  • ZK

Done.
