
Hadoop HA Setup


Hostname    IP                Software
td23001     192.168.xxx.xx    jdk, hadoop, zk
td23002     192.168.xxx.xx    jdk, hadoop, zk
td23003     192.168.xxx.xx    jdk, hadoop, zk
For the basic OS setup, see the Big Data Cluster Environment Setup Manual.

To install ZooKeeper, see the ZooKeeper Setup guide.

Install Hadoop

1. Unpack the installation archive

$> tar -zxvf xxxxxx -C /etc/tdng/

2. Set environment variables

HADOOP_HOME=/etc/tdng/hadoop-2.7.2
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
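
If the variables were added to a profile file (e.g. /etc/profile, an assumption here), reload it before testing:

$> source /etc/profile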

 Test the environment variables

which hadoop

3. Edit the configuration files (/etc/tdng/hadoop-2.7.2/etc/hadoop)

    【hadoop-env.sh】

export JAVA_HOME=/etc/tdng/jdk1.8.0_101

    【core-site.xml】

	<configuration>
		<property>
			<name>fs.defaultFS</name>
			<value>hdfs://mycluster</value>
		</property>
		<property>
			<name>hadoop.tmp.dir</name>
			<value>/etc/tdng/hadoop-2.7.2/tmp</value>
		</property>
		<property>
			<name>ha.zookeeper.quorum</name>
			<value>td23001:2181,td23002:2181,td23003:2181</value>
		</property>
	</configuration>
    【hdfs-site.xml】
<configuration>
	<property>
		<name>dfs.replication</name>
		<value>3</value>
	</property>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>file://${hadoop.tmp.dir}/dfs/name1,file://${hadoop.tmp.dir}/dfs/name2</value>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>file://${hadoop.tmp.dir}/dfs/data1,file://${hadoop.tmp.dir}/dfs/data2</value>
	</property>
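	<!-- With HA enabled there is no SecondaryNameNode: checkpointing is done
	     by the standby NameNode, so the address below is effectively unused. -->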
	<property>
		<name>dfs.namenode.secondary.http-address</name>
		<value>td23003:50090</value>
	</property>
	<property>
		<name>dfs.nameservices</name>
		<value>mycluster</value>
	</property>
	<property>
		<name>dfs.ha.namenodes.mycluster</name>
		<value>nn1,nn2</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn1</name>
		<value>td23001:8020</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn2</name>
		<value>td23002:8020</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.mycluster.nn1</name>
		<value>td23001:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.mycluster.nn2</name>
		<value>td23002:50070</value>
	</property>
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://td23001:8485;td23002:8485;td23003:8485/mycluster</value>
	</property>
	<property>
		<name>dfs.client.failover.proxy.provider.mycluster</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>
			sshfence
			shell(/bin/true)
		</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/root/.ssh/id_rsa</value>
	</property>
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/etc/tdng/hadoop-2.7.2/journal</value>
	</property>
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
</configuration>
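
The sshfence method configured above only works if the NameNode hosts can SSH into each other without a password, using the key named in dfs.ha.fencing.ssh.private-key-files. A minimal sketch to set this up, assuming the root account as in the key path above:

$> ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""    # on td23001 and td23002
$> ssh-copy-id root@td23001
$> ssh-copy-id root@td23002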

【mapred-site.xml】
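
Note: Hadoop 2.7.2 ships only mapred-site.xml.template, so the file may need to be created first:

$> cp mapred-site.xml.template mapred-site.xml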

<configuration>
	<!-- Tell the MapReduce framework to run on YARN -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
</configuration>

【yarn-site.xml】

<configuration>
<!-- Site specific YARN configuration properties -->
	<!-- Specify the host of the YARN ResourceManager -->
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>td23003</value>
	</property>
	<!-- Reducers fetch map output via mapreduce_shuffle -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
</configuration>

【slaves】

td23001
td23002
td23003

Create the tmp directory under the hadoop-2.7.2 root directory (it matches hadoop.tmp.dir in core-site.xml):

$> mkdir tmp

Distribute the configured Hadoop directory to the other nodes, as shown below.
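
A minimal sketch, assuming the install lives at /etc/tdng/hadoop-2.7.2 on every node and root SSH access is available:

$> scp -r /etc/tdng/hadoop-2.7.2 root@td23002:/etc/tdng/
$> scp -r /etc/tdng/hadoop-2.7.2 root@td23003:/etc/tdng/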

 

Startup

Pay attention to the startup order.

1. Start ZooKeeper (on every node of the ZooKeeper cluster), as sketched below.
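
A minimal sketch, assuming zkServer.sh is on PATH on each node:

$> zkServer.sh start
$> zkServer.sh status    # one node should report "leader", the others "follower"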

2. Start the JournalNodes (on every node listed in dfs.namenode.shared.edits.dir)

# from the hadoop-2.7.2 root directory

./sbin/hadoop-daemon.sh start journalnode
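
To start the JournalNode on all three machines in one pass, a small sketch over SSH (assumes passwordless SSH and identical install paths on every node):

for h in td23001 td23002 td23003; do
  ssh "$h" /etc/tdng/hadoop-2.7.2/sbin/hadoop-daemon.sh start journalnode
done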

3. Format HDFS (NameNode); this is only needed on first setup (run on either NameNode host, e.g. td23001)

./bin/hdfs namenode -format

Then copy the tmp folder to the same path on the other NameNode:

 

scp -r tmp root@td23002:/etc/tdng/hadoop-2.7.2/
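
Alternatively, Hadoop's built-in bootstrap command can initialize the standby instead of a manual copy. Note that it contacts the other NameNode over HTTP, so nn1 must already be running (e.g. started with ./sbin/hadoop-daemon.sh start namenode on td23001):

# run on td23002
$> ./bin/hdfs namenode -bootstrapStandby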

4. Format the HA state in ZooKeeper (on namenode1 only)

 

./bin/hdfs zkfc -formatZK
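
To verify, the ZooKeeper CLI (assuming zkCli.sh is on PATH) should now show a znode for the nameservice:

$> zkCli.sh -server td23001:2181
ls /hadoop-ha            # should list [mycluster]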

5. Start zkfc to monitor the NameNode state (on every NameNode host)

./sbin/hadoop-daemon.sh start zkfc

6. Start HDFS (on namenode1 only)

# from the hadoop-2.7.2 root directory

./sbin/start-dfs.sh
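
Once HDFS is up, the HA state of each NameNode can be checked with the IDs from dfs.ha.namenodes.mycluster; one should report "active" and the other "standby":

$> ./bin/hdfs haadmin -getServiceState nn1
$> ./bin/hdfs haadmin -getServiceState nn2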

7. Start YARN (MR) (on the node configured as the ResourceManager, td23003)

# from the hadoop-2.7.2 root directory

./sbin/start-yarn.sh

Check the running processes

$> jps
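
On a healthy cluster the output should roughly contain (a sketch; the exact layout depends on which node runs what):

# td23001 / td23002 (NameNode hosts):
#   NameNode, DFSZKFailoverController, DataNode, JournalNode, NodeManager, QuorumPeerMain
# td23003 (ResourceManager host):
#   ResourceManager, DataNode, JournalNode, NodeManager, QuorumPeerMain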

Check the web UIs

192.168.xxx.xx:50070    (NameNode web UI; shows "active" or "standby")

192.168.xxx.xx:8088     (YARN ResourceManager web UI)
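
Finally, a simple failover test (a sketch): kill the active NameNode process and confirm that the standby takes over:

# on the active NameNode host (say td23001)
$> jps | grep NameNode                        # note the PID
$> kill -9 <namenode-pid>

# then, from either node
$> ./bin/hdfs haadmin -getServiceState nn2    # should now report "active"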