Hadoop Fully Distributed Cluster Setup
Environment setup reference:
https://www.cnblogs.com/yuanweiblogger/p/11456623.html
Modify the hostname
-------------------
1. /etc/hostname
s129
2. /etc/hosts
127.0.0.1 localhost
192.168.248.129 s129
192.168.248.128 s128
192.168.248.127 s127
192.168.248.126 s126
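The same hosts file must be present on every node so that the names above resolve everywhere. A minimal sketch for pushing it out from s129, assuming root SSH access to the other three machines:
$>for h in s128 s127 s126; do scp /etc/hosts root@$h:/etc/hosts; done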
Fully distributed setup
1. Clone 3 client VMs (CentOS 6.8)
Right-click the CentOS VM --> Manage -> Clone -> ... -> Full clone
2. Start the clients
3. Enable the shared folder on each guest.
4. Edit the hostname and IP address files
[/etc/hostname]
s127
[/etc/sysconfig/network-scripts/ifcfg-ethxxxx]
...
IPADDR=..
5. Restart the network service
$>sudo service network restart
6. Edit /etc/resolv.conf (the nameserver must match your VM network's NAT DNS)
nameserver 192.168.231.2
7. Repeat steps 3 ~ 6 on every clone.
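Before moving on, it is worth confirming that every node can reach the others by name; a quick check, assuming the /etc/hosts entries above are in place on each machine:
$>for h in s129 s128 s127 s126; do ping -c 1 $h; done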
Prepare passwordless SSH between the cluster hosts
-------------------------
1. Delete /home/yw/.ssh/* on every host
2. Generate a key pair on s129
$>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
3. Copy the public key file id_rsa.pub from s129 to each host (s126 ~ s129, including s129 itself)
and place it as /home/yw/.ssh/authorized_keys
$>scp id_rsa.pub yw@s129:/home/yw/.ssh/authorized_keys
$>scp id_rsa.pub yw@s128:/home/yw/.ssh/authorized_keys
$>scp id_rsa.pub yw@s127:/home/yw/.ssh/authorized_keys
$>scp id_rsa.pub yw@s126:/home/yw/.ssh/authorized_keys
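If the keys were distributed correctly, each login below prints the remote hostname without asking for a password (if a prompt still appears, check that ~/.ssh is mode 700 and authorized_keys is mode 600 on the target host):
$>for h in s129 s128 s127 s126; do ssh $h hostname; done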
4. Configure the fully distributed setup (${HADOOP_HOME}/etc/hadoop/)
Edit the XML files below. Here they are kept in a separate full-hadoop directory under etc/, which is distributed in step 5 and symlinked as etc/hadoop in step 7.
[core-site.xml]
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://s129/</value>
    </property>
</configuration>
[hdfs-site.xml]
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>
[mapred-site.xml]
unchanged (keep the settings from the pseudo-distributed setup)
[yarn-site.xml]
<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>s129</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
[slaves]
s128
s127
s126
[hadoop-env.sh]
...
export JAVA_HOME=/opt/jdk
...
5. Distribute the configuration
$>cd /usr/local/hadoop/hadoop-2.7.3/etc
$>scp -r full-hadoop root@s128:/usr/local/hadoop/hadoop-2.7.3/etc/
$>scp -r full-hadoop root@s127:/usr/local/hadoop/hadoop-2.7.3/etc/
$>scp -r full-hadoop root@s126:/usr/local/hadoop/hadoop-2.7.3/etc/
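A quick check that the configuration directory arrived intact on each node:
$>for h in s128 s127 s126; do ssh $h ls /usr/local/hadoop/hadoop-2.7.3/etc/full-hadoop; done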
6. Delete the old symbolic link (remove the link pointing at the pseudo-distributed configuration; note there is no trailing slash, so the symlink itself is removed rather than followed)
$>cd /usr/local/hadoop/hadoop-2.7.3/etc
$>rm -rf hadoop
$>ssh s128 rm -rf /usr/local/hadoop/hadoop-2.7.3/etc/hadoop
$>ssh s127 rm -rf /usr/local/hadoop/hadoop-2.7.3/etc/hadoop
$>ssh s126 rm -rf /usr/local/hadoop/hadoop-2.7.3/etc/hadoop
7. Create the symbolic links
$>cd /usr/local/hadoop/hadoop-2.7.3/etc
$>ln -s full-hadoop hadoop
$>ssh s126 ln -s /usr/local/hadoop/hadoop-2.7.3/etc/full-hadoop /usr/local/hadoop/hadoop-2.7.3/etc/hadoop
$>ssh s127 ln -s /usr/local/hadoop/hadoop-2.7.3/etc/full-hadoop /usr/local/hadoop/hadoop-2.7.3/etc/hadoop
$>ssh s128 ln -s /usr/local/hadoop/hadoop-2.7.3/etc/full-hadoop /usr/local/hadoop/hadoop-2.7.3/etc/hadoop
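With the links in place, every node reads its configuration from full-hadoop. hdfs getconf (shipped with Hadoop 2.x) can echo a key back to confirm the settings are being picked up; the command below should print hdfs://s129/:
$>hdfs getconf -confKey fs.defaultFS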
8. Delete the temporary directory files
$>cd /tmp
$>rm -rf hadoop-centos
$>ssh s128 rm -rf /tmp/hadoop-centos
$>ssh s127 rm -rf /tmp/hadoop-centos
$>ssh s126 rm -rf /tmp/hadoop-centos
9. Delete the Hadoop logs
$>cd /usr/local/hadoop/hadoop-2.7.3/logs
$>rm -rf *
$>ssh s128 rm -rf /usr/local/hadoop/hadoop-2.7.3/logs/*
$>ssh s127 rm -rf /usr/local/hadoop/hadoop-2.7.3/logs/*
$>ssh s126 rm -rf /usr/local/hadoop/hadoop-2.7.3/logs/*
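Steps 6 ~ 9 repeat the same command once per host; a loop over the slaves does the same with less typing (a convenience sketch, assuming identical paths on every node):
$>for h in s128 s127 s126; do ssh $h "rm -rf /tmp/hadoop-centos /usr/local/hadoop/hadoop-2.7.3/logs/*"; done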
10. Format the filesystem (run once, on the namenode s129)
$>hdfs namenode -format
11. Start the Hadoop daemons (start-all.sh still works but is deprecated in 2.x; start-dfs.sh followed by start-yarn.sh is the preferred equivalent)
$>start-all.sh
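If everything came up, jps on each node should show the expected daemons: with this configuration s129 runs the NameNode, SecondaryNameNode and ResourceManager, while s126 ~ s128 each run a DataNode and a NodeManager. A quick check from s129 (dfsadmin -report should list 3 live datanodes; the NameNode web UI is at http://s129:50070, the Hadoop 2.x default port):
$>for h in s129 s128 s127 s126; do ssh $h jps; done
$>hdfs dfsadmin -report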
The fully distributed setup is complete.