Fully Distributed Installation Guide for Hadoop 3.1.1 on CentOS 6.8
Preface: This guide assumes three virtual machines (master, node1, node2) that can already ping each other, with firewalls disabled, /etc/hosts configured, passwordless SSH login set up, and hostnames assigned.
I. Transfer the files
1. Create the installation directory
mkdir /usr/local/soft
2. Open Xftp, navigate to that directory, and upload the required installation packages.
Check the uploaded packages: cd /usr/local/soft && ls
II. Install Java
1. Check whether a JDK is already installed: java -version
2. If none is installed, extract the JDK package: tar -zxvf jdk-8u181-linux-x64.tar.gz
(Your package version may differ; adjust the file name accordingly.)
3. Rename the JDK directory and confirm the result: mv jdk1.8.0_181 java && ls
4. Configure the JDK environment: vim /etc/profile.d/jdk.sh
export JAVA_HOME=/usr/local/soft/java
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/rt.jar
5. Reload the environment variables and verify: source /etc/profile
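If the setup is correct, the java command should now resolve and report the installed version, e.g.:
java -version
java version "1.8.0_181"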
III. Install Hadoop
1. Extract the Hadoop package: tar -zxvf hadoop-3.1.1.tar.gz
2. Check the result and rename it: mv hadoop-3.1.1 hadoop
3. Edit the Hadoop configuration files
3.1 Edit core-site.xml: vim hadoop/etc/hadoop/core-site.xml (place the properties below inside the <configuration> element)
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/soft/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.trash.interval</name>
<value>1440</value>
</property>
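Here fs.defaultFS tells clients where the NameNode RPC endpoint is (host master, port 9000), hadoop.tmp.dir sets the base directory for Hadoop's working files, and fs.trash.interval keeps deleted files in the HDFS trash for 1440 minutes (24 hours) before they are purged.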
3.2 Edit hdfs-site.xml: vim hadoop/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node1:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/soft/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/soft/hadoop/tmp/dfs/data</value>
</property>
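This places the SecondaryNameNode (a checkpointing helper, not a standby NameNode) on node1, stores NameNode metadata and DataNode blocks under hadoop/tmp, and requests 3 replicas per block. Note that 3 replicas can only actually be achieved once at least three DataNodes are running.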
3.3 Edit the workers file: vim hadoop/etc/hadoop/workers (list one worker hostname per line, as sketched below)
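A minimal sketch, assuming node1 and node2 are the worker nodes (add master as well if it should also run a DataNode):
node1
node2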
3.4 Edit hadoop-env.sh: vim hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/soft/java
3.5 Edit yarn-site.xml: vim hadoop/etc/hadoop/yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
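yarn.resourcemanager.hostname tells every NodeManager where to find the ResourceManager, and the mapreduce_shuffle auxiliary service lets NodeManagers serve map outputs to reducers, which MapReduce jobs require.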
3.6 Reload the configuration: source hadoop/etc/hadoop/hadoop-env.sh
3.7 Edit start-dfs.sh: vim hadoop/sbin/start-dfs.sh (add the following lines near the top of the script)
export HDFS_NAMENODE_SECURE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
3.8 Edit stop-dfs.sh: vim hadoop/sbin/stop-dfs.sh (same additions as above)
export HDFS_NAMENODE_SECURE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
3.9 Edit start-yarn.sh: vim hadoop/sbin/start-yarn.sh
export YARN_RESOURCEMANAGER_USER=root
export HADOOP_SECURE_DN_USER=root
export YARN_NODEMANAGER_USER=root
3.10 Edit stop-yarn.sh: vim hadoop/sbin/stop-yarn.sh
export YARN_RESOURCEMANAGER_USER=root
export HADOOP_SECURE_DN_USER=root
export YARN_NODEMANAGER_USER=root
3.11 Suppress the native-library warning: vim hadoop/etc/hadoop/log4j.properties (append the line below)
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
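This raises the NativeCodeLoader logger's level to ERROR so the harmless "Unable to load native-hadoop library" WARN message stops appearing on every command.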
IV. Synchronize the configuration
1. Sync to node1: scp -r /usr/local/soft root@node1:/usr/local/
Sync to node2: scp -r /usr/local/soft root@node2:/usr/local/
2. Once all transfers finish, configure the profile script: vim /etc/profile.d/hadoop.sh
# set hadoop
export HADOOP_HOME=/usr/local/soft/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
3. Copy the profile scripts to the other nodes as well
To node1: scp /etc/profile.d/jdk.sh root@node1:/etc/profile.d/
scp /etc/profile.d/hadoop.sh root@node1:/etc/profile.d/
To node2: scp /etc/profile.d/jdk.sh root@node2:/etc/profile.d/
scp /etc/profile.d/hadoop.sh root@node2:/etc/profile.d/
4. Run the following on all three virtual machines:
source /etc/profile
source /usr/local/soft/hadoop/etc/hadoop/hadoop-env.sh
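To confirm the environment took effect on each machine, hadoop version should now report Hadoop 3.1.1:
hadoop version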
5. Format the HDFS filesystem (on master only): hdfs namenode -format
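If formatting succeeds, the output should include a line similar to "Storage directory /usr/local/soft/hadoop/tmp/dfs/name has been successfully formatted." (the path comes from dfs.namenode.name.dir above).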
V. Start the cluster
cd /usr/local/soft/hadoop/sbin/
./start-all.sh
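start-all.sh is a convenience wrapper that runs start-dfs.sh and then start-yarn.sh; the two scripts can also be invoked separately.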
After startup, run jps on each of the three machines to list the running Java processes. Expected results:
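Given the configuration above (NameNode and ResourceManager on master, SecondaryNameNode on node1, workers node1 and node2), jps should show roughly:
master: NameNode, ResourceManager, Jps
node1: DataNode, NodeManager, SecondaryNameNode, Jps
node2: DataNode, NodeManager, Jps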
Verify from a browser on Windows (e.g. Chrome): open http://<master-ip>:9870 for the NameNode web UI and http://<master-ip>:8088 for the YARN ResourceManager UI (substitute your own master's IP address).
Hadoop test (run the MapReduce wordcount example). The job reads text files from /input in HDFS, so create and populate that directory first, as sketched below.
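A minimal preparation, assuming a local text file words.txt (hypothetical name; use any text file). Note the job fails if /output already exists, so remove it first with hdfs dfs -rm -r /output if needed:
hdfs dfs -mkdir /input
hdfs dfs -put words.txt /input
Then run the job: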
hadoop jar /usr/local/soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount /input /output
View the result: hdfs dfs -cat /output/*
This completes the Hadoop setup.