
A First Look at Hadoop

程序员文章站 2022-03-21 17:00:56

Getting to Know Hadoop

What is Hadoop?

Hadoop is a tool for storing and analyzing massive data sets. It is an open-source framework, written in Java, for storing huge amounts of data on distributed server clusters and running distributed analysis applications over them. Hadoop is designed for offline, large-scale data analysis; it is not suited to online transaction-processing workloads that randomly read and write a few records at a time.

The core of Hadoop: HDFS and MapReduce

HDFS: provides storage for massive data sets

MapReduce: provides computation over massive data sets

What is Hadoop good at?

1. Big-data storage (distributed storage)

2. Log processing (well suited to log analysis)

3. ETL: extracting data into mainstream databases such as Oracle, MySQL, and MongoDB

4. Machine learning (e.g. Apache Mahout)

5. Search engines (Hadoop + Lucene)

6. Data mining

A typical real-world application: Flume + Logstash + Kafka + Spark Streaming for real-time log processing and analysis.

Installing and configuring a simple Hadoop cluster

Environment:

192.168.100.11(hostname:openstack1)

192.168.100.101(hostname:openstack2)

192.168.100.111(hostname:openstack3)

Add the cluster entries to the hosts file (on all three machines)

vim /etc/hosts

192.168.100.11 openstack1
192.168.100.101 openstack2
192.168.100.111 openstack3
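The hosts edit above can also be scripted so that it is safe to re-run. A minimal sketch; it writes to a scratch file `hosts.demo` here for illustration, whereas on the real nodes the target would be /etc/hosts (run as root):

```shell
#!/bin/sh
# Scratch copy for illustration; on each node this would be /etc/hosts.
HOSTS=./hosts.demo
: > "$HOSTS"

for entry in "192.168.100.11 openstack1" \
             "192.168.100.101 openstack2" \
             "192.168.100.111 openstack3"; do
  name=${entry##* }                      # hostname is the last field
  # Append only if the hostname is not already present (idempotent).
  grep -qw "$name" "$HOSTS" || printf '%s\n' "$entry" >> "$HOSTS"
done

cat "$HOSTS"
```

Because of the `grep` guard, running the loop twice leaves the file unchanged instead of duplicating entries.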

Create the user and grant it sudo privileges (on all three machines)

groupadd hadoop && useradd -g hadoop hduser

passwd hduser

chmod 777 /etc/sudoers

vim /etc/sudoers

# Add below the line "root ALL=(ALL) ALL"
hduser  ALL=(ALL)       ALL

chmod 440 /etc/sudoers

Then reboot: init 6

On openstack1, set up passwordless SSH login to the other two nodes; this simplifies the configuration steps that follow.

ssh-keygen -t rsa

ssh-copy-id hduser@openstack2

ssh-copy-id hduser@openstack3
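Passwordless login can be verified non-interactively: with `BatchMode=yes`, ssh fails immediately instead of prompting for a password when key authentication is not working. A sketch (the `hduser@` target is an assumption based on the user created above):

```shell
#!/bin/sh
# Check key-based SSH from openstack1 to the worker nodes.
check_ssh() {
  # $1 = remote host; BatchMode=yes fails fast instead of prompting.
  ssh -o BatchMode=yes -o ConnectTimeout=5 "hduser@$1" hostname
}

for host in openstack2 openstack3; do
  if check_ssh "$host" >/dev/null 2>&1; then
    echo "$host: passwordless login OK"
  else
    echo "$host: passwordless login NOT working"
  fi
done
```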

Install and configure the JDK and Hadoop (on all three machines)

tar zxf jdk-8u91-linux-x64.tar.gz

mv jdk1.8.0_91 /usr/local/jdk1.8

tar zxf hadoop-2.6.1.tar.gz

mv hadoop-2.6.1 /home/hduser/hadoop

vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.8
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}
export HADOOP_HOME=/home/hduser/hadoop
export PATH=$HADOOP_HOME/bin:$PATH

source /etc/profile
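A quick way to sanity-check the PATH wiring from /etc/profile, without the JDK needing to be installed, is to replay the exports inside a subshell function and inspect the result:

```shell
#!/bin/sh
# Replay the /etc/profile exports in a subshell and confirm that
# $HADOOP_HOME/bin actually landed on PATH.
profile_check() (
  export JAVA_HOME=/usr/local/jdk1.8
  export JRE_HOME=${JAVA_HOME}/jre
  export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
  export PATH=$PATH:${JAVA_PATH}
  export HADOOP_HOME=/home/hduser/hadoop
  export PATH=$HADOOP_HOME/bin:$PATH
  case ":$PATH:" in
    *":$HADOOP_HOME/bin:"*) echo "PATH contains hadoop/bin" ;;
    *)                      echo "PATH is missing hadoop/bin" ;;
  esac
)
profile_check
```

The subshell body `( ... )` keeps the exports from leaking into the caller's environment.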

Edit the configuration files (only openstack1 needs this; passwordless login to the other nodes is already set up, so the files can simply be copied over with scp afterwards)

The configuration files live in etc/hadoop/ under the installation directory (here: /home/hduser/hadoop/etc/hadoop/).

cd /home/hduser/hadoop/etc/hadoop/

vim hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8

vim yarn-env.sh

export JAVA_HOME=/usr/local/jdk1.8

vim slaves

openstack2
openstack3

vim core-site.xml

<configuration>
  <property>
     <name>fs.defaultFS</name>
     <value>hdfs://openstack1:9000</value>
</property>
<property>
     <name>hadoop.tmp.dir</name>
     <value>/home/hduser/hadoop/tmp</value>
</property>
</configuration>

vim hdfs-site.xml

<configuration>
<property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>openstack1:50090</value>
</property>
<property>
      <name>dfs.replication</name>
      <value>2</value>
</property>
<property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/home/hduser/hadoop/hdfs/name</value>
</property>
<property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/home/hduser/hadoop/hdfs/data</value>
</property>
<property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
</property>
</configuration>

vim mapred-site.xml

(After installation only the template mapred-site.xml.template exists; copy it to mapred-site.xml before editing: cp mapred-site.xml.template mapred-site.xml)

<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>openstack1:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>openstack1:19888</value>
</property>
</configuration>

vim yarn-site.xml

<configuration>
<property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
</property>
<property>
          <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
          <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
           <name>yarn.resourcemanager.address</name>
           <value>openstack1:8032</value>
</property>
<property>
          <name>yarn.resourcemanager.scheduler.address</name>
          <value>openstack1:8030</value>
</property>
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
         <name>yarn.resourcemanager.resource-tracker.address</name>
         <value>openstack1:8035</value>
</property>
<property>
         <name>yarn.resourcemanager.admin.address</name>
         <value>openstack1:8033</value>
</property>
<property>
         <name>yarn.resourcemanager.webapp.address</name>
         <value>openstack1:8088</value>
</property>
</configuration>

Push these seven configuration files to the other two nodes:

scp -r /home/hduser/hadoop/etc/hadoop/ hduser@openstack2:/home/hduser/hadoop/etc/

scp -r /home/hduser/hadoop/etc/hadoop/ hduser@openstack3:/home/hduser/hadoop/etc/

Verify that Hadoop is configured correctly

1. Format the NameNode

/home/hduser/hadoop/bin/hdfs namenode -format

2. Start HDFS

/home/hduser/hadoop/sbin/start-dfs.sh

3. Check the Java processes with jps

(screenshot of jps output omitted)

4. Start YARN

/home/hduser/hadoop/sbin/start-yarn.sh

Tip: running /home/hduser/hadoop/sbin/start-all.sh starts both HDFS and YARN in one step (though in Hadoop 2.x this script is deprecated in favor of start-dfs.sh and start-yarn.sh).
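Once both HDFS and YARN are up, jps on each node should show daemons roughly as follows for this configuration: slaves lists openstack2 and openstack3, so they run the DataNode and NodeManager, while the master runs the NameNode, SecondaryNameNode and ResourceManager (the Jps process itself also appears in each listing):

```shell
#!/bin/sh
# Expected daemons per node for this configuration (illustrative summary,
# derived from the slaves file and the *-site.xml settings above).
expected="openstack1: NameNode SecondaryNameNode ResourceManager
openstack2: DataNode NodeManager
openstack3: DataNode NodeManager"
echo "$expected"
```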

5. Check the cluster status

/home/hduser/hadoop/bin/hdfs dfsadmin -report

[hduser@openstack1 hadoop]$ bin/hdfs dfsadmin -report
Configured Capacity: 101838282752 (94.84 GB)
Present Capacity: 101117652992 (94.17 GB)
DFS Remaining: 101117644800 (94.17 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.100.111:50010 (openstack3)
Hostname: openstack3
Decommission Status : Normal
Configured Capacity: 50919141376 (47.42 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 360673280 (343.96 MB)
DFS Remaining: 50558464000 (47.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.29%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jun 03 02:12:45 EDT 2020

Name: 192.168.100.101:50010 (openstack2)
Hostname: openstack2
Decommission Status : Normal
Configured Capacity: 50919141376 (47.42 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 359956480 (343.28 MB)
DFS Remaining: 50559180800 (47.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.29%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Jun 03 02:12:45 EDT 2020
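The key line to watch in the report is `Live datanodes (2):`. That check is easy to script; a sketch using a captured sample line here (on the cluster, pipe the real `hdfs dfsadmin -report` output into the same filter):

```shell
#!/bin/sh
# Extract the live-DataNode count from dfsadmin -report output.
# On the cluster:  hdfs dfsadmin -report | live_count
live_count() {
  sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p'
}

# Sample line standing in for the real report output.
sample='Live datanodes (2):'
live=$(printf '%s\n' "$sample" | live_count)
echo "live datanodes: $live"

if [ "$live" -lt 2 ]; then
  echo "WARNING: expected 2 DataNodes"
fi
```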

You can also check the cluster through the NameNode web UI: http://192.168.100.11:50070

(screenshot of the NameNode web UI omitted)
