HBase-2.0.0_01_Installation and Deployment
This article builds on hadoop2.7.6_01_Deployment.
1. Host Planning
| Hostname | IP | Internal IP | OS | Installed Software | Running Processes |
| -------- | -- | ----------- | -- | ------------------ | ----------------- |
| mini01 | 10.0.0.11 | 172.16.1.11 | CentOS 7.4 | JDK, Hadoop, ZooKeeper, HBase | QuorumPeerMain, NameNode, HMaster |
| mini02 | 10.0.0.12 | 172.16.1.12 | CentOS 7.4 | JDK, Hadoop, ZooKeeper, HBase | QuorumPeerMain, ResourceManager, HMaster |
| mini03 | 10.0.0.13 | 172.16.1.13 | CentOS 7.4 | JDK, Hadoop, ZooKeeper, HBase | QuorumPeerMain, DataNode, NodeManager, HRegionServer |
| mini04 | 10.0.0.14 | 172.16.1.14 | CentOS 7.4 | JDK, Hadoop, ZooKeeper, HBase | QuorumPeerMain, DataNode, NodeManager, HRegionServer |
| mini05 | 10.0.0.15 | 172.16.1.15 | CentOS 7.4 | JDK, Hadoop, ZooKeeper, HBase | QuorumPeerMain, DataNode, NodeManager, HRegionServer |
2. ZooKeeper Deployment
The ensemble has five members, so ZooKeeper must be deployed on all of mini01 through mini05.
2.1. Configuration
```shell
[yun@mini01 conf]$ pwd
/app/zookeeper/conf
[yun@mini01 conf]$ cat zoo.cfg
# Limit on the number of connections a single client (by IP) may open to one
# server; the default is 60, and 0 means no limit.
maxClientCnxns=1500
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# The directory where the snapshot is stored.
# Do not use /tmp for storage, /tmp here is just
# example sakes.
# dataDir=/tmp/zookeeper
dataDir=/app/bigdata/zookeeper/data
# The port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

# Leader/follower communication port and leader-election port
server.1=mini01:2888:3888
server.2=mini02:2888:3888
server.3=mini03:2888:3888
server.4=mini04:2888:3888
server.5=mini05:2888:3888
```
2.2. Add the myid File
```shell
[yun@mini01 data]$ pwd
/app/bigdata/zookeeper/data
[yun@mini01 data]$ ll
total 4
-rw-r--r-- 1 yun yun   2 May 26 14:36 myid
drwxr-xr-x 2 yun yun 232 Jun 27 14:54 version-2
[yun@mini01 data]$ cat myid  # myid is 1 on mini01, 2 on mini02, 3 on mini03, 4 on mini04, 5 on mini05
1
```
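The mapping above (mini01 → 1, …, mini05 → 5) can be scripted instead of typed by hand on each node. A minimal sketch, assuming the server id is simply the numeric suffix of the hostname; `myid_for_host` is a hypothetical helper, not part of ZooKeeper:

```shell
# Hypothetical helper: derive the myid value from a "miniNN" hostname,
# assuming the server id equals the numeric suffix (mini01 -> 1, mini05 -> 5).
myid_for_host() {
  local host="$1"
  local n="${host#mini}"   # strip the "mini" prefix
  echo "$((10#$n))"        # force base 10 so "01" is not parsed as octal
}

# On each node, something like the following would write the file
# (path from zoo.cfg above); left commented out here:
# myid_for_host "$(hostname -s)" > /app/bigdata/zookeeper/data/myid
```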
2.3. Environment Variables
```shell
[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# cat zk.sh
export ZK_HOME="/app/zookeeper"
export PATH=$ZK_HOME/bin:$PATH

[root@mini01 profile.d]# logout
[yun@mini01 conf]$ source /etc/profile  # reload the environment variables
```
2.4. Start the ZooKeeper Service
```shell
# Start the ZooKeeper service on mini01, mini02, mini03, mini04 and mini05 in turn
[yun@mini01 zookeeper]$ pwd
/app/zookeeper
[yun@mini01 zookeeper]$ zkServer.sh start
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```
It is recommended to start from /app/zookeeper, because the startup log (zookeeper.out) is written to the current directory:
```shell
[yun@mini01 zookeeper]$ pwd
/app/zookeeper
[yun@mini01 zookeeper]$ ll zookeeper.out
-rw-rw-r-- 1 yun yun 41981 Aug  6 17:24 zookeeper.out
```
2.5. Check the Running Status
```shell
# mini01, mini02, mini04 and mini05 report the following status
[yun@mini01 zookeeper]$ zkServer.sh status
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: follower

# mini03 reports the following status
[yun@mini03 zookeeper]$ zkServer.sh status
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: leader
```
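Checking five nodes one by one is tedious; the `Mode:` line can be extracted from the `zkServer.sh status` output instead. A minimal sketch: `zk_mode` is a hypothetical helper, and the ssh loop is left commented out since it assumes passwordless ssh between the nodes:

```shell
# Hypothetical helper: read `zkServer.sh status` output on stdin and
# print just the mode (leader or follower).
zk_mode() {
  awk -F': ' 'tolower($1) == "mode" { print $2 }'
}

# Commented-out sketch of checking every node (assumes passwordless ssh):
# for host in mini01 mini02 mini03 mini04 mini05; do
#   printf '%s: ' "$host"
#   ssh "yun@${host}" 'zkServer.sh status 2>/dev/null' | zk_mode
# done
```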
3. HBase Deployment and Configuration
3.1. Software Deployment
```shell
[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ tar xf hbase-2.0.0-bin.tar.gz
[yun@mini01 software]$ mv hbase-2.0.0 /app/
[yun@mini01 software]$ cd
[yun@mini01 ~]$ ln -s hbase-2.0.0/ hbase
```
3.2. Environment Variables
Note: every machine where HBase is deployed (mini01, mini02, mini03, mini04, mini05) needs these environment variables.
```shell
[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# cat hbase.sh  # could also be written directly in /etc/profile
export HBASE_HOME="/app/hbase"
export PATH=$HBASE_HOME/bin:$PATH

[root@mini01 profile.d]# logout
[yun@mini01 hbase]$ source /etc/profile  # as the yun user, reload the environment variables
```
3.3. Modify hbase-env.sh
```shell
[yun@mini01 conf]$ pwd
/app/hbase/conf
[yun@mini01 conf]$ cat hbase-env.sh
#!/usr/bin/env bash
………………
# The java implementation to use. Java 1.8+ required.
# export JAVA_HOME=/usr/java/jdk1.8.0/
export JAVA_HOME=${JAVA_HOME}

# Extra Java CLASSPATH elements. Optional.  # location of the Hadoop configuration files
# export HBASE_CLASSPATH=
export HBASE_CLASSPATH=${HADOOP_HOME}/etc/hadoop/
………………
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
# Set this to false when using a separately installed ZooKeeper
# export HBASE_MANAGES_ZK=true
export HBASE_MANAGES_ZK=false
………………
```
3.4. Modify hbase-site.xml
```xml
[yun@mini01 conf]$ pwd
/app/hbase/conf
[yun@mini01 conf]$ cat hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
………………
 */
-->
<configuration>
  <property>
    <name>hbase.master.port</name> <!-- HMaster port -->
    <value>16000</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name> <!-- HBase temporary storage -->
    <value>/app/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.master.maxclockskew</name> <!-- maximum clock skew allowed, in milliseconds -->
    <value>180000</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mini01:9000/hbase</value> <!-- shared HBase root directory; HBase data is persisted on HDFS -->
  </property>
  <property>
    <name>hbase.cluster.distributed</name> <!-- whether to run distributed; false means standalone -->
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name> <!-- ZooKeeper port -->
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name> <!-- ZooKeeper addresses -->
    <value>mini01,mini02,mini03,mini04,mini05</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name> <!-- location of ZooKeeper snapshot data -->
    <value>/app/hbase/tmp/zookeeper</value>
  </property>
</configuration>
```
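Before distributing the file, a quick sanity check that the `<property>` blocks are balanced can save a failed startup after a hand edit. A minimal sketch; `check_properties` is a hypothetical helper that only catches tag-count mismatches, not full XML validity:

```shell
# Hypothetical sanity check: compare the number of opening and closing
# <property> tags in a Hadoop/HBase-style XML config file.
check_properties() {
  local f="$1"
  local open close
  open=$(grep -c '<property>' "$f")
  close=$(grep -c '</property>' "$f")
  if [ "$open" -eq "$close" ]; then
    echo "balanced: $open properties"
  else
    echo "unbalanced: $open opening vs $close closing tags"
  fi
}

# Usage: check_properties /app/hbase/conf/hbase-site.xml
```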
3.5. Modify regionservers
```shell
[yun@mini01 conf]$ pwd
/app/hbase/conf
[yun@mini01 conf]$ cat regionservers  # hostnames of the slave machines
mini03
mini04
mini05
```
4. Distributing and Starting HBase
Note: before starting HBase, the Hadoop cluster and the ZooKeeper cluster must both be up and available.
4.1. Distribute HBase to the Other Machines
Distribute /app/hbase-2.0.0 from mini01 to mini02 (for HA), mini03, mini04 and mini05.
No configuration changes are needed on the other machines.
```shell
# -r is required because hbase-2.0.0 is a directory
scp -r hbase-2.0.0 yun@mini02:/app
scp -r hbase-2.0.0 yun@mini03:/app
scp -r hbase-2.0.0 yun@mini04:/app
scp -r hbase-2.0.0 yun@mini05:/app
```
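The four copies (and the per-host symlink) can also be expressed as a loop. A minimal sketch, assuming passwordless ssh between the nodes as used throughout this article; `build_dist_cmds` is a hypothetical helper that only prints the commands so they can be reviewed before running:

```shell
# Sketch of the distribution step as a loop (assumes passwordless ssh/scp).
# build_dist_cmds only prints the commands; pipe its output to `sh`
# (or replace echo with eval) to actually run them.
build_dist_cmds() {
  local host
  for host in mini02 mini03 mini04 mini05; do
    echo "scp -r /app/hbase-2.0.0 yun@${host}:/app"
    echo "ssh yun@${host} 'cd /app && ln -s hbase-2.0.0/ hbase'"
  done
}
```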
After distribution, remember to log in to each host and create the symlink:
```shell
[yun@mini02 ~]$ pwd
/app
[yun@mini02 ~]$ ln -s hbase-2.0.0/ hbase
```
4.2. Start HBase
```shell
[yun@mini01 ~]$ start-hbase.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hbase-2.0.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /app/hbase/logs/hbase-yun-master-mini01.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hbase-2.0.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
mini05: running regionserver, logging to /app/hbase/bin/../logs/hbase-yun-regionserver-mini05.out
mini04: running regionserver, logging to /app/hbase/bin/../logs/hbase-yun-regionserver-mini04.out
mini03: running regionserver, logging to /app/hbase/bin/../logs/hbase-yun-regionserver-mini03.out
```
The master processes:
```shell
[yun@mini01 ~]$ jps
1808 SecondaryNameNode
1592 NameNode
6540 HMaster
4158 QuorumPeerMain
7998 Jps
```
The slave processes:
```shell
[yun@mini04 ~]$ jps
2960 Jps
1921 QuorumPeerMain
1477 NodeManager
1547 DataNode
2333 HRegionServer
```
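A quick way to confirm each node runs the daemons planned for it in section 1 is to search the `jps` output for the expected process names. A minimal sketch; `has_daemon` is a hypothetical helper, and the ssh loop is left commented out since it assumes passwordless ssh:

```shell
# Hypothetical helper: succeed if the given jps output contains the
# named daemon (e.g. HMaster, HRegionServer) as a whole word.
has_daemon() {
  local jps_output="$1" daemon="$2"
  grep -qw "$daemon" <<< "$jps_output"
}

# Commented-out sketch of checking the slave nodes:
# for host in mini03 mini04 mini05; do
#   out=$(ssh "yun@${host}" jps)
#   has_daemon "$out" HRegionServer || echo "$host: HRegionServer missing"
# done
```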
4.3. Information in ZooKeeper
```shell
[zk: localhost:2181(CONNECTED) 9] ls /hbase
[replication, meta-region-server, rs, splitWAL, backup-masters, table-lock, flush-table-proc, master-maintenance, online-snapshot, switch, master, running, draining, namespace, hbaseid, table]
```
```shell
[zk: localhost:2181(CONNECTED) 7] ls /hbase/rs
[mini03,16020,1533633829928, mini05,16020,1533633829675, mini04,16020,1533633829711]
```
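Each entry under /hbase/rs is a server name of the form `host,port,startcode`. The pieces can be pulled apart with plain parameter expansion; these `sn_*` helpers are hypothetical, introduced here only for illustration:

```shell
# Hypothetical helpers to split an HBase server name ("host,port,startcode",
# the format seen under /hbase/rs and /hbase/backup-masters) into its parts.
sn_host() { echo "${1%%,*}"; }                      # everything before the first comma
sn_port() { local r="${1#*,}"; echo "${r%%,*}"; }   # between the first and second comma
sn_startcode() { echo "${1##*,}"; }                 # everything after the last comma
```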
4.4. Browser Access
```
http://mini01:16010
```
5. HBase HA
According to the plan, mini01 and mini02 both run an HMaster; the one on mini01 is already up.
5.1. Start Another HMaster
```shell
[yun@mini02 ~]$ hbase-daemon.sh start master  # start an HMaster on mini02
running master, logging to /app/hbase/logs/hbase-yun-master-mini02.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hbase-2.0.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[yun@mini02 ~]$
[yun@mini02 ~]$ jps
3746 Jps
1735 ResourceManager
2265 QuorumPeerMain
3643 HMaster
```
5.2. Information in ZooKeeper
```shell
[zk: localhost:2181(CONNECTED) 3] ls /hbase/backup-masters
[mini02,16000,1533635137064]
```
5.3. Browser Access
```
http://mini02:16010
```