
[HBase] Fully Distributed Installation Walkthrough

程序员文章站 2022-06-07 22:42:43



HBase version: 0.90.5

Hadoop version: 0.20.2

OS: CentOS

Deployment mode: fully distributed (1 master, 3 region servers)

1) Extract the HBase installation archive

[hadoop@node01 ~]$ tar -zxvf hbase-0.90.5.tar.gz

After a successful extraction, the HBase home directory looks like this:

[hadoop@node01 hbase-0.90.5]$ ls -l
total 3636
drwxr-xr-x. 3 hadoop root    4096 Dec  8  2011 bin
-rw-r--r--. 1 hadoop root  217043 Dec  8  2011 CHANGES.txt
drwxr-xr-x. 2 hadoop root    4096 Dec  8  2011 conf
drwxr-xr-x. 4 hadoop root    4096 Dec  8  2011 docs
-rwxr-xr-x. 1 hadoop root 2425490 Dec  8  2011 hbase-0.90.5.jar
-rwxr-xr-x. 1 hadoop root  997956 Dec  8  2011 hbase-0.90.5-tests.jar
drwxr-xr-x. 5 hadoop root    4096 Dec  8  2011 hbase-webapps
drwxr-xr-x. 3 hadoop root    4096 Apr 12 19:03 lib
-rw-r--r--. 1 hadoop root   11358 Dec  8  2011 LICENSE.txt
-rw-r--r--. 1 hadoop root     803 Dec  8  2011 NOTICE.txt
-rw-r--r--. 1 hadoop root   31073 Dec  8  2011 pom.xml
-rw-r--r--. 1 hadoop root    1358 Dec  8  2011 README.txt
drwxr-xr-x. 8 hadoop root    4096 Dec  8  2011 src
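The extraction step can be rehearsed end to end without the real tarball; the sketch below builds a dummy archive in a scratch directory (all paths here are stand-ins, not the real install paths) and extracts it the same way:

```shell
# Stand-in for the real hbase-0.90.5.tar.gz: build a dummy tarball in a
# scratch dir, then extract it as the install step does (-z gunzip, -x extract, -f file).
tmp=$(mktemp -d)
mkdir -p "$tmp/hbase-0.90.5/conf"
echo '# placeholder' > "$tmp/hbase-0.90.5/conf/hbase-env.sh"
tar -czf "$tmp/hbase.tar.gz" -C "$tmp" hbase-0.90.5
rm -rf "$tmp/hbase-0.90.5"          # simulate a fresh machine
tar -zxf "$tmp/hbase.tar.gz" -C "$tmp"
ls "$tmp/hbase-0.90.5/conf"         # hbase-env.sh
```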

2) Configure hbase-env.sh

[hadoop@node01 conf]$ vi hbase-env.sh

# The java implementation to use. Java 1.6 required.
export JAVA_HOME=/usr/java/jdk1.6.0_38

# Extra Java CLASSPATH elements. Optional.
export HBASE_CLASSPATH=/home/hadoop/hadoop-0.20.2/conf
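Both settings are plain shell exports, so they can be sanity-checked before starting anything. The JDK and Hadoop paths below are this cluster's; substitute your own:

```shell
# Same two exports as in hbase-env.sh (paths are this cluster's; adjust to yours).
export JAVA_HOME=/usr/java/jdk1.6.0_38
export HBASE_CLASSPATH=/home/hadoop/hadoop-0.20.2/conf

# Quick sanity check: both values should be absolute paths.
case "$JAVA_HOME" in /*) echo "JAVA_HOME ok";; *) echo "JAVA_HOME not absolute";; esac
case "$HBASE_CLASSPATH" in /*) echo "HBASE_CLASSPATH ok";; *) echo "HBASE_CLASSPATH not absolute";; esac
```

Pointing HBASE_CLASSPATH at Hadoop's conf directory lets HBase pick up the cluster's core-site/hdfs-site settings.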

3) Configure hbase-site.xml

[hadoop@node01 conf]$ vi hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node01:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node01,node02,node03,node04</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/var/zookeeper</value>
  </property>
</configuration>
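A cheap way to catch typos before pushing the file to the other nodes is to check that every property block carries a name/value pair. The sketch below writes a scratch copy (only two of the four properties, and a scratch path, purely for illustration) and counts tags:

```shell
# Write a scratch copy of part of hbase-site.xml and verify tag counts match.
cat > /tmp/hbase-site-check.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node01:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF
props=$(grep -c '<property>' /tmp/hbase-site-check.xml)
names=$(grep -c '<name>' /tmp/hbase-site-check.xml)
echo "$props properties, $names names"   # 2 properties, 2 names
```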

4) Configure regionservers

[hadoop@node01 conf]$ vi regionservers

node02
node03
node04
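conf/regionservers is simply one hostname per line, so it can be generated and verified mechanically (node names are this cluster's; the /tmp path is a scratch stand-in):

```shell
# Generate a regionservers file (one slave hostname per line) and verify the count.
printf '%s\n' node02 node03 node04 > /tmp/regionservers
wc -l < /tmp/regionservers   # 3
```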

5) Replace the Hadoop core jar (the hadoop-core jar under lib/ must match the Hadoop version the cluster actually runs, otherwise HBase cannot talk to HDFS)

[hadoop@node01 lib]$ mv hadoop-core-0.20-append-r1056497.jar hadoop-core-0.20-append-r1056497.sav
[hadoop@node01 lib]$ cp ../../hadoop-0.20.2/hadoop-0.20.2-core.jar .
[hadoop@node01 lib]$ ls
activation-1.1.jar commons-net-1.4.1.jar jasper-compiler-5.5.23.jar jetty-util-6.1.26.jar slf4j-api-1.5.8.jar
asm-3.1.jar core-3.1.1.jar jasper-runtime-5.5.23.jar jruby-complete-1.6.0.jar slf4j-log4j12-1.5.8.jar
avro-1.3.3.jar guava-r06.jar jaxb-api-2.1.jar jsp-2.1-6.1.14.jar stax-api-1.0.1.jar
commons-cli-1.2.jar hadoop-0.20.2-core.jar jaxb-impl-2.1.12.jar jsp-api-2.1-6.1.14.jar thrift-0.2.0.jar
commons-codec-1.4.jar hadoop-core-0.20-append-r1056497.sav jersey-core-1.4.jar jsr311-api-1.1.1.jar xmlenc-0.52.jar
commons-el-1.0.jar jackson-core-asl-1.5.5.jar jersey-json-1.4.jar log4j-1.2.16.jar zookeeper-3.3.2.jar
commons-httpclient-3.1.jar jackson-jaxrs-1.5.5.jar jersey-server-1.4.jar protobuf-java-2.3.0.jar
commons-lang-2.5.jar jackson-mapper-asl-1.4.2.jar jettison-1.1.jar ruby
commons-logging-1.1.1.jar jackson-xc-1.5.5.jar jetty-6.1.26.jar servlet-api-2.5-6.1.14.jar
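The swap can be rehearsed on scratch files first. The point is that the bundled append-branch jar is set aside with a .sav suffix (so it is kept but falls out of the classpath) while the cluster's own hadoop-0.20.2-core.jar takes its place. File names below mirror the real ones; the directory is a scratch stand-in:

```shell
lib=$(mktemp -d)                                  # stands in for hbase-0.90.5/lib
touch "$lib/hadoop-core-0.20-append-r1056497.jar"
touch "$lib/hadoop-0.20.2-core.jar"               # stands in for the copy from the Hadoop tree
mv "$lib/hadoop-core-0.20-append-r1056497.jar" \
   "$lib/hadoop-core-0.20-append-r1056497.sav"    # keep the original, out of the classpath
ls "$lib"
```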

6) Copy the configured HBase tree to the other three nodes

[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node02:/home/hadoop
[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node03:/home/hadoop
[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node04:/home/hadoop
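The three scp commands differ only in the host, so a loop is less error-prone as the node count grows. The commands are echoed rather than executed here so the sketch runs without SSH access to a real cluster:

```shell
# Echo (rather than run) one scp per slave; drop the 'echo' on a real cluster.
for host in node02 node03 node04; do
  echo scp -r "$HOME/hbase-0.90.5" "$host:/home/hadoop"
done
```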

7) Add the HBase environment variables (on all nodes)

[hadoop@node01 conf]$ su - root
Password:
[root@node01 ~]# vi /etc/profile

export HBASE_HOME=/home/hadoop/hbase-0.90.5
export PATH=$PATH:$HBASE_HOME/bin
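The effect of those two /etc/profile lines can be tried against a scratch file before touching the real one; sourcing the file mimics what a new login shell would do:

```shell
profile=$(mktemp)   # scratch stand-in for /etc/profile
cat >> "$profile" <<'EOF'
export HBASE_HOME=/home/hadoop/hbase-0.90.5
export PATH=$PATH:$HBASE_HOME/bin
EOF
. "$profile"        # what a new login shell would do
echo "$HBASE_HOME"  # /home/hadoop/hbase-0.90.5
```

After this, start-hbase.sh and hbase shell resolve from any directory.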

8) Start Hadoop and create the HBase directory on HDFS

[hadoop@node01 ~]$ $HADOOP_INSTALL/bin/start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-node01.out
node02: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node02.out
node04: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node04.out
node03: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node03.out
hadoop@node01's password:
node01: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-node01.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-node01.out
node04: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node04.out
node02: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node02.out
node03: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node03.out

[hadoop@node01 ~]$ jps
5332 Jps
5030 NameNode
5259 JobTracker
5185 SecondaryNameNode

[hadoop@node02 ~]$ jps
4603 Jps
4528 TaskTracker
4460 DataNode

[hadoop@node01 ~]$ hadoop fs -mkdir hbase

9) Start HBase

[hadoop@node01 conf]$ start-hbase.sh
hadoop@node01's password:
node03: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node03.out
node04: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node04.out
node02: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node02.out
node01: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node01.out
starting master, logging to /home/hadoop/hbase-0.90.5/logs/hbase-hadoop-master-node01.out
node03: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-node03.out
node02: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-node02.out
node04: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-node04.out

[hadoop@node01 conf]$ jps
7437 HQuorumPeer
7495 HMaster
5030 NameNode
5259 JobTracker
5185 SecondaryNameNode
7597 Jps

[hadoop@node02 ~]$ jps
5965 HRegionServer
4528 TaskTracker
4460 DataNode
5892 HQuorumPeer
6074 Jps

10) Test: create a table in HBase

[hadoop@node01 logs]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.5, r1212209, Fri Dec  9 05:40:36 UTC 2011

hbase(main):001:0> status
3 servers, 0 dead, 0.6667 average load

hbase(main):002:0> create 'testtable', 'colfam1'
0 row(s) in 1.4820 seconds

hbase(main):003:0> list 'testtable'
TABLE
testtable
1 row(s) in 0.0290 seconds

hbase(main):004:0> put 'testtable', 'myrow-1', 'colfam1:q1', 'value-1'
0 row(s) in 0.1980 seconds

hbase(main):005:0> put 'testtable', 'myrow-2', 'colfam1:q2', 'value-2'
0 row(s) in 0.0140 seconds

hbase(main):006:0> put 'testtable', 'myrow-2', 'colfam1:q3', 'value-3'
0 row(s) in 0.0070 seconds

hbase(main):007:0> scan 'testtable'
ROW          COLUMN+CELL
 myrow-1     column=colfam1:q1, timestamp=1365829054040, value=value-1
 myrow-2     column=colfam1:q2, timestamp=1365829061470, value=value-2
 myrow-2     column=colfam1:q3, timestamp=1365829066386, value=value-3
2 row(s) in 0.0690 seconds

hbase(main):008:0> get 'testtable', 'myrow-1'
COLUMN       CELL
 colfam1:q1  timestamp=1365829054040, value=value-1
1 row(s) in 0.0330 seconds

hbase(main):009:0> delete 'testtable', 'myrow-2', 'colfam1:q2'
0 row(s) in 0.0220 seconds

hbase(main):010:0> scan 'testtable'
ROW          COLUMN+CELL
 myrow-1     column=colfam1:q1, timestamp=1365829054040, value=value-1
 myrow-2     column=colfam1:q3, timestamp=1365829066386, value=value-3
2 row(s) in 0.0330 seconds

hbase(main):011:0> exit
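The same smoke test can be replayed non-interactively by piping commands into hbase shell, which is handy for scripted checks after an install. Since this sketch has no running cluster behind it, it only writes and prints the script it would pipe (table and column names here are hypothetical; note a table must be disabled before it can be dropped):

```shell
# With a live cluster this would be:  hbase shell < /tmp/smoke.txt
cat > /tmp/smoke.txt <<'EOF'
create 'smoketest', 'colfam1'
put 'smoketest', 'row-1', 'colfam1:q1', 'v1'
scan 'smoketest'
disable 'smoketest'
drop 'smoketest'
exit
EOF
cat /tmp/smoke.txt
```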

11) Stop HBase

[hadoop@node01 logs]$ stop-hbase.sh
stopping hbase..........
hadoop@node01's password:
node02: stopping zookeeper.
node03: stopping zookeeper.
node04: stopping zookeeper.
node01: stopping zookeeper.

[hadoop@node01 logs]$ jps
5030 NameNode
5259 JobTracker
5185 SecondaryNameNode
7952 Jps

[hadoop@node02 logs]$ jps
6351 Jps
4528 TaskTracker
4460 DataNode