Integrating Hadoop Hive with HBase

We use HBase as the database, but HBase has no SQL-like query interface, which makes working with and computing over the data very inconvenient. So we integrate Hive and let Hive provide HQL queries on top of the HBase storage layer; Hive thus serves as the data warehouse.

1. Querying massive data on a Hadoop + Hive architecture: http://blog.csdn.net/kunshan_shenbin/article/details/7105319
2. Integrating HBase 0.90.5 with Hadoop 1.0.0: http://blog.csdn.net/kunshan_shenbin/article/details/7209990
This article shows how to make HBase and Hive access each other, so that Hadoop, HBase and Hive work together as one system.
The test steps below mainly follow: http://running.iteye.com/blog/898399
That post, in turn, follows the official wiki: http://wiki.apache.org/hadoop/Hive/HBaseIntegration
1. Copy hbase-0.90.5.jar and zookeeper-3.3.2.jar into hive/lib.
Note: if other versions of these two jars already exist under hive/lib (for example zookeeper-3.3.1.jar), delete them and use the versions shipped with HBase.
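For example (a sketch only; it assumes HBase and Hive are installed under /usr/local and that the jars sit in the usual locations for these versions):
> cp /usr/local/hbase/hbase-0.90.5.jar /usr/local/hive/lib/
> cp /usr/local/hbase/lib/zookeeper-3.3.2.jar /usr/local/hive/lib/
> rm -f /usr/local/hive/lib/zookeeper-3.3.1.jar    # drop the older bundled copy if present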
2. Edit hive-site.xml under hive/conf and append the following at the bottom:
<!--
<property>
  <name>hive.exec.scratchdir</name>
  <value>/usr/local/hive/tmp</value>
</property>
-->

<property>
  <name>hive.querylog.location</name>
  <value>/usr/local/hive/logs</value>
</property>

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/local/hive/lib/hive-hbase-handler-0.8.0.jar,file:///usr/local/hive/lib/hbase-0.90.5.jar,file:///usr/local/hive/lib/zookeeper-3.3.2.jar</value>
</property>


Note: if hive-site.xml does not exist, create it yourself, or copy/rename hive-default.xml.template and use that.
For details see: http://blog.csdn.net/kunshan_shenbin/article/details/7210020
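For example (a sketch only, assuming Hive is installed at /usr/local/hive as in the configuration above):
> cp /usr/local/hive/conf/hive-default.xml.template /usr/local/hive/conf/hive-site.xml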

3. Copy hbase-0.90.5.jar into hadoop/lib on every Hadoop node, including the master.
4. Copy hbase-site.xml from hbase/conf into hadoop/conf on every Hadoop node, including the master.
Note: for the contents of hbase-site.xml, refer to: http://blog.csdn.net/kunshan_shenbin/article/details/7209990
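A sketch of steps 3 and 4 combined (the node names master and slave and the /usr/local install paths are assumptions; substitute your own node list and paths):
> for node in master slave; do
>   scp /usr/local/hbase/hbase-0.90.5.jar $node:/usr/local/hadoop/lib/
>   scp /usr/local/hbase/conf/hbase-site.xml $node:/usr/local/hadoop/conf/
> done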
Note: if steps 3 and 4 are skipped, running Hive will very likely fail with an error like the following:
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately.
This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error
and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher


Reference: http://blog.sina.com.cn/s/blog_410d18710100vlbq.html

Now you can try starting Hive.
Single-node start:
> bin/hive -hiveconf hbase.master=master:60000

Cluster start:
> bin/hive -hiveconf hbase.zookeeper.quorum=slave

If hive.aux.jars.path is not configured in hive-site.xml, Hive can be started as follows instead (note there must be no spaces in the --auxpath list):
> bin/hive --auxpath /usr/local/hive/lib/hive-hbase-handler-0.8.0.jar,/usr/local/hive/lib/hbase-0.90.5.jar,/usr/local/hive/lib/zookeeper-3.3.2.jar -hiveconf hbase.zookeeper.quorum=slave


Now we can run some tests.
1. Create a Hive table that maps to HBase:
CREATE TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
hbase.table.name sets the name of the table as it appears in HBase.
hbase.columns.mapping defines how the Hive columns map to HBase column families and qualifiers: here key maps to the HBase row key and value maps to cf1:val.
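To double-check the definition from the Hive side, a plain DESCRIBE works (nothing assumed beyond the table just created):
hive> DESCRIBE hbase_table_1;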
2. Load data with SQL
a) Create a plain Hive table:
hive> CREATE TABLE pokes (foo INT, bar STRING);
b) Bulk-load data into it:
hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
c) Copy rows into hbase_table_1 with an INSERT ... SELECT:
hive> INSERT OVERWRITE TABLE hbase_table_1 SELECT * FROM pokes WHERE foo=86;
3. Query the data:
hive> select * from hbase_table_1;
Now you can log into HBase and look at the data there:
> /usr/local/hbase/bin/hbase shell
hbase(main):001:0> describe 'xyz'
hbase(main):002:0> scan 'xyz'
hbase(main):003:0> put 'xyz','100','cf1:val','www.360buy.com'
The row just put into HBase is now visible from Hive as well:
hive> select * from hbase_table_1;
4. Accessing an existing HBase table from Hive
Use CREATE EXTERNAL TABLE:
CREATE EXTERNAL TABLE hbase_table_2(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "cf1:val")
TBLPROPERTIES("hbase.table.name" = "some_existing_table");
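Once declared, the external table can be queried like any other Hive table; a quick check (hbase_table_2 and some_existing_table are just the placeholder names from the statement above):
hive> select * from hbase_table_2 limit 10;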


Multiple Columns and Families
1. Create the table:
CREATE TABLE hbase_table_2(key int, value1 string, value2 int, value3 int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
"hbase.columns.mapping" = ":key,a:b,a:c,d:e"
);

2. Insert data:
INSERT OVERWRITE TABLE hbase_table_2 SELECT foo, bar, foo+1, foo+2
FROM pokes WHERE foo=98 OR foo=100;


This table has three Hive columns (value1, value2 and value3) mapped to two HBase column families (a and d).
Hive columns value1 and value2 map to the same family a (as qualifiers b and c), while the remaining column value3 maps to qualifier e in family d.

3. Log into HBase and check the table structure:
hbase(main):003:0> describe "hbase_table_2"
DESCRIPTION                                                          ENABLED
 {NAME => 'hbase_table_2', FAMILIES => [{NAME => 'a', COMPRESSION => true
 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'd', COMPRESSION
 => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
1 row(s) in 1.0630 seconds


4. Look at the data in HBase:
hbase(main):004:0> scan 'hbase_table_2'
ROW          COLUMN+CELL
 100         column=a:b, timestamp=1297695262015, value=val_100
 100         column=a:c, timestamp=1297695262015, value=101
 100         column=d:e, timestamp=1297695262015, value=102
 98          column=a:b, timestamp=1297695242675, value=val_98
 98          column=a:c, timestamp=1297695242675, value=99
 98          column=d:e, timestamp=1297695242675, value=100
2 row(s) in 0.0380 seconds


5. View it from Hive:
hive> select * from hbase_table_2;
OK
100    val_100    101    102
98     val_98     99     100
Time taken: 3.238 seconds


References:
http://running.iteye.com/blog/898399
http://heipark.iteye.com/blog/1150648
http://www.javabloger.com/article/apache-hadoop-hive-hbase-integration.html