
Hive Study Notes Part 8: Sqoop

程序员文章站 2022-03-11 12:38:58
Welcome to my GitHub: https://github.com/zq2599/blog_demos

Contents: a categorized index of all my original articles, with companion source code, covering Java, Docker, Kubernetes, DevOps, and more.

About Sqoop

Sqoop is an Apache open-source project for efficiently transferring bulk data between Hadoop and relational databases. In this article we will work through the following hands-on steps:

  1. Deploy Sqoop
  2. Use Sqoop to export Hive table data to MySQL
  3. Use Sqoop to import MySQL data into a Hive table

Deployment

  1. In the hadoop account's home directory, download Sqoop version 1.4.7:
wget https://mirror.bit.edu.cn/apache/sqoop/1.4.7/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
  2. Extract it:
tar -zxvf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
  3. Extraction produces the folder sqoop-1.4.7.bin__hadoop-2.6.0; copy mysql-connector-java-5.1.47.jar into the sqoop-1.4.7.bin__hadoop-2.6.0/lib directory
  4. Enter the directory sqoop-1.4.7.bin__hadoop-2.6.0/conf and rename sqoop-env-template.sh to sqoop-env.sh:
mv sqoop-env-template.sh sqoop-env.sh
  5. Open sqoop-env.sh in an editor and add the following three settings; HADOOP_COMMON_HOME and HADOOP_MAPRED_HOME are the full Hadoop path, and HIVE_HOME is the full Hive path:
export HADOOP_COMMON_HOME=/home/hadoop/hadoop-2.7.7
export HADOOP_MAPRED_HOME=/home/hadoop/hadoop-2.7.7
export HIVE_HOME=/home/hadoop/apache-hive-1.2.2-bin
  6. Installation and configuration are done. Enter sqoop-1.4.7.bin__hadoop-2.6.0/bin and run ./sqoop version to check the Sqoop version. As shown below, it is 1.4.7 (warnings are printed because some environment variables are not set; ignore them for now):
[hadoop@node0 bin]$ ./sqoop version
Warning: /home/hadoop/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/hadoop/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/hadoop/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your ZooKeeper installation.
20/11/02 12:02:58 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec 21 15:59:58 STD 2017
  • With Sqoop installed, let's try out its features next
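If you prefer to silence the HBase/HCatalog/Accumulo/ZooKeeper warnings shown above (they are harmless for this walkthrough), the corresponding home variables can also be set in sqoop-env.sh. The paths below are placeholders, not values from this post; only set the ones for components you actually have installed:

```shell
# Optional additions to sqoop-env.sh; paths are assumptions, adjust to your setup.
export HBASE_HOME=/home/hadoop/hbase
export HCAT_HOME=/home/hadoop/hive-hcatalog
export ACCUMULO_HOME=/home/hadoop/accumulo
export ZOOKEEPER_HOME=/home/hadoop/zookeeper
```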

MySQL preparation

To get ready for the hands-on steps that follow, MySQL needs to be prepared; the MySQL configuration used here is listed for your reference:

  1. MySQL version: 5.7.29
  2. MySQL server IP: 192.168.50.43
  3. MySQL service port: 3306
  4. Account: root
  5. Password: 123456
  6. Database name: sqoop

As for deploying MySQL, I took the easy route and ran it in Docker; see my earlier post 《群晖ds218+部署mysql》 (deploying MySQL on a Synology DS218+)
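Note that `sqoop export` writes into an existing MySQL table, so the address table must be created before the export below can run. A minimal sketch, assuming a three-column schema that mirrors the Hive table created later in this post (addressid, province, city); the host, account, and database name are the example values listed above:

```shell
# Create the target database and table on the MySQL server described above.
mysql -h 192.168.50.43 -P 3306 -u root -p123456 <<'SQL'
CREATE DATABASE IF NOT EXISTS sqoop DEFAULT CHARACTER SET utf8;
USE sqoop;
-- Column layout assumed to match the Hive address table (VARCHAR lengths are a guess)
CREATE TABLE IF NOT EXISTS address (
  addressid INT,
  province  VARCHAR(32),
  city      VARCHAR(32)
);
SQL
```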

From Hive to MySQL (export)

  • Run the following command to export the Hive data into MySQL:
./sqoop export \
--connect jdbc:mysql://192.168.50.43:3306/sqoop \
--table address \
--username root \
--password 123456 \
--export-dir '/user/hive/warehouse/address' \
--fields-terminated-by ','
  • Inspecting the address table in MySQL shows the data has been imported:


From MySQL to Hive (import)

  1. In the Hive CLI, run the following statement to create a table named address2 with exactly the same structure as address:
create table address2 (addressid int, province string, city string) 
row format delimited 
fields terminated by ',';
  2. Run the following command to import the MySQL address table's data into Hive's address2 table; -m 2 starts 2 map tasks:
./sqoop import \
--connect jdbc:mysql://192.168.50.43:3306/sqoop \
--table address \
--username root \
--password 123456 \
--target-dir '/user/hive/warehouse/address2' \
-m 2
  3. When it finishes, the console prints output like the following:
		Virtual memory (bytes) snapshot=4169867264
		Total committed heap usage (bytes)=121765888
	File Input Format Counters 
		Bytes Read=0
	File Output Format Counters 
		Bytes Written=94
20/11/02 16:09:22 INFO mapreduce.ImportJobBase: Transferred 94 bytes in 16.8683 seconds (5.5726 bytes/sec)
20/11/02 16:09:22 INFO mapreduce.ImportJobBase: Retrieved 5 records.
  4. Checking Hive's address2 table shows the data has been imported successfully:
hive> select * from address2;
OK
1	guangdong	guangzhou
2	guangdong	shenzhen
3	shanxi	xian
4	shanxi	hanzhong
6	jiangshu	nanjing
Time taken: 0.049 seconds, Fetched: 5 row(s)
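The import above only writes comma-delimited files into the table's HDFS directory, which works here because address2 was created beforehand with a matching comma delimiter. Sqoop can also create and load a Hive table in one step via its --hive-import option; a sketch using the same connection settings, where address3 is a hypothetical table name:

```shell
# Sketch: let Sqoop create and load the Hive table directly (--hive-import).
./sqoop import \
--connect jdbc:mysql://192.168.50.43:3306/sqoop \
--table address \
--username root \
--password 123456 \
--hive-import \
--hive-table address3 \
-m 2
```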
  • This wraps up deploying the Sqoop tool and trying out its basic operations; I hope this article serves as a useful reference when you run your own data import and export jobs.

You are not alone: Xinchen's original articles accompany you all the way

  1. Java series
  2. Spring series
  3. Docker series
  4. DevOps series

Welcome to follow my WeChat official account: 程序员欣宸

Search WeChat for 「程序员欣宸」. I'm Xinchen, and I look forward to exploring the Java world with you...