Alex's Hadoop Beginner Tutorial: Lesson 7 - Sqoop2 Import
For installation and JDBC driver setup, see Lesson 6. In this lesson I will walk through how to use Sqoop2 with a concrete example.
Data Preparation
We have a MySQL table called workers with three rows of data, and we want to import it into Hadoop.
Here is the table creation statement:
CREATE TABLE `workers` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Then insert three rows of data:
insert into workers (name) values ('jack');
insert into workers (name) values ('vicky');
insert into workers (name) values ('martin');
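If you want to double-check the data before importing, a quick query in the MySQL client should show the three rows (the ids are assigned by AUTO_INCREMENT, so they should come out as 1 through 3):

select * from workers;
+----+--------+
| id | name   |
+----+--------+
|  1 | jack   |
|  2 | vicky  |
|  3 | martin |
+----+--------+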
Next we use the Sqoop2 client to do the import.
Importing the Data
Create a database connection
$ sqoop2
Sqoop home directory: /usr/lib/sqoop2
Sqoop Shell: Type 'help' or '\h' for help.
sqoop:000> create connection --cid 1
This command creates a new connection using the connector whose id is 1. Sqoop will then prompt you for the required parameters:
Creating connection for connector with id 1
Please fill following values to create new connection object
Name: First connection

Connection configuration

JDBC Driver Class: com.mysql.jdbc.Driver
JDBC Connection String: jdbc:mysql://mysql.server/database
Username: sqoop
Password: *****
JDBC Connection Properties:
There are currently 0 values in the map:
entry#

Security related configuration options

Max connections: 0

New connection was successfully created with validation status FINE and persistent id 1
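As a quick sanity check you can list the connections the server knows about. The command below is my recollection of the 1.99.x shell and may differ slightly in your version, so treat it as a sketch and consult "help show" if it does not match:

sqoop:000> show connection --all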
Remember to replace jdbc:mysql://mysql.server/database with your real database connection string.
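For example, assuming MySQL runs on a (hypothetical) host 192.168.1.100 on the default port 3306 and the database is named sqoop_test, the connection string would look roughly like this:

jdbc:mysql://192.168.1.100:3306/sqoop_test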
Create a job
Now create an import job on top of the connection we just made; --xid passes the connection id:

sqoop:000> create job --xid 1 --type import
Sqoop will again prompt for parameters. You only need to enter the job name and the Table name; for the storage-related options choose 0, and just press Enter to accept the defaults for everything else:
Creating job for connection with id 1
Please fill following values to create new job object
Name: First job

Database configuration

Table name: workers
Table SQL statement:
Table column names:
Partition column name:
Boundary query:

Output configuration

Storage type:
  0 : HDFS
Choose: 0
Output format:
  0 : TEXT_FILE
  1 : SEQUENCE_FILE
Choose: 0
Compression format:
  0 : NONE
  1 : DEFAULT
  2 : DEFLATE
  3 : GZIP
  4 : BZIP2
  5 : LZO
  6 : LZ4
  7 : SNAPPY
Choose: 0
Output directory: /user/jarcec/workers

New job was successfully created with validation status FINE and persistent id 1
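You can also review the job definition before running it. Again, the exact command is from my recollection of the 1.99.x shell (check "help show" if your version differs):

sqoop:000> show job --all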
Run the job
Use the start job command to run the job, passing the job id with --jid:

sqoop:000> start job --jid 1
Submission details
Job ID: 1
Server URL: http://localhost:12000/sqoop/
Created by: root
Creation date: 2014-11-26 16:41:30 CST
Lastly updated by: root
External ID: job_1406097234796_0006
N/A
2014-11-26 16:41:30 CST: BOOTING - Progress is not available
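The job runs asynchronously, which is why the progress is not available yet (the submission is still BOOTING). In the 1.99.x shell you should be able to poll it afterwards with the status command; the flag name below is an assumption based on that release, so verify it with "help status":

sqoop:000> status job --jid 1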
Check the results
Open another SSH terminal and check the result with the hdfs command:

$ hdfs dfs -ls /user/jarcec/workers/
Found 3 items
-rw-r--r--   2 sqoop2 supergroup          0 2014-11-26 16:42 /user/jarcec/workers/_SUCCESS
-rw-r--r--   2 sqoop2 supergroup          9 2014-11-26 16:41 /user/jarcec/workers/part-m-00000
-rw-r--r--   2 sqoop2 supergroup         21 2014-11-26 16:42 /user/jarcec/workers/part-m-00001
Three files were generated: a _SUCCESS marker and two part files that hold the actual data. Let's cat the part files to see their contents:
$ hdfs dfs -cat /user/jarcec/workers/part-m-00000
1,'jack'
$ hdfs dfs -cat /user/jarcec/workers/part-m-00001
2,'vicky'
3,'martin'
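If you don't want to read the part files one by one, hdfs dfs -cat accepts a glob, so the whole imported table can be dumped in one command:

$ hdfs dfs -cat /user/jarcec/workers/part-m-*
1,'jack'
2,'vicky'
3,'martin'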
That's all for today. The next lesson will cover exporting.