How to Read and Write a Database with Hadoop

Many applications inevitably have to interact with a database, and Hadoop jobs are no exception: sometimes analysis results need to be stored in a database, or database records need to be exported to HDFS. That said, driving a database directly from MapReduce is fairly rare in real-world development. We usually use Sqoop to import and export the data and Hive to analyze the datasets, because direct access from Hadoop can put heavy load on a relational database. When the database runs on a single machine the situation can get quite bad; against a clustered database the pressure is somewhat lower.

Today, let's see how to read from and write to a database directly with MapReduce on Hadoop 1.2.0. The Hadoop API provides two classes for this, DBInputFormat and DBOutputFormat. In addition, we need to define a JavaBean-like entity class that maps to one row of the table. This class implements both the Writable and DBWritable interfaces, overriding their four methods to read and write the individual fields of each record.

First, let's look at reading data from the database with MR and writing it to HDFS.
A screenshot of the source table is shown below.


[Screenshot: the person table in MySQL]
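Since the screenshot does not reproduce here: a table shaped like the following would match the fields the code below reads. This is only a sketch for reference; the column types are assumed, and the helper class name is made up.

package com.qin.operadb;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/** Hypothetical one-off setup for the person table (column types assumed). */
public class CreatePersonTable {

	public static void main(String[] args) throws Exception {
		Class.forName("com.mysql.jdbc.Driver");
		Connection conn = DriverManager.getConnection(
				"jdbc:mysql://192.168.211.36:3306/test", "root", "qin");
		Statement st = conn.createStatement();
		// id, name, age mirror the fields of the PersonRecoder entity class below
		st.executeUpdate("CREATE TABLE IF NOT EXISTS person ("
				+ "id INT PRIMARY KEY, name VARCHAR(64), age INT)");
		st.close();
		conn.close();
	}
}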
The entity class definition:
package com.qin.operadb;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

/***
 * Entity class wrapping one record
 * of the database table.
 * **/
public class PersonRecoder implements Writable, DBWritable {

	public int id;      // maps to the id column
	public String name; // maps to the name column
	public int age;     // maps to the age column

	@Override
	public void readFields(ResultSet result) throws SQLException {
		// populate the fields from one row of the JDBC ResultSet
		this.id = result.getInt(1);
		this.name = result.getString(2);
		this.age = result.getInt(3);
	}

	@Override
	public void write(PreparedStatement stmt) throws SQLException {
		// bind the fields to the generated INSERT statement
		stmt.setInt(1, id);
		stmt.setString(2, name);
		stmt.setInt(3, age);
	}

	@Override
	public void readFields(DataInput in) throws IOException {
		// deserialize from Hadoop's binary wire format
		this.id = in.readInt();
		this.name = Text.readString(in);
		this.age = in.readInt();
	}

	@Override
	public void write(DataOutput out) throws IOException {
		// serialize to Hadoop's binary wire format
		out.writeInt(id);
		Text.writeString(out, this.name);
		out.writeInt(this.age);
	}

	@Override
	public String toString() {
		return "id: " + id + "  age: " + age + "   name: " + name;
	}

}

The MR driver class; note that this is a map-only job:
package com.qin.operadb;

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReadMapDB {

	/**
	 * Map-only job: each map call receives one database record.
	 * **/
	private static class DBMap extends Mapper<LongWritable, PersonRecoder, LongWritable, Text> {
		@Override
		protected void map(LongWritable key, PersonRecoder value, Context context)
				throws IOException, InterruptedException {
			context.write(new LongWritable(value.id), new Text(value.toString()));
		}
	}

	public static void main(String[] args) throws Exception {

		JobConf conf = new JobConf(ReadMapDB.class);
		// conf.set("mapred.job.tracker", "192.168.75.130:9001");
		// conf.setJar("tt.jar");

		// Note: configureDB must be called before the Job is constructed,
		// otherwise the connection settings are not picked up and the job will error out.
		DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
				"jdbc:mysql://192.168.211.36:3306/test", "root", "qin");

		/** The columns to read **/
		String fields[] = new String[]{"id", "name", "age"};
		/** The job itself **/
		Job job = new Job(conf, "readDB");
		System.out.println("Mode:  " + conf.get("mapred.job.tracker"));

		/** Configure the database input: the person table, ordered by id **/
		DBInputFormat.setInput(job, PersonRecoder.class, "person", null, "id", fields);
		/** Input format and output types **/
		job.setInputFormatClass(DBInputFormat.class);
		job.setOutputKeyClass(LongWritable.class);
		job.setOutputValueClass(Text.class);
		job.setMapperClass(DBMap.class);

		String path = "hdfs://192.168.75.130:9000/root/outputdb";
		FileSystem fs = FileSystem.get(conf);
		Path p = new Path(path);
		if (fs.exists(p)) {
			fs.delete(p, true);
			System.out.println("Output path already existed; deleted!");
		}
		FileOutputFormat.setOutputPath(job, p);
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}

}
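As an aside, DBInputFormat also offers a query-based overload of setInput, handy when the input is a join or a filtered subset rather than a whole table. A minimal sketch that could replace the setInput call in the driver above; both SQL strings are assumptions for illustration:

// Hypothetical query-based input: the first SQL returns the rows to map over,
// the second returns the total row count used to compute the splits.
// An ORDER BY clause is advisable when more than one map task will run,
// since splits are carved out of the query's result set.
DBInputFormat.setInput(job, PersonRecoder.class,
		"SELECT id, name, age FROM person WHERE age > 18 ORDER BY id",
		"SELECT COUNT(*) FROM person WHERE age > 18");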

The dataset written out to the HDFS directory:

[Screenshot: the job's output files under the HDFS directory /root/outputdb]
Reading is relatively straightforward. Two things need attention: first, the JDBC driver jar must be available on every node of the cluster (one way to ship it with the job is sketched below); second, in the main method the driver configuration must come first and the connection details must be complete. Those are the keys to connecting to and reading from the database correctly.
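Instead of hand-copying the MySQL driver jar onto every node, the jar can be shipped through the distributed cache. A sketch for the driver code, assuming the jar has already been uploaded to HDFS at a path of your choosing (the path below is hypothetical):

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;

// In main(), before constructing the Job: add the driver jar (already on HDFS)
// to the classpath of every task across the cluster.
DistributedCache.addFileToClassPath(
		new Path("/lib/mysql-connector-java-5.1.26.jar"), conf);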


Now let's see how to write the results of an MR analysis into the database. The test here is a simple WordCount. First, a look at the result table:


[Screenshot: the wordresult table in MySQL, with columns word and count]
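Similarly, a hypothetical DDL for the result table, matching the job's fields array (word, count); the column types are assumed, and the statement can be run with the same JDBC boilerplate as the CreatePersonTable sketch above:

// Hypothetical result table; `count` is backquoted since it collides with the SQL function name.
st.executeUpdate("CREATE TABLE IF NOT EXISTS wordresult ("
		+ "word VARCHAR(255), `count` INT)");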

The entity class definition:
package com.qin.operadb;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class WordRecoder implements Writable, DBWritable {

	public String word;
	public int count;

	@Override
	public void readFields(ResultSet rs) throws SQLException {
		// populate the fields from one row of a JDBC ResultSet
		this.word = rs.getString(1);
		this.count = rs.getInt(2);
	}

	@Override
	public void write(PreparedStatement ps) throws SQLException {
		// bind the fields to the generated INSERT statement
		ps.setString(1, this.word);
		ps.setInt(2, this.count);
	}

	@Override
	public void readFields(DataInput in) throws IOException {
		// deserialize from Hadoop's binary wire format
		this.word = Text.readString(in);
		this.count = in.readInt();
	}

	@Override
	public void write(DataOutput out) throws IOException {
		// serialize to Hadoop's binary wire format
		Text.writeString(out, this.word);
		out.writeInt(count);
	}

}


The contents of the two input files to be counted:

[Screenshot: the contents of the two input files]
The core MR class:
package com.qin.operadb;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class WriteMapDB {

	/***
	 * The Mapper's four type parameters are, in order:
	 * the input key and value types of the map phase,
	 * then the output key and value types, which must match
	 * the two arguments of context.write(word, one).
	 * **/
	private static class WMap extends Mapper<LongWritable, Text, Text, IntWritable> {

		private Text word = new Text();
		private IntWritable one = new IntWritable(1);

		@Override
		protected void map(LongWritable key, Text value, Context context)
				throws IOException, InterruptedException {

			String line = value.toString();
			// strip the UTF-8 BOM (U+FEFF) that Notepad prepends to files
			if (line.getBytes().length > 0) {
				if ((int) line.charAt(0) == 65279) {
					line = line.substring(1);
				}
			}

			StringTokenizer st = new StringTokenizer(line);
			while (st.hasMoreTokens()) {
				word.set(st.nextToken()); // emit (word, 1) for each token
				context.write(word, one);
			}
		}

	}

	/***
	 * Because the reducer writes to the database and therefore has
	 * different output types, it cannot double as the combiner;
	 * a separate combiner class is needed.
	 * ***/
	private static class WCombine extends Reducer<Text, IntWritable, Text, IntWritable> {

		@Override
		protected void reduce(Text text, Iterable<IntWritable> value, Context context)
				throws IOException, InterruptedException {

			int sum = 0;
			for (IntWritable iw : value) {
				sum += iw.get();
			}
			context.write(text, new IntWritable(sum));
		}

	}

	/**
	 * The Reduce class: sums the counts and emits one WordRecoder
	 * per word, which DBOutputFormat inserts into the table.
	 * **/
	private static class WReduce extends Reducer<Text, IntWritable, WordRecoder, Text> {

		@Override
		protected void reduce(Text key, Iterable<IntWritable> values, Context context)
				throws IOException, InterruptedException {

			int sum = 0;
			for (IntWritable s : values) {
				sum += s.get();
			}

			WordRecoder wr = new WordRecoder();
			wr.word = key.toString();
			wr.count = sum;

			// write the record out to the database
			context.write(wr, new Text());
		}

	}

	public static void main(String[] args) throws Exception {

		JobConf conf = new JobConf(WriteMapDB.class);
		// conf.set("mapred.job.tracker", "192.168.75.130:9001");
		// conf.setJar("tt.jar");

		// Note: configureDB must be called before the Job is constructed,
		// otherwise the connection settings are not picked up and the job will error out.
		/** Establish the database connection settings **/
		DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
				"jdbc:mysql://192.168.211.36:3306/test?characterEncoding=utf-8", "root", "qin");
		String fields[] = new String[]{"word", "count"};

		Job job = new Job(conf, "writeDB");

		System.out.println("Run mode:  " + conf.get("mapred.job.tracker"));

		/** Configure the output table: the job, the table name, and its columns **/
		DBOutputFormat.setOutput(job, "wordresult", fields);

		/** Text files in, database records out **/
		job.setInputFormatClass(TextInputFormat.class);
		job.setOutputFormatClass(DBOutputFormat.class);

		/***
		 * Setting the number of reducers to 1 gives globally aggregated totals,
		 * but in a distributed environment it is best not to set it to 1:
		 * the reducer count is exactly where Hadoop's parallelism comes from.
		 * **/
		// job.setNumReduceTasks(1);

		/** Output key/value classes (these also type the map/combine output) **/
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(IntWritable.class);

		/** Map, Combiner, and Reduce classes **/
		job.setMapperClass(WMap.class);
		job.setCombinerClass(WCombine.class);
		job.setReducerClass(WReduce.class);

		/** Input path on HDFS **/
		FileInputFormat.setInputPaths(job, new Path("hdfs://192.168.75.130:9000/root/input"));

		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}

}
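One practical caveat: DBOutputFormat generates plain INSERT statements, so re-running the job against a table with a primary key fails on duplicate keys. A sketch of one MySQL-specific workaround, overriding the public constructQuery method to issue REPLACE INTO instead (the subclass name is made up):

package com.qin.operadb;

import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

/** Hypothetical MySQL-only variant of DBOutputFormat that upserts via REPLACE INTO. */
public class ReplaceDBOutputFormat<K extends DBWritable, V> extends DBOutputFormat<K, V> {

	@Override
	public String constructQuery(String table, String[] fieldNames) {
		// Reuse the parent's "INSERT INTO ..." statement and swap the verb.
		return super.constructQuery(table, fieldNames).replaceFirst("INSERT", "REPLACE");
	}
}

To use it, call DBOutputFormat.setOutput(job, "wordresult", fields) as before, then job.setOutputFormatClass(ReplaceDBOutputFormat.class) afterwards, since setOutput itself re-registers the stock DBOutputFormat.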

The run log looks like this:
Run mode:  192.168.75.130:9001
14/03/26 20:26:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/03/26 20:27:01 INFO input.FileInputFormat: Total input paths to process : 2
14/03/26 20:27:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/03/26 20:27:01 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/26 20:27:01 INFO mapred.JobClient: Running job: job_201403262328_0006
14/03/26 20:27:02 INFO mapred.JobClient:  map 0% reduce 0%
14/03/26 20:27:10 INFO mapred.JobClient:  map 50% reduce 0%
14/03/26 20:27:11 INFO mapred.JobClient:  map 100% reduce 0%
14/03/26 20:27:18 INFO mapred.JobClient:  map 100% reduce 33%
14/03/26 20:27:19 INFO mapred.JobClient:  map 100% reduce 100%
14/03/26 20:27:20 INFO mapred.JobClient: Job complete: job_201403262328_0006
14/03/26 20:27:20 INFO mapred.JobClient: Counters: 28
14/03/26 20:27:20 INFO mapred.JobClient:   Job Counters 
14/03/26 20:27:20 INFO mapred.JobClient:     Launched reduce tasks=1
14/03/26 20:27:20 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=10345
14/03/26 20:27:20 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/26 20:27:20 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/03/26 20:27:20 INFO mapred.JobClient:     Launched map tasks=2
14/03/26 20:27:20 INFO mapred.JobClient:     Data-local map tasks=2
14/03/26 20:27:20 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=8911
14/03/26 20:27:20 INFO mapred.JobClient:   File Output Format Counters 
14/03/26 20:27:20 INFO mapred.JobClient:     Bytes Written=0
14/03/26 20:27:20 INFO mapred.JobClient:   FileSystemCounters
14/03/26 20:27:20 INFO mapred.JobClient:     FILE_BYTES_READ=158
14/03/26 20:27:20 INFO mapred.JobClient:     HDFS_BYTES_READ=325
14/03/26 20:27:20 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=182065
14/03/26 20:27:20 INFO mapred.JobClient:   File Input Format Counters 
14/03/26 20:27:20 INFO mapred.JobClient:     Bytes Read=107
14/03/26 20:27:20 INFO mapred.JobClient:   Map-Reduce Framework
14/03/26 20:27:20 INFO mapred.JobClient:     Map output materialized bytes=164
14/03/26 20:27:20 INFO mapred.JobClient:     Map input records=6
14/03/26 20:27:20 INFO mapred.JobClient:     Reduce shuffle bytes=164
14/03/26 20:27:20 INFO mapred.JobClient:     Spilled Records=24
14/03/26 20:27:20 INFO mapred.JobClient:     Map output bytes=185
14/03/26 20:27:20 INFO mapred.JobClient:     Total committed heap usage (bytes)=336338944
14/03/26 20:27:20 INFO mapred.JobClient:     CPU time spent (ms)=2850
14/03/26 20:27:20 INFO mapred.JobClient:     Combine input records=20
14/03/26 20:27:20 INFO mapred.JobClient:     SPLIT_RAW_BYTES=218
14/03/26 20:27:20 INFO mapred.JobClient:     Reduce input records=12
14/03/26 20:27:20 INFO mapred.JobClient:     Reduce input groups=8
14/03/26 20:27:20 INFO mapred.JobClient:     Combine output records=12
14/03/26 20:27:20 INFO mapred.JobClient:     Physical memory (bytes) snapshot=464982016
14/03/26 20:27:20 INFO mapred.JobClient:     Reduce output records=8
14/03/26 20:27:20 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2182836224
14/03/26 20:27:20 INFO mapred.JobClient:     Map output records=20

Finally, we can check the aggregated results in the database; a screenshot is shown below:

[Screenshot: the word counts stored in the wordresult table]
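If no GUI client is handy, a quick JDBC query does the same check. A minimal sketch; the connection details mirror the job configuration above, and the helper class name is made up:

package com.qin.operadb;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Hypothetical helper: dump the wordresult table to verify the job's output. */
public class CheckWordResult {

	public static void main(String[] args) throws Exception {
		Class.forName("com.mysql.jdbc.Driver");
		Connection conn = DriverManager.getConnection(
				"jdbc:mysql://192.168.211.36:3306/test", "root", "qin");
		Statement st = conn.createStatement();
		// `count` is backquoted because it collides with the SQL function name
		ResultSet rs = st.executeQuery(
				"SELECT word, `count` FROM wordresult ORDER BY `count` DESC");
		while (rs.next()) {
			System.out.println(rs.getString("word") + "\t" + rs.getInt("count"));
		}
		rs.close();
		st.close();
		conn.close();
	}
}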
With that, we have read from and written to a database with MR. Before testing, make sure your Hadoop cluster is up and working properly.
