
Data Deduplication with MapReduce

程序员文章站 2024-03-18 11:30:04
...

6. Have the 【DeduplicationMapper】 class extend Mapper, specifying the required type parameters, then adjust the body of the map method to match the business logic, as shown below.

package com.simple.deduplication;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class DeduplicationMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit each input line as the map output key; the output value can
        // simply be an empty Text, since only the key matters for deduplication.
        context.write(value, new Text(""));
    }
}
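To see what this mapper contributes, here is a minimal plain-Java sketch (no Hadoop required) of its behavior: every input line becomes a (line, "") pair, so duplicate lines produce identical keys for the shuffle phase to group. The class and method names (`MapSketch`, `mapLines`) are illustrative, not part of the Hadoop API.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class MapSketch {
    // Mirrors the mapper above: one (line, "") pair per input line.
    static List<Map.Entry<String, String>> mapLines(List<String> lines) {
        List<Map.Entry<String, String>> out = new ArrayList<>();
        for (String line : lines) {
            out.add(new SimpleEntry<>(line, "")); // like context.write(value, new Text(""))
        }
        return out;
    }

    public static void main(String[] args) {
        // Duplicate keys are still present at this stage; the shuffle
        // phase is what groups them before the reducer runs.
        System.out.println(mapLines(Arrays.asList("2012-3-1 a", "2012-3-2 b", "2012-3-1 a")));
    }
}
```

Note that the byte-offset key handed to the mapper by TextInputFormat is simply discarded here; only the line content is forwarded.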

7. In the project's src directory, right-click the 【com.simple.deduplication】 package, create a new class named 【DeduplicationReducer】 that extends Reducer, and then add the following code to it.

package com.simple.deduplication;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class DeduplicationReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // The shuffle phase has already grouped identical keys, so writing
        // each key exactly once is all that is needed to deduplicate.
        context.write(key, new Text(""));
    }
}
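The reducer is this short because the framework does the heavy lifting. The following plain-Java sketch (again, no Hadoop required; `ReduceSketch` and `shuffleAndReduce` are illustrative names) models the shuffle grouping map output by key and the reducer writing each grouped key once while ignoring the values:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ReduceSketch {
    static List<String> shuffleAndReduce(List<Map.Entry<String, String>> mapOutput) {
        // Shuffle: group values by key; TreeMap also models MapReduce's
        // default sorting of keys.
        Map<String, List<String>> grouped = new TreeMap<>();
        for (Map.Entry<String, String> pair : mapOutput) {
            grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());
        }
        // Reduce: one output line per distinct key, values discarded.
        return new ArrayList<>(grouped.keySet());
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> mapOutput = List.of(
                new SimpleEntry<>("2012-3-1 a", ""),
                new SimpleEntry<>("2012-3-2 b", ""),
                new SimpleEntry<>("2012-3-1 a", ""));
        System.out.println(shuffleAndReduce(mapOutput)); // prints [2012-3-1 a, 2012-3-2 b]
    }
}
```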

8. In the project's src directory, right-click the 【com.simple.deduplication】 package and create a new driver class named 【TestDeduplication】 with a main method; the test code is shown below:

package com.simple.deduplication;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class TestDeduplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        // Get the job instance
        Job job = Job.getInstance(conf);
        // Set the driver class
        job.setJarByClass(TestDeduplication.class);
        // Wire up the job: mapper, reducer, and output key/value types
        job.setMapperClass(DeduplicationMapper.class);
        job.setReducerClass(DeduplicationReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // Set the job's input and output paths from the command-line arguments
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
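End to end, the job reads every line under the input path, deduplicates, and writes one line per distinct key to its output (e.g. a part-r-00000 file). As a rough local stand-in, assuming ordinary files instead of HDFS and the hypothetical names `LocalDedupRun`/`run`, the same data flow looks like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.TreeSet;

public class LocalDedupRun {
    static void run(Path input, Path output) throws IOException {
        List<String> lines = Files.readAllLines(input);
        // TreeSet stands in for the shuffle's grouping and default key order.
        Files.write(output, new TreeSet<>(lines));
    }

    public static void main(String[] args) throws IOException {
        Path in = Files.createTempFile("dedup-in", ".txt");
        Path out = Files.createTempFile("dedup-out", ".txt");
        Files.write(in, List.of("2012-3-1 a", "2012-3-2 b", "2012-3-1 a"));
        run(in, out);
        System.out.println(Files.readAllLines(out)); // prints [2012-3-1 a, 2012-3-2 b]
    }
}
```

This sketch only illustrates the computation; the real job's value is that the same logic runs in parallel across HDFS blocks.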

Tags: Hadoop lab code