Big Data Hadoop: A KeyValueTextInputFormat Use Case
1. Requirement
Count, for each distinct first word, how many lines in the input file start with that word.
(1) Input data
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
(2) Expected output
banzhang 2
xihuan 2
2. Requirement analysis
KeyValueTextInputFormat treats each line as one record and splits it at the first occurrence of a configurable separator: everything before the separator becomes the key, everything after it becomes the value. With the separator set to a single space, the first word of each line arrives at the mapper as the key, so the mapper can simply emit (key, 1) and the reducer sums the counts per key. For the line "banzhang ni hao", the key is "banzhang" and the value is "ni hao".
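The splitting rule is easy to verify in isolation. Below is a minimal sketch of the rule only (an illustration, not Hadoop's actual KeyValueLineRecordReader source), splitting at the first occurrence of the separator:
public class KvSplitIllustration {
    public static void main(String[] args) {
        String line = "banzhang ni hao";
        String separator = " ";
        int idx = line.indexOf(separator);
        // Everything before the first separator is the key...
        String key = (idx == -1) ? line : line.substring(0, idx);
        // ...and everything after it is the value; if the separator is absent,
        // the whole line is the key and the value is empty
        String value = (idx == -1) ? "" : line.substring(idx + separator.length());
        System.out.println("key = [" + key + "], value = [" + value + "]");
        // Prints: key = [banzhang], value = [ni hao]
    }
}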
3. Code implementation
Mapper:
package com.mapreduce.kvsplit;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class KVSplitMapper extends Mapper<Text, Text, Text, LongWritable> {
    private final LongWritable v = new LongWritable(1);
    @Override
    protected void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1. The key is already the first word of the line; write it out with a count of 1
        context.write(key, v);
    }
}
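Note that the mapper's input types are <Text, Text> rather than the usual <LongWritable, Text>: with KeyValueTextInputFormat the input key is the text before the separator, not the byte offset of the line, and the value (the rest of the line) is simply ignored here.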
Reducer:
package com.mapreduce.kvsplit;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class KVSplitReducer extends Reducer<Text, LongWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0; // running total of lines starting with this word
        // 1. Iterate over the values and accumulate the counts
        for (LongWritable count : values) {
            sum += count.get();
        }
        // 2. Write out (first word, total line count)
        context.write(key, new IntWritable(sum));
    }
}
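The map output value type (LongWritable) and the final output value type (IntWritable) differ here; that is fine as long as they match what the driver declares via setMapOutputValueClass and setOutputValueClass. Using IntWritable throughout would work equally well for counts this small.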
Driver:
package com.mapreduce.kvsplit;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class KVSplitDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // Hardcoded local paths for running inside the IDE on Windows
        args = new String[] {"D:\\hadoop-2.7.1\\winMR\\KVSplit\\input",
                "D:\\hadoop-2.7.1\\winMR\\KVSplit\\output1"};
        // 1. Get a job instance
        Configuration conf = new Configuration();
        // 2. Set the key/value separator to a single space
        //    (KEY_VALUE_SEPERATOR is the constant's actual spelling in Hadoop)
        conf.set(KeyValueLineRecordReader.KEY_VALUE_SEPERATOR, " ");
        Job job = Job.getInstance(conf);
        // 3. Set the jar
        job.setJarByClass(KVSplitDriver.class);
        // 4. Associate the mapper and reducer
        job.setMapperClass(KVSplitMapper.class);
        job.setReducerClass(KVSplitReducer.class);
        // 5. Set the map output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        // 6. Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // 7. Set the InputFormat so each line is split at the first separator
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        // 8. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 9. Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
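To run the job outside the IDE, the hardcoded args assignment would be removed and the packaged jar submitted to the cluster. A minimal sketch, assuming the classes are packaged as kvsplit.jar and the HDFS paths below are placeholders:
hadoop jar kvsplit.jar com.mapreduce.kvsplit.KVSplitDriver /kvsplit/input /kvsplit/output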
4. Run results
The job writes part-r-00000 in the output directory with the expected counts from section 1 (key and count are tab-separated by the default TextOutputFormat):
banzhang	2
xihuan	2