Hadoop Practice | Average Student Scores by Province
Problem Description

Create two tables: the first holds student names and their birth provinces, the second holds student names and their English scores. Write a MapReduce program that computes the average English score of the students from each province. (Prepare your own test data.)

One Approach

I really couldn't figure out how to finish this in a single MapReduce job, so, rookie that I am, I did it in two. If you have a better idea, please leave a comment or send me a message.

Two passes: first join the two tables, then compute the averages.

Note: one record in my data has shanghai mistyped as shagnhai.
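For reference, both input files are plain comma-separated text. Hypothetical sample rows (the real data is whatever you prepare yourself) would look like:

placeTable:
zhangsan,beijing
lisi,shanghai

scoreTable:
zhangsan,85
lisi,90

The first job writes its join result with TextOutputFormat, so the second job's input is one tab-separated "province<TAB>score" line per student, e.g. shanghai followed by a tab and 90.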
Code
JoinTable.java
package examples;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class JoinTable {

    public static class SMMapper extends Mapper<LongWritable, Text, Text, Text> {
        private String flag = null;

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // Remember which input file this split came from, so map() can tag records.
            FileSplit split = (FileSplit) context.getInputSplit();
            flag = split.getPath().getName();
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] val = value.toString().split(",");
            if ("placeTable".equals(flag)) {
                // name -> ("a", province)
                context.write(new Text(val[0]), new Text("a," + val[1]));
            } else if ("scoreTable".equals(flag)) {
                // name -> ("b", score)
                context.write(new Text(val[0]), new Text("b," + val[1]));
            }
        }
    }

    public static class SMReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // For each student, collect the province (tag "a") and the score (tag "b").
            String[] arrStr = new String[2];
            for (Text value : values) {
                String[] val = value.toString().split(",");
                if ("a".equals(val[0])) {
                    arrStr[0] = val[1];
                } else if ("b".equals(val[0])) {
                    arrStr[1] = val[1];
                }
            }
            // Guard against students that appear in only one of the two tables.
            if (arrStr[0] != null && arrStr[1] != null) {
                context.write(new Text(arrStr[0]), new Text(arrStr[1]));
            }
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        String input1 = "hdfs:/score/placeTable";
        String input2 = "hdfs:/score/scoreTable";
        String output = "hdfs:/score/out";

        Configuration conf = new Configuration();
        conf.addResource("classpath:/core-site.xml");
        conf.addResource("classpath:/hdfs-site.xml");

        Job job = Job.getInstance(conf, "JoinTable");
        job.setJarByClass(JoinTable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(SMMapper.class);
        job.setReducerClass(SMReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(input1), new Path(input2)); // load both input datasets
        Path outputPath = new Path(output);
        outputPath.getFileSystem(conf).delete(outputPath, true); // clear any previous output
        FileOutputFormat.setOutputPath(job, outputPath);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
AverageScore.java
package examples;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class AverageScore {

    public static class SMMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Each input line is the join output: "province<TAB>score".
            String str = value.toString();
            char[] arrCh = str.toCharArray();
            int i = 0;
            for (; i < arrCh.length; ++i) {
                if (arrCh[i] == '\t') {
                    break; // found the key/value separator written by TextOutputFormat
                }
            }
            // province -> score
            context.write(new Text(str.substring(0, i)), new Text(str.substring(i + 1)));
        }
    }

    public static class SMReducer extends Reducer<Text, Text, Text, DoubleWritable> {
        private DoubleWritable result = new DoubleWritable();

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Average all scores that arrived for this province.
            double sum = 0;
            int count = 0;
            for (Text val : values) {
                sum += Double.parseDouble(val.toString());
                ++count;
            }
            result.set(sum / count);
            context.write(key, result);
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        String input = "hdfs:/score/out/part-r-00000";
        String output = "hdfs:/score/out/lastout";

        Configuration conf = new Configuration();
        conf.addResource("classpath:/core-site.xml");
        conf.addResource("classpath:/hdfs-site.xml");

        Job job = Job.getInstance(conf, "AverageScore");
        job.setJarByClass(AverageScore.class);
        job.setMapperClass(SMMapper.class);
        job.setReducerClass(SMReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class); // the reducer emits DoubleWritable, not Text
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(input));
        Path outputPath = new Path(output);
        outputPath.getFileSystem(conf).delete(outputPath, true); // clear any previous output
        FileOutputFormat.setOutputPath(job, outputPath);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
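With the two classes packed into a jar (the jar name below is made up; as the logs show, I actually ran the jobs from the IDE with the local runner, which is why Hadoop warns "No job jar file set"), the jobs run back to back and the result can be checked with hdfs dfs:

hadoop jar score-examples.jar examples.JoinTable
hadoop jar score-examples.jar examples.AverageScore
hdfs dfs -cat /score/out/lastout/part-r-00000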
Logs
JoinTable.java
18/03/25 08:34:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/25 08:34:32 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
18/03/25 08:34:32 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
18/03/25 08:34:33 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/03/25 08:34:33 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
18/03/25 08:34:33 INFO input.FileInputFormat: Total input paths to process : 2
18/03/25 08:34:33 INFO mapreduce.JobSubmitter: number of splits:2
18/03/25 08:34:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local371132637_0001
18/03/25 08:34:33 INFO mapreduce.Job: The url to track the job: https://localhost:8080/
18/03/25 08:34:33 INFO mapreduce.Job: Running job: job_local371132637_0001
18/03/25 08:34:33 INFO mapred.LocalJobRunner: OutputCommitter set in config null
18/03/25 08:34:33 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/25 08:34:33 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
18/03/25 08:34:34 INFO mapred.LocalJobRunner: Waiting for map tasks
18/03/25 08:34:34 INFO mapred.LocalJobRunner: Starting task: attempt_local371132637_0001_m_000000_0
18/03/25 08:34:34 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/25 08:34:34 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
18/03/25 08:34:34 INFO mapred.MapTask: Processing split: hdfs://master:9000/score/placeTable:0+104
18/03/25 08:34:34 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
18/03/25 08:34:34 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
18/03/25 08:34:34 INFO mapred.MapTask: soft limit at 83886080
18/03/25 08:34:34 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
18/03/25 08:34:34 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
18/03/25 08:34:34 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
18/03/25 08:34:34 INFO mapred.LocalJobRunner:
18/03/25 08:34:34 INFO mapred.MapTask: Starting flush of map output
18/03/25 08:34:34 INFO mapred.MapTask: Spilling map output
18/03/25 08:34:34 INFO mapred.MapTask: bufstart = 0; bufend = 118; bufvoid = 104857600
18/03/25 08:34:34 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214372(104857488); length = 25/6553600
18/03/25 08:34:34 INFO mapred.MapTask: Finished spill 0
18/03/25 08:34:34 INFO mapred.Task: Task:attempt_local371132637_0001_m_000000_0 is done. And is in the process of committing
18/03/25 08:34:34 INFO mapreduce.Job: Job job_local371132637_0001 running in uber mode : false
18/03/25 08:34:34 INFO mapred.LocalJobRunner: map
18/03/25 08:34:34 INFO mapred.Task: Task 'attempt_local371132637_0001_m_000000_0' done.
18/03/25 08:34:34 INFO mapred.LocalJobRunner: Finishing task: attempt_local371132637_0001_m_000000_0
18/03/25 08:34:34 INFO mapred.LocalJobRunner: Starting task: attempt_local371132637_0001_m_000001_0
18/03/25 08:34:34 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/25 08:34:34 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
18/03/25 08:34:34 INFO mapreduce.Job: map 0% reduce 0%
18/03/25 08:34:34 INFO mapred.MapTask: Processing split: hdfs://master:9000/score/scoreTable:0+65
18/03/25 08:34:35 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
18/03/25 08:34:35 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
18/03/25 08:34:35 INFO mapred.MapTask: soft limit at 83886080
18/03/25 08:34:35 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
18/03/25 08:34:35 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
18/03/25 08:34:35 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
18/03/25 08:34:35 INFO mapred.LocalJobRunner:
18/03/25 08:34:35 INFO mapred.MapTask: Starting flush of map output
18/03/25 08:34:35 INFO mapred.MapTask: Spilling map output
18/03/25 08:34:35 INFO mapred.MapTask: bufstart = 0; bufend = 79; bufvoid = 104857600
18/03/25 08:34:35 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214372(104857488); length = 25/6553600
18/03/25 08:34:35 INFO mapred.MapTask: Finished spill 0
18/03/25 08:34:35 INFO mapred.Task: Task:attempt_local371132637_0001_m_000001_0 is done. And is in the process of committing
18/03/25 08:34:35 INFO mapred.LocalJobRunner: map
18/03/25 08:34:35 INFO mapred.Task: Task 'attempt_local371132637_0001_m_000001_0' done.
18/03/25 08:34:35 INFO mapred.LocalJobRunner: Finishing task: attempt_local371132637_0001_m_000001_0
18/03/25 08:34:35 INFO mapred.LocalJobRunner: map task executor complete.
18/03/25 08:34:35 INFO mapred.LocalJobRunner: Waiting for reduce tasks
18/03/25 08:34:35 INFO mapred.LocalJobRunner: Starting task: attempt_local371132637_0001_r_000000_0
18/03/25 08:34:35 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/25 08:34:35 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
18/03/25 08:34:35 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@2f36f1ce
18/03/25 08:34:35 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=322594400, maxSingleShuffleLimit=80648600, mergeThreshold=212912320, ioSortFactor=10, memToMemMergeOutputsThreshold=10
18/03/25 08:34:35 INFO reduce.EventFetcher: attempt_local371132637_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
18/03/25 08:34:35 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local371132637_0001_m_000000_0 decomp: 134 len: 138 to MEMORY
18/03/25 08:34:35 INFO reduce.InMemoryMapOutput: Read 134 bytes from map-output for attempt_local371132637_0001_m_000000_0
18/03/25 08:34:35 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 134, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->134
18/03/25 08:34:35 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local371132637_0001_m_000001_0 decomp: 95 len: 99 to MEMORY
18/03/25 08:34:35 INFO reduce.InMemoryMapOutput: Read 95 bytes from map-output for attempt_local371132637_0001_m_000001_0
18/03/25 08:34:35 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 95, inMemoryMapOutputs.size() -> 2, commitMemory -> 134, usedMemory ->229
18/03/25 08:34:35 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
18/03/25 08:34:35 INFO mapred.LocalJobRunner: 2 / 2 copied.
18/03/25 08:34:35 INFO reduce.MergeManagerImpl: finalMerge called with 2 in-memory map-outputs and 0 on-disk map-outputs
18/03/25 08:34:35 INFO mapred.Merger: Merging 2 sorted segments
18/03/25 08:34:35 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 215 bytes
18/03/25 08:34:35 INFO reduce.MergeManagerImpl: Merged 2 segments, 229 bytes to disk to satisfy reduce memory limit
18/03/25 08:34:35 INFO reduce.MergeManagerImpl: Merging 1 files, 231 bytes from disk
18/03/25 08:34:35 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
18/03/25 08:34:35 INFO mapred.Merger: Merging 1 sorted segments
18/03/25 08:34:35 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 220 bytes
18/03/25 08:34:35 INFO mapred.LocalJobRunner: 2 / 2 copied.
18/03/25 08:34:35 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
18/03/25 08:34:35 INFO mapreduce.Job: map 100% reduce 0%
18/03/25 08:34:36 INFO mapred.Task: Task:attempt_local371132637_0001_r_000000_0 is done. And is in the process of committing
18/03/25 08:34:36 INFO mapred.LocalJobRunner: 2 / 2 copied.
18/03/25 08:34:36 INFO mapred.Task: Task attempt_local371132637_0001_r_000000_0 is allowed to commit now
18/03/25 08:34:36 INFO output.FileOutputCommitter: Saved output of task 'attempt_local371132637_0001_r_000000_0' to hdfs://master:9000/score/out/_temporary/0/task_local371132637_0001_r_000000
18/03/25 08:34:36 INFO mapred.LocalJobRunner: reduce > reduce
18/03/25 08:34:36 INFO mapred.Task: Task 'attempt_local371132637_0001_r_000000_0' done.
18/03/25 08:34:36 INFO mapred.LocalJobRunner: Finishing task: attempt_local371132637_0001_r_000000_0
18/03/25 08:34:36 INFO mapred.LocalJobRunner: reduce task executor complete.
18/03/25 08:34:36 INFO mapreduce.Job: map 100% reduce 100%
18/03/25 08:34:36 INFO mapreduce.Job: Job job_local371132637_0001 completed successfully
18/03/25 08:34:37 INFO mapreduce.Job: Counters: 35
	File System Counters
		FILE: Number of bytes read=1795
		FILE: Number of bytes written=869992
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=442
		HDFS: Number of bytes written=79
		HDFS: Number of read operations=28
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=8
	Map-Reduce Framework
		Map input records=14
		Map output records=14
		Map output bytes=197
		Map output materialized bytes=237
		Input split bytes=200
		Combine input records=0
		Combine output records=0
		Reduce input groups=7
		Reduce shuffle bytes=237
		Reduce input records=14
		Reduce output records=7
		Spilled Records=28
		Shuffled Maps =2
		Failed Shuffles=0
		Merged Map outputs=2
		GC time elapsed (ms)=13
		Total committed heap usage (bytes)=950009856
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=169
	File Output Format Counters
		Bytes Written=79
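Reading the counters: the 14 map input records collapse into 7 reduce groups and 7 output rows, i.e. each of the 7 students appears exactly once in each table and gets exactly one joined (province, score) line.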
AverageScore.java
18/03/25 08:36:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/25 08:36:43 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
18/03/25 08:36:43 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
18/03/25 08:36:43 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/03/25 08:36:43 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
18/03/25 08:36:43 INFO input.FileInputFormat: Total input paths to process : 1
18/03/25 08:36:43 INFO mapreduce.JobSubmitter: number of splits:1
18/03/25 08:36:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1381911508_0001
18/03/25 08:36:44 INFO mapreduce.Job: The url to track the job: https://localhost:8080/
18/03/25 08:36:44 INFO mapreduce.Job: Running job: job_local1381911508_0001
18/03/25 08:36:44 INFO mapred.LocalJobRunner: OutputCommitter set in config null
18/03/25 08:36:44 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/25 08:36:44 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
18/03/25 08:36:44 INFO mapred.LocalJobRunner: Waiting for map tasks
18/03/25 08:36:44 INFO mapred.LocalJobRunner: Starting task: attempt_local1381911508_0001_m_000000_0
18/03/25 08:36:44 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/25 08:36:44 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
18/03/25 08:36:44 INFO mapred.MapTask: Processing split: hdfs://master:9000/score/out/part-r-00000:0+79
18/03/25 08:36:44 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
18/03/25 08:36:44 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
18/03/25 08:36:44 INFO mapred.MapTask: soft limit at 83886080
18/03/25 08:36:44 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
18/03/25 08:36:44 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
18/03/25 08:36:44 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
18/03/25 08:36:44 INFO mapred.LocalJobRunner:
18/03/25 08:36:44 INFO mapred.MapTask: Starting flush of map output
18/03/25 08:36:44 INFO mapred.MapTask: Spilling map output
18/03/25 08:36:44 INFO mapred.MapTask: bufstart = 0; bufend = 79; bufvoid = 104857600
18/03/25 08:36:44 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214372(104857488); length = 25/6553600
18/03/25 08:36:45 INFO mapred.MapTask: Finished spill 0
18/03/25 08:36:45 INFO mapred.Task: Task:attempt_local1381911508_0001_m_000000_0 is done. And is in the process of committing
18/03/25 08:36:45 INFO mapred.LocalJobRunner: map
18/03/25 08:36:45 INFO mapred.Task: Task 'attempt_local1381911508_0001_m_000000_0' done.
18/03/25 08:36:45 INFO mapred.LocalJobRunner: Finishing task: attempt_local1381911508_0001_m_000000_0
18/03/25 08:36:45 INFO mapred.LocalJobRunner: map task executor complete.
18/03/25 08:36:45 INFO mapred.LocalJobRunner: Waiting for reduce tasks
18/03/25 08:36:45 INFO mapred.LocalJobRunner: Starting task: attempt_local1381911508_0001_r_000000_0
18/03/25 08:36:45 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/25 08:36:45 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
18/03/25 08:36:45 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@83f8595
18/03/25 08:36:45 INFO mapreduce.Job: Job job_local1381911508_0001 running in uber mode : false
18/03/25 08:36:45 INFO mapreduce.Job: map 100% reduce 0%
18/03/25 08:36:45 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=322594400, maxSingleShuffleLimit=80648600, mergeThreshold=212912320, ioSortFactor=10, memToMemMergeOutputsThreshold=10
18/03/25 08:36:45 INFO reduce.EventFetcher: attempt_local1381911508_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
18/03/25 08:36:45 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1381911508_0001_m_000000_0 decomp: 95 len: 99 to MEMORY
18/03/25 08:36:45 INFO reduce.InMemoryMapOutput: Read 95 bytes from map-output for attempt_local1381911508_0001_m_000000_0
18/03/25 08:36:45 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 95, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->95
18/03/25 08:36:45 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
18/03/25 08:36:45 INFO mapred.LocalJobRunner: 1 / 1 copied.
18/03/25 08:36:45 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
18/03/25 08:36:45 INFO mapred.Merger: Merging 1 sorted segments
18/03/25 08:36:45 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 85 bytes
18/03/25 08:36:45 INFO reduce.MergeManagerImpl: Merged 1 segments, 95 bytes to disk to satisfy reduce memory limit
18/03/25 08:36:45 INFO reduce.MergeManagerImpl: Merging 1 files, 99 bytes from disk
18/03/25 08:36:45 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
18/03/25 08:36:45 INFO mapred.Merger: Merging 1 sorted segments
18/03/25 08:36:45 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 85 bytes
18/03/25 08:36:45 INFO mapred.LocalJobRunner: 1 / 1 copied.
18/03/25 08:36:45 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
18/03/25 08:36:45 INFO mapred.Task: Task:attempt_local1381911508_0001_r_000000_0 is done. And is in the process of committing
18/03/25 08:36:45 INFO mapred.LocalJobRunner: 1 / 1 copied.
18/03/25 08:36:45 INFO mapred.Task: Task attempt_local1381911508_0001_r_000000_0 is allowed to commit now
18/03/25 08:36:45 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1381911508_0001_r_000000_0' to hdfs://master:9000/score/out/lastout/_temporary/0/task_local1381911508_0001_r_000000
18/03/25 08:36:45 INFO mapred.LocalJobRunner: reduce > reduce
18/03/25 08:36:45 INFO mapred.Task: Task 'attempt_local1381911508_0001_r_000000_0' done.
18/03/25 08:36:45 INFO mapred.LocalJobRunner: Finishing task: attempt_local1381911508_0001_r_000000_0
18/03/25 08:36:45 INFO mapred.LocalJobRunner: reduce task executor complete.
18/03/25 08:36:46 INFO mapreduce.Job: map 100% reduce 100%
18/03/25 08:36:46 INFO mapreduce.Job: Job job_local1381911508_0001 completed successfully
18/03/25 08:36:46 INFO mapreduce.Job: Counters: 35
	File System Counters
		FILE: Number of bytes read=558
		FILE: Number of bytes written=582457
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=158
		HDFS: Number of bytes written=54
		HDFS: Number of read operations=13
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=6
	Map-Reduce Framework
		Map input records=7
		Map output records=7
		Map output bytes=79
		Map output materialized bytes=99
		Input split bytes=106
		Combine input records=0
		Combine output records=0
		Reduce input groups=4
		Reduce shuffle bytes=99
		Reduce input records=7
		Reduce output records=4
		Spilled Records=14
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=19
		Total committed heap usage (bytes)=463470592
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=79
	File Output Format Counters
		Bytes Written=54
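Here the 7 joined rows fall into 4 reduce groups, i.e. 4 distinct provinces — which presumably counts the misspelled shagnhai as a province of its own.

Postscript: one way to do the whole thing in a single job is a map-side (replicated) join: ship the small placeTable to every mapper through the distributed cache, load it into a HashMap in setup(), let the mapper read only scoreTable and emit (province, score) directly, and average in the reducer. A rough sketch under those assumptions (the class MapSideJoinAverage is my own invention, not part of the programs above; the HDFS paths follow the earlier code):

package examples;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapSideJoinAverage {

    public static class JoinMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        private final Map<String, String> nameToProvince = new HashMap<>();

        @Override
        protected void setup(Context context) throws java.io.IOException, InterruptedException {
            // Load the (small) placeTable from the distributed cache into memory.
            URI[] cacheFiles = context.getCacheFiles();
            Path placePath = new Path(cacheFiles[0]);
            FileSystem fs = placePath.getFileSystem(context.getConfiguration());
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(placePath)))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] val = line.split(",");
                    nameToProvince.put(val[0], val[1]); // name -> province
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws java.io.IOException, InterruptedException {
            // Input records come from scoreTable only: "name,score".
            String[] val = value.toString().split(",");
            String province = nameToProvince.get(val[0]);
            if (province != null) {
                context.write(new Text(province), new DoubleWritable(Double.parseDouble(val[1])));
            }
        }
    }

    public static class AvgReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text key, Iterable<DoubleWritable> values, Context context)
                throws java.io.IOException, InterruptedException {
            double sum = 0;
            int count = 0;
            for (DoubleWritable v : values) {
                sum += v.get();
                ++count;
            }
            context.write(key, new DoubleWritable(sum / count)); // province -> average score
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "MapSideJoinAverage");
        job.setJarByClass(MapSideJoinAverage.class);
        job.addCacheFile(new URI("hdfs:/score/placeTable")); // small table shipped to every mapper
        job.setMapperClass(JoinMapper.class);
        job.setReducerClass(AvgReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.setInputPaths(job, new Path("hdfs:/score/scoreTable")); // only the big table
        Path outputPath = new Path("hdfs:/score/out");
        outputPath.getFileSystem(conf).delete(outputPath, true);
        FileOutputFormat.setOutputPath(job, outputPath);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

This skips the intermediate /score/out pass entirely, but it only works as long as placeTable fits in each mapper's memory.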