
Big Data - Hadoop Ecosystem (18) - MapReduce Framework Principles: WritableComparable Sorting and GroupingComparator Grouping


1. Sort Overview

Sorting is a default behavior of MapReduce: map output is sorted by key during the shuffle phase whether the application needs it or not, lexicographically by default. Inside a map task, the in-memory buffer is sorted with quicksort before each spill, and the spill files are then combined with merge sort. To sort on a custom key, the key class must implement WritableComparable.

2. Sort Categories

MapReduce sorting is commonly divided into four categories:

1) Partial sort: each output file is internally ordered by key (the default behavior).

2) Total sort: a single globally ordered result, obtained by using one reducer or by adding a custom partitioner whose partition boundaries follow the key order.

3) Secondary sort: compareTo compares more than one field, as the GroupingComparator example below does with order id and price.

4) Grouping (auxiliary) sort: a GroupingComparator decides which keys fall into one reduce call, so grouping can be coarser than sorting.

3. WritableComparable Example

The file below is the output of Big Data - Hadoop Ecosystem (12) - Hadoop Serialization and Source Code Tracing. As you can see, it is sorted by key, i.e. lexicographically by phone number.

13470253144    180    180    360
13509468723    7335    110349    117684
13560439638    918    4938    5856
13568436656    3597    25635    29232
13590439668    1116    954    2070
13630577991    6960    690    7650
13682846555    1938    2910    4848
13729199489    240    0    240
13736230513    2481    24681    27162
13768778790    120    120    240
13846544121    264    0    264
13956435636    132    1512    1644
13966251146    240    0    240
13975057813    11058    48243    59301
13992314666    3008    3720    6728
15043685818    3659    3538    7197
15910133277    3156    2936    6092
15959002129    1938    180    2118
18271575951    1527    2106    3633
18390173782    9531    2412    11943
84188413    4116    1432    5548

The fields are phone number, upstream traffic, downstream traffic, and total traffic.

The requirement is to sort the records by total traffic, in descending order.

 

The bean class needs to support serialization and deserialization, and implement the Comparable interface.

package com.nty.writablecomparable;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

/**
 * author nty
 * date time 2018-12-12 16:33
 */

/**
 * Implement the WritableComparable interface.
 * Previously, serializing the bean only required the Writable interface;
 * now the Comparable interface is needed as well.
 *
 * public interface WritableComparable<T> extends Writable, Comparable<T>
 *
 * So we can either implement Writable and Comparable separately,
 * or implement WritableComparable directly.
 */
public class Flow implements WritableComparable<Flow> {

    private long upFlow;
    private long downFlow;
    private long total;

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getTotal() {
        return total;
    }

    public void setTotal(long total) {
        this.total = total;
    }

    // Convenience setter: assigns both flows and derives the total
    public void setFlow(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.total = upFlow + downFlow;
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + total;
    }

    // Sort rule: descending by total traffic
    @Override
    public int compareTo(Flow o) {
        return Long.compare(o.total, this.total);
    }

    // Serialization
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(total);
    }

    // Deserialization: fields must be read in the same order they were written
    @Override
    public void readFields(DataInput in) throws IOException {
        upFlow = in.readLong();
        downFlow = in.readLong();
        total = in.readLong();
    }
}
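
Before wiring the bean into a job, it can help to sanity-check the Writable contract locally. Below is a minimal sketch (the FlowRoundTripTest class is our addition, not part of the original code): it serializes a Flow to a byte array and reads it back, confirming that write() and readFields() agree on field order.

package com.nty.writablecomparable;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlowRoundTripTest {
    public static void main(String[] args) throws IOException {
        Flow original = new Flow();
        original.setFlow(180, 180);

        // Serialize with the same method the framework calls
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // Deserialize into a fresh instance
        Flow copy = new Flow();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy);                      // expected: 180 180 360 (tab-separated)
        System.out.println(original.compareTo(copy));  // expected: 0 (equal totals)
    }
}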

Mapper class

package com.nty.writablecomparable;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * author nty
 * date time 2018-12-12 16:47
 */
public class FlowMapper extends Mapper<LongWritable, Text, Flow, Text> {

    // Reused across map() calls; safe because the framework serializes on write
    private Text phone = new Text();

    private Flow flow = new Flow();


    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 13470253144    180    180    360
        // Split the line
        String[] fields = value.toString().split("\t");

        // Assign the fields
        phone.set(fields[0]);

        flow.setFlow(Long.parseLong(fields[1]), Long.parseLong(fields[2]));

        // Emit with Flow as the key so the framework sorts by total traffic
        context.write(flow, phone);
    }
}

Reducer class

package com.nty.writablecomparable;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * author nty
 * date time 2018-12-12 16:47
 */
// Note the output types: key and value are swapped back to (phone, flow)
public class FlowReducer extends Reducer<Flow, Text, Text, Flow> {

    @Override
    protected void reduce(Flow key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        // Flows that compare equal (same total) arrive in one call, so iterate all phones
        for (Text value : values) {
            context.write(value, key);
        }
    }
}
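
Note that because the Flow key's compareTo only looks at the total, records with equal totals (in the sample, the three phones whose total is 240) are grouped into a single reduce call; that is why the reducer loops over the values. Hadoop refreshes the key object in place as the value iterator advances, so each phone is still written out with its own flow figures.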

Driver class

package com.nty.writablecomparable;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * author nty
 * date time 2018-12-12 16:47
 */
public class FlowDriver {

    public static void main(String[] args) throws Exception {
        // 1. Get a Job instance
        Configuration configuration = new Configuration();
        Job instance = Job.getInstance(configuration);

        // 2. Set the jar
        instance.setJarByClass(FlowDriver.class);


        // 3. Set the Mapper and Reducer
        instance.setMapperClass(FlowMapper.class);
        instance.setReducerClass(FlowReducer.class);

        // 4. Set the output types
        instance.setMapOutputKeyClass(Flow.class);
        instance.setMapOutputValueClass(Text.class);

        instance.setOutputKeyClass(Text.class);
        instance.setOutputValueClass(Flow.class);

        // 5. Set the input and output paths
        FileInputFormat.setInputPaths(instance, new Path("d:\\hadoop_test"));
        FileOutputFormat.setOutputPath(instance, new Path("d:\\hadoop_test_out"));

        // 6. Submit
        boolean b = instance.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
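
With the default single reduce task this produces one fully sorted file. If the job ran with several reducers, each output file would only be sorted internally (a partial sort, per section 2). A sketch of how a total sort could be kept in that case: pair the comparable key with a custom Partitioner that routes key ranges to reducers in order (the class name and thresholds below are our illustration, chosen by eyeballing the sample data):

package com.nty.writablecomparable;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Route descending total-flow ranges to reducers 0..2 so that the
// concatenation of the part files is still globally sorted.
public class FlowRangePartitioner extends Partitioner<Flow, Text> {
    @Override
    public int getPartition(Flow key, Text value, int numPartitions) {
        // Assumes the job runs with job.setNumReduceTasks(3)
        if (key.getTotal() >= 10000) return 0;
        if (key.getTotal() >= 1000)  return 1;
        return 2;
    }
}

It would be registered in the driver with instance.setPartitionerClass(FlowRangePartitioner.class) and instance.setNumReduceTasks(3).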

 

Result

(Figure: the output file, now sorted by total traffic in descending order, from 13509468723 with total 117684 down to the 240-total records.)

 

4. GroupingComparator Example

    Order id       Product id      Price

0000001    pdt_01    222.8
0000002    pdt_05    722.4
0000001    pdt_02    33.8
0000003    pdt_06    232.8
0000003    pdt_02    33.8
0000002    pdt_03    522.8
0000002    pdt_04    122.4

Find the most expensive product in each order.

Requirement analysis:

1) Use the order id and the price together as the key. In the map stage, sort by order id ascending and, for equal order ids, by price descending.

2) In the reduce stage, group by order id with a GroupingComparator; the first record of each group is then the most expensive product (see the worked trace below).
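
With the sample data, the sorted keys reach the reduce stage in the order below; the GroupingComparator then draws a group boundary wherever the order id changes, so the first record of each group is the answer:

0000001    pdt_01    222.8    <- group 1 starts: most expensive in order 0000001
0000001    pdt_02    33.8
0000002    pdt_05    722.4    <- group 2 starts: most expensive in order 0000002
0000002    pdt_03    522.8
0000002    pdt_04    122.4
0000003    pdt_06    232.8    <- group 3 starts: most expensive in order 0000003
0000003    pdt_02    33.8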

 

First define the bean class, implementing the serialization, deserialization, and sorting (compareTo) methods.

package com.nty.groupingcomparator;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

/**
 * author nty
 * date time 2018-12-12 18:07
 */
public class Order implements WritableComparable<Order> {

    private String orderId;

    private String productId;

    private double price;

    public String getOrderId() {
        return orderId;
    }

    // Fluent setters: returning this lets the calls be chained in the mapper
    public Order setOrderId(String orderId) {
        this.orderId = orderId;
        return this;
    }

    public String getProductId() {
        return productId;
    }

    public Order setProductId(String productId) {
        this.productId = productId;
        return this;
    }

    public double getPrice() {
        return price;
    }

    public Order setPrice(double price) {
        this.price = price;
        return this;
    }

    @Override
    public String toString() {
        return orderId + "\t" + productId + "\t" + price;
    }


    @Override
    public int compareTo(Order o) {
        // Sort by order id first, ascending
        int compare = this.orderId.compareTo(o.getOrderId());
        if (0 == compare) {
            // Same order: compare by price, descending
            return Double.compare(o.getPrice(), this.price);
        }
        return compare;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(orderId);
        out.writeUTF(productId);
        out.writeDouble(price);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.orderId = in.readUTF();
        this.productId = in.readUTF();
        this.price = in.readDouble();
    }
}
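
A quick way to watch the compound ordering work without a cluster is to sort a few Order instances in plain Java. A small sketch (the OrderSortTest class is our addition): since Order implements Comparable through WritableComparable, Collections.sort applies exactly the rule the shuffle will use.

package com.nty.groupingcomparator;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OrderSortTest {
    public static void main(String[] args) {
        List<Order> orders = new ArrayList<>();
        orders.add(new Order().setOrderId("0000002").setProductId("pdt_05").setPrice(722.4));
        orders.add(new Order().setOrderId("0000001").setProductId("pdt_02").setPrice(33.8));
        orders.add(new Order().setOrderId("0000001").setProductId("pdt_01").setPrice(222.8));
        orders.add(new Order().setOrderId("0000002").setProductId("pdt_03").setPrice(522.8));

        // Order id ascending, price descending within an order
        Collections.sort(orders);
        orders.forEach(System.out::println);
        // Expected:
        // 0000001  pdt_01  222.8
        // 0000001  pdt_02  33.8
        // 0000002  pdt_05  722.4
        // 0000002  pdt_03  522.8
    }
}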

Mapper class

package com.nty.groupingcomparator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * author nty
 * date time 2018-12-12 18:07
 */
public class OrderMapper extends Mapper<LongWritable, Text, Order, NullWritable> {

    private Order order = new Order();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 0000001    pdt_01    222.8
        // Split the line
        String[] fields = value.toString().split("\t");

        // Populate the Order via the chained setters
        order.setOrderId(fields[0]).setProductId(fields[1]).setPrice(Double.parseDouble(fields[2]));

        // All the data lives in the key, so the value can be NullWritable
        context.write(order, NullWritable.get());
    }
}

GroupingComparator class

package com.nty.groupingcomparator;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

/**
 * author nty
 * date time 2018-12-12 18:08
 */
public class OrderGroupingComparator extends WritableComparator {

    // Tell the parent the concrete key type, and let it create instances (true)
    public OrderGroupingComparator() {
        super(Order.class, true);
    }

    // Be careful to override the right method: WritableComparator has three compare
    // overloads, and the one to override takes two WritableComparable parameters.
    // The default compare delegates to the keys' compareTo, but here the sort rule
    // and the grouping rule differ, so the grouping rule is redefined. Grouping only
    // works if records of one group are adjacent after the sort, which holds here
    // because the sort key starts with the order id.
    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        Order oa = (Order) a;
        Order ob = (Order) b;
        // Group by order id only
        return oa.getOrderId().compareTo(ob.getOrderId());
    }
}

Reducer class

package com.nty.groupingcomparator;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * author nty
 * date time 2018-12-12 18:07
 */
public class OrderReducer extends Reducer<Order, NullWritable, Order, NullWritable> {

    @Override
    protected void reduce(Order key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
        // The first key of each group is already the most expensive product,
        // so there is no need to iterate the values
        context.write(key, NullWritable.get());
    }
}
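
If the requirement were the top N products per order instead of just the first, the reducer would iterate the values. One Hadoop subtlety matters here: within a group, the key object is updated in place as the value iterator advances, so each step exposes the next record's order id and price. A hypothetical variant (the class name and N are our illustration):

package com.nty.groupingcomparator;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class OrderTopNReducer extends Reducer<Order, NullWritable, Order, NullWritable> {

    private static final int N = 2;

    @Override
    protected void reduce(Order key, Iterable<NullWritable> values, Context context)
            throws IOException, InterruptedException {
        int emitted = 0;
        // Records in the group are already sorted by price, descending
        for (NullWritable ignored : values) {
            context.write(key, NullWritable.get()); // key reflects the current record
            if (++emitted == N) {
                break;
            }
        }
    }
}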

Driver class

package com.nty.groupingcomparator;


import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

/**
 * author nty
 * date time 2018-12-12 18:07
 */
public class OrderDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Get a Job instance
        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration);

        // 2. Set the jar
        job.setJarByClass(OrderDriver.class);

        // 3. Set the Mapper and Reducer
        job.setMapperClass(OrderMapper.class);
        job.setReducerClass(OrderReducer.class);

        // 4. Set the custom grouping comparator
        job.setGroupingComparatorClass(OrderGroupingComparator.class);

        // 5. Set the output types
        job.setMapOutputKeyClass(Order.class);
        job.setMapOutputValueClass(NullWritable.class);

        job.setOutputKeyClass(Order.class);
        job.setOutputValueClass(NullWritable.class);

        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path("d:\\hadoop_test"));
        FileOutputFormat.setOutputPath(job, new Path("d:\\hadoop_test_out"));

        // 7. Submit
        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}

Output

The expected output, one line per order with its most expensive product:

0000001    pdt_01    222.8
0000002    pdt_05    722.4
0000003    pdt_06    232.8