
Kafka WordCount with counts accumulated across batches


First, run the following command on Kafka to act as the console producer:

   ./bin/kafka-console-producer.sh  --broker-list 192.168.147.133:9092,192.168.147.134:9092,192.168.147.135:9092 --topic test034
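
If the topic does not exist yet, it can be created first. A minimal sketch, assuming ZooKeeper is reachable at 192.168.147.133:2181 (the same address used as zks in the code below); adjust the partition count and replication factor to your cluster:

   ./bin/kafka-topics.sh --create --zookeeper 192.168.147.133:2181 --partitions 3 --replication-factor 1 --topic test034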

The following code acts as the consumer:

package day14
import org.apache.spark.streaming.dstream.ReceiverInputDStream
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Duration, Seconds, StreamingContext}

/**
  * WordCount using Kafka's receiver-based API.
  *
  * 1. Key point: the data in each DStream batch is first reduced by key, and the
  *    per-batch results are then accumulated across batches.
  * 2. updateStateByKey takes an updateFunc as its parameter, which is a function:
  *    Seq[V] holds all values for the current key in the current batch, Option[S]
  *    is the key's previous state, and the return value is the new state.
  */
object KafkaWC {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("wordcount").setMaster("local[2]")
    val ssc = new StreamingContext(conf,Seconds(3))
    // ZooKeeper quorum used by the receiver-based Kafka consumer
    val zks = "192.168.147.133:2181"
    // updateStateByKey needs a checkpoint directory to persist state between batches
    ssc.checkpoint("hdfs://spark01:9000/aaaa")
    /* Optional consumer properties, e.g. to read the topic from the earliest offset:
       val prop = new Properties()
       prop.put("auto.offset.reset", "smallest") */

    // Consumer group
    val groupId: String = "gp1"
    // Topic map: the key is the topic name, the value is the number of receiver threads
    val topics: Map[String, Int] = Map[String, Int]("test034" -> 1)
    // The received data arrives as (key, value) pairs:
    // the key is the Kafka message key (null for the console producer), the value is the message body
    val data: ReceiverInputDStream[(String, String)] = KafkaUtils.createStream(ssc, zks, groupId, topics)
    // Process the data: split each message into words and map every word to (word, 1)
    val words = data.flatMap(_._2.split(" "))
    val pairs = words.map((_, 1))
    // updateStateByKey sums this batch's counts per key and adds them to the key's previous state
    pairs.updateStateByKey((currValues: Seq[Int], preValue: Option[Int]) => {
      val currValueSum = currValues.sum
      Some(currValueSum + preValue.getOrElse(0))
    }).print(50)   // print() shows 10 records by default; the argument sets how many to print
    /* Alternative: reduce within each batch with reduceByKey(_ + _) first, then use the
       updateStateByKey overload that takes a HashPartitioner(ssc.sparkContext.defaultParallelism)
       to control how the state RDDs are partitioned. */
    ssc.start()
    ssc.awaitTermination()
  }
}
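
The receiver-based KafkaUtils.createStream used above lives in the separate spark-streaming-kafka module, so the project needs that dependency in addition to spark-streaming. A minimal build.sbt sketch, assuming Spark 1.6.x with a matching Scala version (the artifact name and version are assumptions; pick the ones that match your cluster):

// build.sbt (sketch; versions below are assumptions, match them to your Spark build)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming"       % "1.6.3",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.6.3"
)

Once the job is running, every word typed into the producer console should appear in the driver output with a count that keeps growing across batches, because updateStateByKey adds each 3-second batch's counts to the state kept in the checkpoint directory.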