
Structured Streaming


What is Structured Streaming

Structured Streaming is a scalable, fault-tolerant stream-processing engine (broadly, stream processing expressed through Spark's SQL operations). It is built on top of Spark SQL, letting users express a streaming computation the same way they would express a batch computation on static data. Under the hood, Structured Streaming uses the Spark SQL engine to run the computation incrementally and continuously, updating the final result as streaming data keeps arriving. Users can use the Dataset/DataFrame API to handle the common streaming concerns: aggregations, event-time windows, stream-to-batch and stream-to-stream joins, and so on. Through its checkpoint and Write-Ahead Log mechanisms, Structured Streaming achieves end-to-end exactly-once fault-tolerance semantics. In short, Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing without requiring much intervention from the user.

By default, Structured Streaming runs on a micro-batch execution engine (the same model as DStreams), with end-to-end latencies around 100 ms. Spark also offers other processing models to choose from: fixed-interval micro-batches, a one-time micro-batch, and continuous processing with latencies as low as ~1 ms (experimental).
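A minimal sketch of how these trigger modes are selected on the writer, assuming df is a streaming DataFrame like the ones built in the examples below (Trigger lives in org.apache.spark.sql.streaming):

import org.apache.spark.sql.streaming.Trigger

// Default: the next micro-batch starts as soon as the previous one finishes
df.writeStream.format("console").start()
// Fixed-interval micro-batches
df.writeStream.trigger(Trigger.ProcessingTime("2 seconds")).format("console").start()
// One-time micro-batch: process the data currently available, then stop
df.writeStream.trigger(Trigger.Once()).format("console").start()
// Continuous processing (experimental); the interval here is the checkpoint interval
df.writeStream.trigger(Trigger.Continuous("1 second")).format("console").start()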

Quick Start

  • pom.xml file
<properties>
    <spark.version>2.4.3</spark.version>
    <scala.version>2.11</scala.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
</dependencies>
  • Driver program
// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("StructuredNetworkWordCount")
.master("local[5]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame (elaborated later)
val lines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Apply SQL-style operations via the API (elaborated later)
val wordCounts = lines.as[String].flatMap(_.split(" "))
.groupBy("value").count()
// 4. Build the StreamingQuery and write the results out (elaborated later)
val query = wordCounts.writeStream
.outputMode("complete") // emit the entire result table on every trigger, comparable to updateStateByKey
.format("console")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()
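To try the example, first start a socket server on the centos host, e.g. with nc -lk 9999, then type lines of space-separated words; the console sink prints the updated word counts after every micro-batch.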

Basic Concepts

The key idea in Structured Streaming is to treat a live data stream as a table that is being continuously appended: the input stream is viewed as an Input Table, and every data item arriving on the stream is like a new row appended to that Input Table.


A query on the Input Table generates a Result Table. At every trigger interval (say, every second), new rows are appended to the Input Table and the Result Table is updated accordingly; whenever the Result Table changes, we want to write the changed result rows to an external sink.


The output is defined as what gets written to external storage, and the following output modes are supported (a short sketch of selecting each mode follows the list):

  • Complete Mode (stateful) - the entire updated Result Table is written to external storage; it is up to the storage connector to decide how to handle writing the whole table
  • Update Mode (stateful) - only the rows of the Result Table that were updated since the last trigger are written to external storage (since Spark 2.1.1); if the query contains no aggregations, this mode is equivalent to Append Mode
  • Append Mode (stateless) - only the new rows appended to the Result Table since the last trigger are written to external storage; this applies only to queries where existing rows of the Result Table are not expected to change (Append can also be used with aggregations, but only window-based ones - discussed later)
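For reference, each mode is chosen on the writer; a minimal sketch reusing wordCounts from the quick start (the string forms "complete", "update" and "append" are interchangeable with the OutputMode constants):

import org.apache.spark.sql.streaming.OutputMode

wordCounts.writeStream.outputMode(OutputMode.Complete()) // full result table every trigger
wordCounts.writeStream.outputMode(OutputMode.Update())   // only rows changed since the last trigger
wordCounts.writeStream.outputMode(OutputMode.Append())   // only rows that will never change again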

Notes

  • Spark does not store the data in the Input Table: once the data has been processed, the received records are discarded. Spark keeps only the intermediate results (state) of the computation

  • The benefit of Structured Streaming is that the user does not have to maintain the computation's state by hand (in contrast to stream processing with Storm): Spark itself provides end-to-end exactly-once fault-tolerance semantics

Input and Output

Input

Kafka source

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_${scala.version}</artifactId>
    <version>${spark.version}</version>
</dependency>
// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("StructuredNetworkWordCount")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame from Kafka
val df = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "centos:9092")
    .option("subscribe", "topic01")
    .load()
// 3. Apply SQL-style operations via the API
import org.apache.spark.sql.functions._
val wordCounts = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "partition", "offset", "CAST(timestamp AS LONG)")
    .flatMap(row => row.getAs[String]("value").split("\\s+"))
    .map((_, 1))
    .toDF("word", "num")
    .groupBy($"word")
    .agg(sum($"num"))
// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = wordCounts.writeStream
    .outputMode(OutputMode.Update())
    .format("console")
    .start()
// 5. Block until the streaming query terminates
query.awaitTermination()
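Note that every row produced by the Kafka source has a fixed schema - key and value (binary), topic, partition, offset, timestamp, timestampType - which is why step 3 has to CAST key and value to STRING before processing them.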

File Source

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("StructuredNetworkWordCount")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame from a directory of JSON files
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}
import org.apache.spark.sql.streaming.OutputMode
val schema = new StructType()
.add("id",IntegerType)
.add("name",StringType)
.add("age",IntegerType)
.add("dept",IntegerType)
val df = spark.readStream
.schema(schema)
.format("json")
.load("file:///D:/demo/json")
// 3. SQL operations
// (omitted)
// 4. Build the StreamingQuery and write the results out
val query = df.writeStream
.outputMode(OutputMode.Update())
.format("console")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()
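Note that the load path should be a directory rather than a single file: the file source picks up files as they are atomically placed into that directory, and, unlike most batch reads, the schema must be supplied explicitly up front as shown above.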

Output

File sink

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("filesink")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame from a socket
val lines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Build the DataFrame
val wordCounts=lines.as[String].flatMap(_.split("\\s+"))
.map((_,1))
.toDF("word","num")
// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = wordCounts.writeStream
.outputMode(OutputMode.Append())
.option("path", "file:///D:/write/json")
.option("checkpointLocation", "file:///D:/checkpoints") // a checkpoint location is required
.format("json")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()

The file sink supports only Append Mode; it is typically used for data-cleansing (ETL) output, not for writing the results of aggregations.

Kafka Sink

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("filesink")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame from a socket
val lines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Build the DataFrame
// sample input line: 001 zs iphone 5000   (fields: id name item cost)
//import org.apache.spark.sql.functions._
val userCost=lines.as[String].map(_.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(3).toDouble))
.toDF("id","name","cost")
.groupBy("id","name")
.sum("cost")
.as[(String,String,Double)]
.map(t=>(t._1,t._2+"\t"+t._3))
.toDF("key","value") // the output must contain a string-typed value column (key is optional)
// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = userCost.writeStream
    .outputMode(OutputMode.Update())
    .format("kafka")
    .option("topic","topic02")
    .option("kafka.bootstrap.servers","centos:9092")
    .option("checkpointLocation","file:///D:/checkpoints01") // checkpoint directory for the query
    .start()
// 5. Block until the streaming query terminates
query.awaitTermination()

The Kafka sink supports the Append, Update, and Complete output modes.

Foreach sink

The foreach and foreachBatch operations let you apply arbitrary operations and write custom logic on the output of a streaming query. Their use cases differ slightly: foreach allows custom write logic on every row, whereas foreachBatch allows arbitrary operations and custom logic on the output of each micro-batch.

ForeachBatch

foreachBatch(...) lets you specify a function that is executed on the output data of every micro-batch of the streaming query. It is supported in Scala, Java, and Python since Spark 2.4. The function takes two parameters: a DataFrame or Dataset containing the micro-batch's output data, and the micro-batch's unique ID.

// 2. Create a streaming DataFrame from a socket
val lines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Build the DataFrame
// sample input line: 001 zhangsan iphone 15000
val userCost=lines.as[String].map(_.split("\\s+"))
    .map(ts=>(ts(0),ts(1),ts(3).toDouble))
    .toDF("id","name","cost")
    .groupBy("id","name")
    .sum("cost")
    .as[(String,String,Double)]
    .map(t=>(t._1,t._2+"\t"+t._3))
    .toDF("key","value") // the output must contain a string-typed value column
userCost.printSchema()
// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.{Dataset, Row}
import org.apache.spark.sql.streaming.OutputMode
val query = userCost.writeStream
.outputMode(OutputMode.Update())
.foreachBatch((ds: Dataset[Row], batchId) => {
    ds.show()
})
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()

With foreachBatch you can:

  • Reuse existing batch Writers/Sinks to write the data to external systems
  • Write the dataset out to multiple locations:
streamingDF.writeStream.foreachBatch { (batchDF: DataFrame, batchId: Long) =>
  batchDF.persist() // cache the batch so it is not recomputed for each write
  batchDF.write.format(...).save(...) // location 1
  batchDF.write.format(...).save(...) // location 2
  batchDF.unpersist() // release the cache
}
  • Run additional SQL operations on the Dataset/DataFrame of each batch, as in the sketch below
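For the last point, a minimal sketch (assuming streamingDF and the imports from the examples above) that registers each micro-batch as a temporary view and queries it with SQL:

streamingDF.writeStream.foreachBatch { (batchDF: DataFrame, batchId: Long) =>
  // register the micro-batch so it can be queried with SQL
  batchDF.createOrReplaceTempView("updates")
  // run an arbitrary SQL statement against this batch
  batchDF.sparkSession.sql("SELECT COUNT(*) FROM updates").show()
}.start()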
Foreach

If foreachBatch does not fit, you can express custom write logic with foreach by implementing a custom writer with three methods: open, process, and close. Since Spark 2.4, foreach is available in Scala, Java, and Python.

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("filesink")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame from a socket
val lines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Build the DataFrame
// sample input line: 001 zhangsan iphone 15000
import org.apache.spark.sql.functions._
val userCost=lines.as[String].map(_.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(3).toDouble))
.toDF("id","name","cost")
.groupBy("id","name")
.agg(sum("cost") as "cost")
// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.{ForeachWriter, Row}
import org.apache.spark.sql.streaming.OutputMode
val query = userCost.writeStream
.outputMode(OutputMode.Update())
.foreach(new ForeachWriter[Row] {
    override def open(partitionId: Long, epochId: Long): Boolean = { // open a connection/transaction here
        // println(s"open:${partitionId},${epochId}") 
        true // when true is returned, the engine calls process for each row and then close
    }
    override def process(value: Row): Unit = {
        val id=value.getAs[String]("id")
        val name=value.getAs[String]("name")
        val cost=value.getAs[Double]("cost")
        println(s"${id},${name},${cost}") // write the row out; a transactional sink would commit in close
    }
    override def close(errorOrNull: Throwable): Unit = {
        //println("close:"+errorOrNull) // if errorOrNull != null, the write failed: roll back; otherwise commit
    }
})
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()
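The (partitionId, epochId) pair passed to open uniquely identifies a block of data: when a failed micro-batch is retried, the same block is replayed with the same identifiers, so a transactional sink can use them to deduplicate writes and obtain exactly-once guarantees.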

Window Operations (time intervals closed at the start, open at the end)

Quick Start

Aggregations over a sliding event-time window are straightforward with Structured Streaming and very similar to grouped aggregations: the event time is simply a column embedded in the data itself. For example, with a 4-second window sliding every 2 seconds, an event with timestamp t = 5 s falls into the two windows [2 s, 6 s) and [4 s, 8 s).


// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("windowWordcount")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame from a socket
val lines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Apply SQL-style operations via the API
// sample input line: a 1570775460000   (a word followed by an epoch-millis timestamp)
import java.sql.Timestamp
import org.apache.spark.sql.functions._
val wordCounts = lines.as[String].map(_.split("\\s+"))
.map(ts => (ts(0), new Timestamp(ts(1).toLong), 1))
.toDF("word", "timestamp", "num")
.groupBy(
    window($"timestamp", "4 seconds", "2 seconds"),
    $"word")
.agg(sum("num") as "sum")
.map(row=> {
    val start = row.getAs[Row]("window").getAs[Timestamp]("start")
    val end = row.getAs[Row]("window").getAs[Timestamp]("end")
    val word = row.getAs[String]("word")
    val sum = row.getAs[Long]("sum")
    (start,end,word,sum)
}).toDF("start","end","word","sum")

wordCounts.printSchema()

// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = wordCounts.writeStream
.outputMode(OutputMode.Complete())
.format("console")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()

Late Data & Watermarking

In window-based stream processing, data may reach the compute nodes out of order because of network transfer. Say a node has already read data for 12:11 and finished computing, which means the 12:00-12:10 window has already fired; data arriving afterwards should normally carry timestamps later than 12:11, but in practice network delays or failures produce out-of-order records, e.g. a record generated at 12:04 arriving at 12:11. Spark then has to fold that 12:04 record into the 12:00-12:10 window, which means it must keep that window's computation state around. By default, Spark retains window state indefinitely so that out-of-order data can always be added to its window.


Unlike batch jobs, a streaming job runs 24/7, so keeping intermediate state forever is impractical; we need a way to tell the engine when state can be discarded. Spark 2.1 introduced the watermarking mechanism for exactly this purpose. The watermark is computed as: max event time seen by the engine - late threshold. Once a window's end time T < watermark, Spark may drop that window's state; for example, with a 10-minute threshold, after seeing an event at 12:21 the watermark is 12:11, so the state of the window ending at 12:10 may be dropped. Data that subsequently falls into such an overtaken window is called late data. Because the window has been passed by the watermark, its state is no longer guaranteed to exist (Spark tries to clean up overtaken window state), so the later a record arrives, the lower the chance it will still be processed.

In general, a window fires when the watermark >= the window's end time, and what it emits is then a final result. In Structured Streaming, however, the watermark by itself only controls when the engine may delete a window's intermediate state. If you want final results only, i.e. output produced solely once the watermark >= the window's end time, you must combine watermarking with the Append output mode.

Note: when watermarking is used, the Complete output mode cannot be used (Complete must retain every result row, so no state may ever be dropped).

Update output


  • Output conditions:

    1. New data has fallen into the window
    2. The watermark has not yet passed the window's end time
  • Note

    A window may fire multiple times, but once the watermark passes the window's end time, data arriving for that window may be dropped

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("windowWordcount")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame from a socket
val lines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Apply SQL-style operations via the API
// sample input line: a 1570775460000   (a word followed by an epoch-millis timestamp)
import java.sql.Timestamp
import org.apache.spark.sql.functions._
val wordCounts = lines.as[String].map(_.split("\\s+"))
.map(ts => (ts(0), new Timestamp(ts(1).toLong), 1))
.toDF("word", "timestamp", "num")
.withWatermark("timestamp", "1 second")
.groupBy(
    window($"timestamp", "4 seconds", "2 seconds"),
    $"word")
.agg(sum("num") as "sum")
.map(row=> {
    val start = row.getAs[Row]("window").getAs[Timestamp]("start")
    val end = row.getAs[Row]("window").getAs[Timestamp]("end")
    val word = row.getAs[String]("word")
    val sum = row.getAs[Long]("sum")
    (start,end,word,sum)
}).toDF("start","end","word","sum")

wordCounts.printSchema()

// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = wordCounts.writeStream
.outputMode(OutputMode.Update())
.format("console")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()

Append


  • Output condition

    The watermark must be >= the window's end time

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("windowWordcount")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Create a streaming DataFrame from a socket
val lines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Apply SQL-style operations via the API
// sample input line: a 1570775460000   (a word followed by an epoch-millis timestamp)
import java.sql.Timestamp
import org.apache.spark.sql.functions._
val wordCounts = lines.as[String].map(_.split("\\s+"))
.map(ts => (ts(0), new Timestamp(ts(1).toLong), 1))
.toDF("word", "timestamp", "num")
.withWatermark("timestamp", "1 second")
.groupBy(
    window($"timestamp", "4 seconds", "2 seconds"),
    $"word")
.agg(sum("num") as "sum")
.map(row=> {
    val start = row.getAs[Row]("window").getAs[Timestamp]("start")
    val end = row.getAs[Row]("window").getAs[Timestamp]("end")
    val word = row.getAs[String]("word")
    val sum = row.getAs[Long]("sum")
    (start,end,word,sum)
}).toDF("start","end","word","sum")

wordCounts.printSchema()

// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = wordCounts.writeStream
.outputMode(OutputMode.Append())
.format("console")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()

Strictly speaking, Spark provides no mechanism for handling too-late data (what other stream-processing frameworks call late data; what Spark calls late data is really out-of-order data): the default policy is simply to drop it. Storm and Flink both offer ways to handle too-late records, an area where Spark still has room to improve.

Join Operations

Structured Streaming supports joining a streaming Dataset/DataFrame not only with a static Dataset/DataFrame but also with another streaming Dataset/DataFrame.

Stream-static Joins

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("windowWordcount")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Read the order log as a stream
// sample input line: 001 apple 2 4.5
val orderLines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// 3. Build the DataFrames
import org.apache.spark.sql.functions._
val orderDF = orderLines.as[String].map(_.split("\\s+"))
.map(ts => (ts(0), ts(1), ts(2).toInt, ts(3).toDouble))
.toDF("id", "name", "count", "price")
val userDF = List(("001","zhangsan"),("002","lisi"),("003","wangwu")).toDF("uid","name")
// equivalent to userDF.join(orderDF, expr("id=uid"), "right_outer")
val result = userDF.join(orderDF,$"id" ===$"uid","right_outer")
// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = result.writeStream
.outputMode(OutputMode.Append())
.format("console")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()

A stream-static join supports inner and left_outer; a static-stream join supports inner and right_outer (in both cases the preserved outer side is the streaming Dataset).

Stream-stream Joins

Spark 2.3 added support for stream-stream joins, i.e. joining two streaming Datasets/DataFrames. The core challenge is that the system has to buffer past input from one stream so it can be matched against future input from the other; that means Spark must hold the streams' intermediate state, which puts significant pressure on memory. Structured Streaming therefore applies the watermarking concept here as well, bounding the lifetime of that state and telling Spark when intermediate join state can be released.

An inner join can use any column as the join condition; however, as the streaming query runs, the join state keeps growing, because all past input must be stored to match against data that has yet to arrive. To avoid unbounded state, you generally need additional join conditions, e.g. a constraint that old records further than some time threshold from new records can never match, so stale state can be deleted. In short, the following steps are required:

  • Define watermark delays on both inputs, so the engine knows how far apart in time the two streams may drift
  • Define a constraint on event time across the two inputs, so the engine can figure out which old rows are no longer needed for matching; the time range can be bounded in either of two ways:
    • a time-range join condition, e.g. JOIN ON leftTime BETWEEN rightTime AND rightTime + INTERVAL 1 HOUR (range join)
    • a join on event-time windows, e.g. JOIN ON leftTimeWindow = rightTimeWindow (window join)

Range join

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("windowWordcount")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Read the logs as streams
// sample order line: 001 apple 2 4.5 1566113400000   (= 2019-08-18 15:30:00)
val orderLines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// sample login line: 001 zhangsan 1566113400000   (= 2019-08-18 15:30:00)
val userLogin = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 8888)
.load()
// 3. Build the DataFrames
import java.sql.Timestamp
import org.apache.spark.sql.functions._
val userDF = userLogin.as[String].map(_.split("\\s+"))
.map(ts => (ts(0), ts(1),new Timestamp(ts(2).toLong)))
.toDF("uid","uname","tlogin")
.withWatermark("tlogin","1 seconds")
val orderDF = orderLines.as[String].map(_.split("\\s+"))
.map(ts => (ts(0), ts(1), ts(2).toInt, ts(3).toDouble,new Timestamp(ts(4).toLong)))
.toDF("id", "name", "count", "price","torder")
.withWatermark("torder","1 second")
val joinExpr=
	 """
        id=uid AND
        torder >=  tlogin AND
        torder <=  tlogin +  interval 2 seconds
      """
val result=userDF.join(orderDF, expr(joinExpr),"left_outer")
// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = result.writeStream
.outputMode(OutputMode.Append())
.format("console")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()

Window join

// 1. Create the SparkSession
val spark = SparkSession
.builder
.appName("windowWordcount")
.master("local[6]")
.getOrCreate()
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
// 2. Read the logs as streams
// sample order line: 001 apple 2 4.5 1566113400000   (= 2019-08-18 15:30:00)
val orderLines = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 9999)
.load()
// sample login line: 001 zhangsan 1566113400000   (= 2019-08-18 15:30:00)
val userLogin = spark.readStream
.format("socket")
.option("host", "centos")
.option("port", 8888)
.load()
// 3. Build the DataFrames
import java.sql.Timestamp
import org.apache.spark.sql.functions._
val userDF = userLogin.as[String].map(_.split("\\s+"))
.map(ts => (ts(0), ts(1),new Timestamp(ts(2).toLong)))
.toDF("uid","uname","tlogin")
.withWatermark("tlogin","1 seconds")
.withColumn("leftWindow",window($"tlogin","10 seconds","5 seconds"))
val orderDF = orderLines.as[String].map(_.split("\\s+"))
.map(ts => (ts(0), ts(1), ts(2).toInt, ts(3).toDouble,new Timestamp(ts(4).toLong)))
.toDF("id", "name", "count", "price","torder")
.withWatermark("torder","1 second")
.withColumn("rightWindow",window($"torder","10 seconds","5 seconds"))
val joinExpr=
	 """
        id=uid AND leftWindow = rightWindow
      """
val result=userDF.join(orderDF, expr(joinExpr))
// 4. Build the StreamingQuery and write the results out
import org.apache.spark.sql.streaming.OutputMode
val query = result.writeStream
.outputMode(OutputMode.Append())
.format("console")
.start()
// 5. Block until the streaming query terminates
query.awaitTermination()

Packaging and Submitting the Job to the Cluster

Input records, read from Kafka topic topic01:

id name count price time category
1 zhangsan 2 3.5 2019-10-10 10:10:00 fruit
1 zhangsan 1 1500 2019-10-10 10:10:00 phone
...

Goal: compute each user's total annual spend and write the result back to Kafka, e.g.:

1 zhangsan fruit  2000
1 zhangsan phone  5000
...
import java.util.regex.Pattern
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.OutputMode

object UserOrderAnalyzer {
  def main(args: Array[String]): Unit = {
    // 1. Create the SparkSession
    val spark = SparkSession.builder()
      .appName("UserOrderAnalyzer")
      .master("spark://centos:7077")
      .getOrCreate()
    import  spark.implicits._
    spark.sparkContext.setLogLevel("FATAL")
    // 2. Read the order log from Kafka
    // sample line: 1 zhangsan 2 3.5 2019-10-10 10:10:00 fruit
    val userOrderLog = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "CentOS:9092")
      .option("subscribe", "topic01")
      .load().selectExpr("CAST(value AS STRING)")
    // register a scalar UDF that computes the subtotal of a single purchase (count * price)
    spark.udf.register("order_cost",(count:Int,price:Double)=>{
      count*price
    })
    // 3. Build the DataFrame
    import org.apache.spark.sql.functions._
    val userOrderDF = userOrderLog.as[String]
      .filter(isLegal(_))
      .map(parse(_))
      .toDF("id", "name", "count", "price", "year", "channel")
      .selectExpr("id","name","order_cost(count,price) as cost","year","channel")
      .groupBy($"id", $"name", $"year")
      .agg(sum("cost") as "cost")
      .as[(Int,String,String,Double)]
      .map(t=>(t._1+":"+t._2,t._3+"\t"+t._4))
      .toDF("key","value")
    // 4. Build the StreamingQuery and write the results to Kafka
    val query= userOrderDF.writeStream
             .outputMode(OutputMode.Update())
             .option("kafka.bootstrap.servers", "centos:9092")
             .option("topic", "topic02")
             .option("checkpointLocation","hdfs:///checkpoints-UserOrderAnalyzer")
             .format("kafka")
     .start()
    // 5. Block until the streaming query terminates
    query.awaitTermination()
  }
  // check whether a log line is well-formed
  def isLegal(log:String):Boolean={
    val regex="(\\d+)\\s(.*)\\s(\\d+)\\s(\\d+\\.\\d+)\\s(\\d{4}).*\\d{2}\\s(.*)"
    val pattern = Pattern.compile(regex)
    val matcher = pattern.matcher(log)
    matcher.matches()
  }
  // parse a validated log line into the tuple we need
  def parse(log:String):(Int,String,Int,Double,String,String)={ // the () groups in the regex capture the fields
    val regex="(\\d+)\\s(.*)\\s(\\d+)\\s(\\d+\\.\\d+)\\s(\\d{4}).*\\d{2}\\s(.*)" 
    val pattern = Pattern.compile(regex)
    val matcher = pattern.matcher(log)
    matcher.matches() // returns Boolean; must be called before group() can be read
    (matcher.group(1).toInt, matcher.group(2), matcher.group(3).toInt, matcher.group(4).toDouble, matcher.group(5), matcher.group(6)) // (Int, String, Int, Double, String, String)
  }
}
  • Update the dependencies in the project's pom.xml
<properties>
    <spark.version>2.4.3</spark.version>
    <scala.version>2.11</scala.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_${scala.version}</artifactId>
        <version>${spark.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql-kafka-0-10_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <!-- compile the Scala sources into the jar during package -->
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>4.0.1</version>
            <executions>
                <execution>
                    <id>scala-compile-first</id>
                    <phase>process-resources</phase>
                    <goals>
                        <goal>add-source</goal>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <!-- shade the dependency jars into a fat jar -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
  • Submit the job (fat jar)
[root@centos spark-2.4.3]# ./bin/spark-submit \
		--master spark://centos:7077 \
		--deploy-mode cluster \
		--class com.baizhi.demo.UserOrderAnalyzer \
		--total-executor-cores 6 \
		/root/finalspark-1.0-SNAPSHOT.jar
  • Pull dependency jars remotely (over the network; not recommended)
[root@centos spark-2.4.3]# ./bin/spark-submit \
		--master spark://centos:7077 \
		--deploy-mode cluster \
		--class com.baizhi.demo.UserOrderAnalyzer \
		--total-executor-cores 6 \
		--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.3 \
		/root/original-finalspark-1.0-SNAPSHOT.jar

In most environments, servers on a corporate intranet are not allowed to reach the public internet, which is why the --packages approach is not recommended.



Reference: http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#window-operations-on-event-time