Some Pitfalls of Spark (Java)
程序员文章站, 2022-05-03 15:50:59
Originally published 2019-01-06 20:47.
1. org.apache.spark.SparkException: Task not serializable
Using a custom class in a broadcast variable (or capturing it in a closure) can fail with this serialization error. The fix is to have the class implement java.io.Serializable:
public class CollectionBean implements Serializable {
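A quick way to verify the fix locally is to round-trip the bean through plain Java serialization, which is essentially what Spark's default JavaSerializer does when it ships a closure or broadcast value to the executors. The sketch below is a minimal stand-in; the `name` field is an assumption for illustration, not the original bean's layout.

```java
import java.io.*;

public class CollectionBean implements Serializable {
    private static final long serialVersionUID = 1L;
    private final String name;  // hypothetical field, for illustration only

    public CollectionBean(String name) { this.name = name; }
    public String getName() { return name; }

    // Local check that the class survives Java serialization, i.e. what
    // Spark does when it transfers the value to an executor.
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(new CollectionBean("demo"));
        CollectionBean back = (CollectionBean) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(back.getName());  // prints "demo"
    }
}
```

If this round-trip throws NotSerializableException, Spark's task serialization will fail the same way, usually naming the offending field in the message.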
2. Broadcasting variables with a SparkSession
I searched for a long time for a way to broadcast a variable through a SparkSession. Some people get hold of the underlying SparkContext and broadcast with that, but SparkContext.broadcast takes a Scala ClassTag as its second argument:
ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
Broadcast<String> s = spark.sparkContext().broadcast(args[0], tag);
But the variable I wanted to broadcast was a Map containing a custom type, and a ClassTag cannot be created for a generic type. So the problem circled back to how to get a JavaSparkContext from a SparkSession:
Broadcast<Map<String, CollectionBean>> broadcastPlayList =
        JavaSparkContext.fromSparkContext(session.sparkContext()).broadcast(col);
3. UnsupportedOperationException when using a broadcast variable
java.lang.UnsupportedOperationException
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1276)
at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:206)
at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:66)
at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:66)
at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:96)
at UAARead.SparkHiveETL$2.call(SparkHiveETL.java:109)
at UAARead.SparkHiveETL$2.call(SparkHiveETL.java:99)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.foreach(WholeStageCodegenExec.scala:375)
at org.apache.spark.sql.hive.SparkHiveWriterContainer.writeToFile(hiveWriterContainers.scala:184)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:210)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:210)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException
at java.util.AbstractMap.put(AbstractMap.java:209)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:162)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:244)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$9.apply(TorrentBroadcast.scala:290)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1303)
at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:291)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:225)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1269)
... 25 more
Driver stacktrace:
Fix: create a new HashMap, putAll the map collected from the RDD into it, and broadcast the copy:
Map<String, CollectionBean> col = new HashMap<>();
col.putAll(collection_bean_rdd.collectAsMap());
The map returned by collectAsMap() is backed by AbstractMap, whose default put() throws UnsupportedOperationException, so Kryo fails when it tries to rebuild the map entry by entry during deserialization on the executors.
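The failure can be reproduced without Spark. The ReadOnlyMap below is a hypothetical stand-in for what collectAsMap() hands back: it extends AbstractMap and never overrides put(), so put() throws, which is exactly the java.util.AbstractMap.put frame in the trace. Copying into a plain HashMap sidesteps it.

```java
import java.util.*;

public class BroadcastMapFix {
    // Stand-in (assumption, not Spark's actual class) for the read-only Map
    // that collectAsMap() returns: AbstractMap's default put() throws.
    static class ReadOnlyMap extends AbstractMap<String, String> {
        private final Set<Entry<String, String>> entries;
        ReadOnlyMap(Map<String, String> src) {
            this.entries = new HashMap<>(src).entrySet();
        }
        @Override public Set<Entry<String, String>> entrySet() { return entries; }
    }

    public static void main(String[] args) {
        Map<String, String> fromRdd = new ReadOnlyMap(Map.of("k", "v"));
        try {
            fromRdd.put("x", "y");       // what Kryo does while deserializing
        } catch (UnsupportedOperationException e) {
            System.out.println("put() is unsupported");
        }
        // The fix: copy into a mutable HashMap before broadcasting.
        Map<String, String> col = new HashMap<>();
        col.putAll(fromRdd);             // safe to broadcast now
        col.put("x", "y");               // mutable, so Kryo can rebuild it
        System.out.println(col.size());  // prints 2
    }
}
```

Since HashMap overrides put(), Kryo's MapSerializer can repopulate the copy on the executor side without hitting AbstractMap's default implementation.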
4. The worst pitfall of all
Looking at the logs, I found that only half of the DAG showed up and the later statements never ran. After several days of hunting for the bug, it turned out that when I pasted SQL into session.sql(""), the trailing semicolon came along with it. No semicolon is allowed here: one statement per session.sql("") call!
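One way to make pasted scripts safe is to split them into individual statements first and issue one session.sql() call per piece. The helper below is a hypothetical sketch with a deliberately naive split: it assumes no semicolons appear inside string literals or comments.

```java
import java.util.*;

public class SqlStatementSplitter {
    // Split a pasted SQL script on ";" and drop empty pieces, so each
    // statement can go into its own session.sql(...) call with no
    // trailing semicolon. Naive: semicolons inside literals would break it.
    public static List<String> split(String script) {
        List<String> statements = new ArrayList<>();
        for (String part : script.split(";")) {
            String stmt = part.trim();
            if (!stmt.isEmpty()) {
                statements.add(stmt);
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        String pasted = "USE demo_db; SELECT * FROM t1;";
        for (String stmt : split(pasted)) {
            System.out.println(stmt);
            // session.sql(stmt);  // one statement per call, semicolon stripped
        }
    }
}
```

With this in place, a multi-statement script copied from a SQL client no longer silently truncates the job's DAG.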