SparkSQL in IDEA fails with: org.apache.spark.SparkException: A master URL must be set in your configuration
程序员文章站
2022-07-12 14:20:13
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/08/06 10:33:42 INFO SparkContext: Running Spark version 2.4.4
20/08/06 10:33:42 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:368)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at com.day2.HelloSQL$.main(HelloSQL.scala:14)
at com.day2.HelloSQL.main(HelloSQL.scala)
20/08/06 10:33:42 ERROR Utils: Uncaught exception in thread main
java.lang.NullPointerException
at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$postApplicationEnd(SparkContext.scala:2416)
at org.apache.spark.SparkContext$$anonfun$stop$1.apply$mcV$sp(SparkContext.scala:1931)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1930)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:585)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at com.day2.HelloSQL$.main(HelloSQL.scala:14)
at com.day2.HelloSQL.main(HelloSQL.scala)
20/08/06 10:33:42 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:368)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at com.day2.HelloSQL$.main(HelloSQL.scala:14)
at com.day2.HelloSQL.main(HelloSQL.scala)
Process finished with exit code 1
The full error log is shown above.

Solution

In IDEA, open Run → Edit Configurations and add the following to the VM options of the run configuration:

-Dspark.master=local
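Alternatively, the master can be set directly in code on the SparkSession builder, which avoids touching the IDE configuration. The original HelloSQL source is not shown in the post, so this is a minimal sketch of what line 14 of HelloSQL.scala presumably looks like, with the missing .master(...) call added:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical reconstruction of com.day2.HelloSQL from the stack trace.
object HelloSQL {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HelloSQL")
      .master("local[*]")   // run locally with all available cores; without this
                            // (or -Dspark.master), getOrCreate() throws the error above
      .getOrCreate()

    spark.range(5).show()   // trivial action to confirm the session works

    spark.stop()
  }
}
```

Setting the master in code takes precedence over the spark.master system property, so this hard-codes local mode; for code that will also be submitted to a cluster, leaving .master() out and supplying the master externally is usually preferable.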
Cause

When a Spark application is submitted to a cluster with spark-submit, the master URL is normally supplied on the command line (for example --master yarn), so the code does not need to set it. When the same program is launched directly from IDEA, nothing sets spark.master at all, and SparkContext refuses to initialize. For local testing the master must therefore be set explicitly to local mode, either via the VM option above or in the code itself.
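For reference, these are the common ways a master URL is supplied outside the IDE; the host names and core counts below are placeholders, not values from the original post:

```shell
# Local mode: 1 thread, N threads, or all available cores
spark-submit --master local        --class com.day2.HelloSQL app.jar
spark-submit --master local[4]     --class com.day2.HelloSQL app.jar
spark-submit --master "local[*]"   --class com.day2.HelloSQL app.jar

# Cluster modes (placeholder hosts/ports)
spark-submit --master yarn                    --class com.day2.HelloSQL app.jar
spark-submit --master spark://master-host:7077 --class com.day2.HelloSQL app.jar
```

Inside IDEA, -Dspark.master=local plays the same role as the --master flag: it sets the spark.master property before SparkContext starts.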