Spark java.lang.OutOfMemoryError: Java heap space
Source: http://*.com/questions/21138751/spark-java-lang-outofmemoryerror-java-heap-space?rq=1
My cluster: 1 master, 11 slaves, each node has 6 GB memory.
My settings:
spark.executor.memory=4g, -Dspark.akka.frameSize=512
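(For reference: a minimal sketch of how these two settings can be applied through SparkConf in Spark 0.9+; the app name and master URL below are placeholders, not taken from the question.)

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder app name and master URL; only the two settings mirror the question.
val conf = new SparkConf()
  .setAppName("imageBundleJob")
  .setMaster("spark://master:7077")
  .set("spark.executor.memory", "4g")
  .set("spark.akka.frameSize", "512")  // equivalent of passing -Dspark.akka.frameSize=512
val sc = new SparkContext(conf)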
Here is the problem:
First, I read some data (2.19 GB) from HDFS to RDD:
val imageBundleRDD = sc.newAPIHadoopFile(...)
Second, do something on this RDD:
val res = imageBundleRDD.map(data => {
  val desPoints = threeDReconstruction(data._2, bg)
  (data._1, desPoints)
})
Last, output to HDFS:
res.saveAsNewAPIHadoopFile(...)
When I run my program it shows:
.....
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:36 as 30618515 bytes in 449 ms
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Starting task 1.0:32 as TID 35 on executor 7: Salve4.Hadoop (NODE_LOCAL)
Uncaught error from thread [spark-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[spark]
java.lang.OutOfMemoryError: Java heap space
Are there too many tasks?
PS: Everything is OK when the input data is about 225 MB.
How can I solve this problem?
I have a few suggestions:
- If your nodes are configured to allow Spark a maximum of 6 GB (and are leaving a little for other processes), then use 6g rather than 4g, i.e. spark.executor.memory=6g. Make sure you're using as much memory as possible by checking the UI (it will say how much memory you're using).
- Try using more partitions; you should have 2 - 4 per CPU. IME, increasing the number of partitions is often the easiest way to make a program more stable (and often faster). For huge amounts of data you may need far more than 4 per CPU; I've had to use 8000 partitions in some cases!
- Decrease the fraction of memory reserved for caching, using spark.storage.memoryFraction. If you don't use cache() or persist in your code, this might as well be 0. Its default is 0.6, which means you only get 0.4 * 4g of memory for your heap. IME, reducing the memory fraction often makes OOMs go away. UPDATE: from Spark 1.6 we apparently no longer need to play with these values; Spark determines them automatically.
- Similar to the above, but for the shuffle memory fraction. If your job doesn't need much shuffle memory, set it to a lower value (this might cause your shuffles to spill to disk, which can have a catastrophic impact on speed). Sometimes, when it's a shuffle operation that is OOMing, you need to do the opposite, i.e. set it to something large like 0.8, or make sure you allow your shuffles to spill to disk (the default since 1.0.0). The first sketch after this list shows how these memory settings fit together.
- Watch out for memory leaks; these are often caused by accidentally closing over objects you don't need in your lambdas. The way to diagnose this is to look for "task serialized as XXX bytes" in the logs; if XXX is more than a few KB or an MB, you may have a memory leak. See http://*.com/a/25270600/1586965
- Related to the above: use broadcast variables if you really do need large objects (the second sketch after this list shows the pattern).
- If you are caching large RDDs and can sacrifice some access time, consider serialising the RDD (http://spark.apache.org/docs/latest/tuning.html#serialized-rdd-storage), or even caching it on disk (which sometimes isn't that bad when using SSDs).
- (Advanced) Related to the above, avoid String and heavily nested structures (like Map and nested case classes). If possible, try to use only primitive types and index all non-primitives, especially if you expect a lot of duplicates. Choose WrappedArray over nested structures whenever possible. Or even roll your own serialisation - YOU have the most information about how to pack your data into bytes efficiently, USE IT!
- (A bit hacky) Again when caching, consider using a Dataset to cache your structure, as it will use more efficient serialisation. This should be regarded as a hack compared to the previous bullet point. Building your domain knowledge into your algorithm/serialisation can minimise memory/cache space by 100x or 1000x, whereas all a Dataset will likely give you is 2x - 5x in memory and 10x compressed (Parquet) on disk.
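To make the first few suggestions concrete, here is a rough sketch of what that tuning could look like for a job of this shape. It assumes the pre-1.6 memory-fraction keys discussed above; the app name, master URL, input path and cores-per-node count are placeholders rather than facts from the question, and the values are starting points, not recommendations.

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative configuration only; placeholders are marked in the comments.
val conf = new SparkConf()
  .setAppName("imageBundleJob")                  // placeholder app name
  .setMaster("spark://master:7077")              // placeholder master URL
  .set("spark.executor.memory", "6g")            // give Spark the full 6 GB the nodes can spare
  .set("spark.storage.memoryFraction", "0.1")    // little or no cache()/persist in this job
  .set("spark.shuffle.memoryFraction", "0.2")    // lower it if the job barely shuffles
val sc = new SparkContext(conf)

// Aim for roughly 2 - 4 partitions per CPU across the cluster.
val numPartitions = 11 /* workers */ * 4 /* cores each, assumed */ * 3
val input = sc.textFile("hdfs:///path/to/input", numPartitions)  // placeholder path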
The full set of configuration options is documented at http://spark.apache.org/docs/1.2.1/configuration.html
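To illustrate the broadcast-variable and serialised-caching suggestions, here is a sketch in the same spirit. The lookup table and the process function are hypothetical stand-ins for whatever large object and per-record work the real job involves; sc and imageBundleRDD are as in the question.

import org.apache.spark.storage.StorageLevel

// A large driver-side object captured in a closure is re-serialised into every task
// (which is what inflates the "Serialized task ... bytes" figures in the logs above).
// Broadcasting ships it to each executor once instead.
val lookupTable: Map[String, Int] = loadLookupTable()   // hypothetical large object
val lookupBc = sc.broadcast(lookupTable)

val res = imageBundleRDD.map { case (key, bundle) =>
  (key, process(bundle, lookupBc.value))                // hypothetical per-record work
}

// If the result is reused and memory is tight, cache it serialised (with spill to disk)
// rather than as deserialised Java objects.
res.persist(StorageLevel.MEMORY_AND_DISK_SER)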