Spark memory manager -- MemoryManager source code analysis
The MemoryManager
The memory manager is one of the most important foundational modules in the Spark core. Sorting during shuffle, RDD caching, unroll memory, broadcast variables, storing task results, and so on: anywhere memory is needed, a quota has to be requested from the memory manager. In my view, the memory manager's main purpose is to minimize memory overflow while maximizing memory utilization. Older versions of Spark used the static memory manager, StaticMemoryManager, while newer versions (since 1.6, if I remember correctly) switched to the unified memory manager, UnifiedMemoryManager. The biggest difference between the unified memory manager and the static one is that there is no fixed boundary between execution memory and storage memory: the two can borrow from each other, but execution memory has the higher priority. If execution memory runs short, it will squeeze out storage memory, spilling some cached RDD blocks to disk until enough space is freed. Execution memory, on the other hand, is never evicted. This makes sense: execution memory holds data being sorted during shuffle, which can only happen in memory, while the requirements for cached RDDs are not nearly as strict.
Several configuration parameters control how the different memory regions are sized:
- spark.memory.fraction, default 0.6. This parameter controls the fraction of heap memory managed by Spark's memory manager (more precisely, a fraction of the heap minus 300 MB, where the 300 MB is memory reserved by Spark). In other words, execution memory and storage memory together only get 0.6 of (heap - 300 MB); the remaining 0.4 is left for memory used while user code executes, for example loading large files into memory or doing sorts in your own code. Memory used by user code is not managed by the memory manager, so a certain share has to be reserved for it.
- spark.memory.storageFraction, default 0.5. As the name suggests, this value determines the share taken by storage memory; note that it is a share of the memory managed by the memory manager, and the remainder is used as execution memory. For example, with the defaults, storage memory is 0.6 * 0.5 = 0.3 of the heap (strictly speaking, of the heap minus 300 MB). A worked example follows this list.
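A rough worked example may help (a back-of-the-envelope sketch assuming a 4 GB executor heap and the default settings; the figures simply follow the description above, not the exact Spark source):

```scala
// Hypothetical calculation of the default memory regions for an assumed 4 GB heap.
object MemoryRegionsExample extends App {
  val heapBytes       = 4L * 1024 * 1024 * 1024 // 4 GB executor heap (assumption)
  val reservedBytes   = 300L * 1024 * 1024      // 300 MB reserved memory
  val memoryFraction  = 0.6                     // spark.memory.fraction
  val storageFraction = 0.5                     // spark.memory.storageFraction

  val usableBytes   = heapBytes - reservedBytes               // what Spark divides up
  val unifiedBytes  = (usableBytes * memoryFraction).toLong   // execution + storage
  val storageRegion = (unifiedBytes * storageFraction).toLong // initial storage region
  val userBytes     = usableBytes - unifiedBytes              // left for user code

  println(s"unified=$unifiedBytes storageRegion=$storageRegion user=$userBytes")
}
```

With a 4 GB heap this works out to roughly 2.2 GB of managed memory, about half of which starts out as the storage region.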
MemoryManager overview
Let's first look at the MemoryManager class as a whole.
```scala
maxOnHeapStorageMemory
maxOffHeapStorageMemory
setMemoryStore
acquireStorageMemory
acquireUnrollMemory
acquireExecutionMemory
releaseExecutionMemory
releaseAllExecutionMemoryForTask
releaseStorageMemory
releaseAllStorageMemory
releaseUnrollMemory
executionMemoryUsed
storageMemoryUsed
getExecutionMemoryUsageForTask
```
You can see that MemoryManager has relatively few methods and that they follow a regular pattern. Functionally it divides memory into three kinds: storage memory, unroll memory, and execution memory.
For each of these there is a method to acquire memory and a method to release it, and the three acquire methods are all abstract, implemented by subclasses.
In addition, let's look at the member variables inside MemoryManager:
```scala
protected val onHeapStorageMemoryPool = new StorageMemoryPool(this, MemoryMode.ON_HEAP)
protected val offHeapStorageMemoryPool = new StorageMemoryPool(this, MemoryMode.OFF_HEAP)
protected val onHeapExecutionMemoryPool = new ExecutionMemoryPool(this, MemoryMode.ON_HEAP)
protected val offHeapExecutionMemoryPool = new ExecutionMemoryPool(this, MemoryMode.OFF_HEAP)
```
These four members represent the four memory pools. One thing to note: the MemoryPool constructor takes an Object parameter that is used as a synchronization lock; some of the methods inside MemoryPool take this object's lock for synchronization.
Here is how they are initialized:
```scala
onHeapStorageMemoryPool.incrementPoolSize(onHeapStorageMemory)
onHeapExecutionMemoryPool.incrementPoolSize(onHeapExecutionMemory)
offHeapExecutionMemoryPool.incrementPoolSize(maxOffHeapMemory - offHeapStorageMemory)
offHeapStorageMemoryPool.incrementPoolSize(offHeapStorageMemory)
```
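Where do the sizes passed to these calls come from? Roughly speaking they are derived from the fractions discussed at the beginning; the helper below is an illustrative sketch of that derivation (the real logic lives in UnifiedMemoryManager's companion object; the function name and signature here are made up for illustration):

```scala
// Illustrative sketch of how the initial pool sizes relate to the configuration (not the verbatim source).
def initialPoolSizes(
    heapBytes: Long,
    maxOffHeapBytes: Long,
    memoryFraction: Double = 0.6,   // spark.memory.fraction
    storageFraction: Double = 0.5   // spark.memory.storageFraction
): (Long, Long, Long, Long) = {
  val reserved = 300L * 1024 * 1024
  val maxHeapMemory = ((heapBytes - reserved) * memoryFraction).toLong
  val onHeapStorageRegionSize = (maxHeapMemory * storageFraction).toLong
  val onHeapExecutionSize = maxHeapMemory - onHeapStorageRegionSize
  val offHeapStorageSize = (maxOffHeapBytes * storageFraction).toLong
  val offHeapExecutionSize = maxOffHeapBytes - offHeapStorageSize
  (onHeapStorageRegionSize, onHeapExecutionSize, offHeapStorageSize, offHeapExecutionSize)
}
```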
MemoryManager.releaseExecutionMemory
This simply calls the corresponding ExecutionMemoryPool method:
```scala
private[memory] def releaseExecutionMemory(
    numBytes: Long,
    taskAttemptId: Long,
    memoryMode: MemoryMode): Unit = synchronized {
  memoryMode match {
    case MemoryMode.ON_HEAP => onHeapExecutionMemoryPool.releaseMemory(numBytes, taskAttemptId)
    case MemoryMode.OFF_HEAP => offHeapExecutionMemoryPool.releaseMemory(numBytes, taskAttemptId)
  }
}
```
ExecutionMemoryPool.releaseMemory
The logic is simple, so I won't say much about it.
In fact, this method already shows what Spark memory management really means: at bottom it is the recording and management of memory usage amounts, rather than actual allocation and reclamation of memory the way an operating system or the JVM does it.
```scala
def releaseMemory(numBytes: Long, taskAttemptId: Long): Unit = lock.synchronized {
  // Look up how much memory this task holds according to the internal bookkeeping
  val curMem = memoryForTask.getOrElse(taskAttemptId, 0L)
  // Check whether the amount to release exceeds what the task actually holds, and log a warning if so
  var memoryToFree = if (curMem < numBytes) {
    logWarning(
      s"Internal error: release called on $numBytes bytes but task only has $curMem bytes " +
        s"of memory from the $poolName pool")
    curMem
  } else {
    numBytes
  }
  if (memoryForTask.contains(taskAttemptId)) {
    // Update the bookkeeping
    memoryForTask(taskAttemptId) -= memoryToFree
    // If the task's memory usage drops to zero or below, remove it from the bookkeeping
    if (memoryForTask(taskAttemptId) <= 0) {
      memoryForTask.remove(taskAttemptId)
    }
  }
  // Finally, notify other waiting threads,
  // since other tasks may be waiting to acquire execution memory
  lock.notifyAll() // Notify waiters in acquireMemory() that memory has been freed
}
```
MemoryManager.releaseAllExecutionMemoryForTask
This releases the memory the task holds in both the on-heap execution pool and the off-heap (direct memory) execution pool.
onHeapExecutionMemoryPool and offHeapExecutionMemoryPool are instances of the same class; one tracks execution memory usage against direct memory, the other tracks execution memory usage against heap memory.
```scala
private[memory] def releaseAllExecutionMemoryForTask(taskAttemptId: Long): Long = synchronized {
  onHeapExecutionMemoryPool.releaseAllMemoryForTask(taskAttemptId) +
    offHeapExecutionMemoryPool.releaseAllMemoryForTask(taskAttemptId)
}
```
MemoryManager.releaseStorageMemory
Storage memory usage is tracked less finely than execution memory: the manager does not record how much memory each RDD uses.
```scala
def releaseStorageMemory(numBytes: Long, memoryMode: MemoryMode): Unit = synchronized {
  memoryMode match {
    case MemoryMode.ON_HEAP => onHeapStorageMemoryPool.releaseMemory(numBytes)
    case MemoryMode.OFF_HEAP => offHeapStorageMemoryPool.releaseMemory(numBytes)
  }
}
```
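That coarser tracking is visible in StorageMemoryPool itself, which keeps a single aggregate usage counter rather than a per-task or per-RDD map. The class below is a small standalone sketch of that bookkeeping style (an illustration with a made-up name, not the real StorageMemoryPool):

```scala
// Illustrative stand-in for StorageMemoryPool's aggregate bookkeeping.
// A single counter is adjusted; there is no record of which RDD or block the bytes belong to.
class StorageBookkeepingSketch(val poolSize: Long) {
  private var _memoryUsed: Long = 0L

  def memoryUsed: Long = synchronized { _memoryUsed }
  def memoryFree: Long = synchronized { poolSize - _memoryUsed }

  def acquireMemory(size: Long): Boolean = synchronized {
    val enough = size <= poolSize - _memoryUsed
    if (enough) _memoryUsed += size
    enough
  }

  def releaseMemory(size: Long): Unit = synchronized {
    if (size > _memoryUsed) {
      // The real pool logs a warning when asked to release more than it holds
      Console.err.println(s"Attempted to release $size bytes but only ${_memoryUsed} bytes are used")
      _memoryUsed = 0L
    } else {
      _memoryUsed -= size
    }
  }
}
```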
MemoryManager.releaseUnrollMemory
Here, looking at the method that releases unroll memory, we find that unroll memory is simply storage memory. Recalling the BlockManager part of this series: unroll memory is requested mainly when data is stored as a block through the MemoryStore and needs to be held in memory temporarily; that is when unroll memory is acquired.
```scala
final def releaseUnrollMemory(numBytes: Long, memoryMode: MemoryMode): Unit = synchronized {
  releaseStorageMemory(numBytes, memoryMode)
}
```
Interim summary
From the release methods analyzed above, it is easy to see that so-called releasing of memory is nothing more than updating some bookkeeping counters inside the memory manager. This requires external callers to guarantee that they really did free that much memory, otherwise the memory accounting would diverge badly from actual memory usage. Fortunately, the memory manager is an internal Spark module and is not exposed to users, so user code never calls into it.
UnifiedMemoryManager
As mentioned at the beginning, Spark has had two memory managers. Newer versions use the unified memory manager, UnifiedMemoryManager, by default, while the static memory manager is gradually being retired, so here we focus on unified memory management.
Earlier we analyzed the release methods in the parent class MemoryManager; the acquire methods are all abstract there and are implemented in the subclass, that is, in UnifiedMemoryManager.
UnifiedMemoryManager.acquireExecutionMemory
This method acquires execution memory. It defines a couple of local methods: maybeGrowExecutionPool grows the execution pool by evicting storage memory, and computeMaxExecutionPoolSize computes the maximum size of the execution pool. Finally it calls executionPool.acquireMemory to actually acquire the execution memory.
```scala
override private[memory] def acquireExecutionMemory(
    numBytes: Long,
    taskAttemptId: Long,
    memoryMode: MemoryMode): Long = synchronized {
  // Sanity-check the memory sizes
  assertInvariants()
  assert(numBytes >= 0)
  // Pick the pools and sizes depending on whether heap or direct memory is used
  val (executionPool, storagePool, storageRegionSize, maxMemory) = memoryMode match {
    case MemoryMode.ON_HEAP => (
      onHeapExecutionMemoryPool,
      onHeapStorageMemoryPool,
      onHeapStorageRegionSize,
      maxHeapMemory)
    case MemoryMode.OFF_HEAP => (
      offHeapExecutionMemoryPool,
      offHeapStorageMemoryPool,
      offHeapStorageMemory,
      maxOffHeapMemory)
  }

  /**
   * Grow the execution pool by evicting cached blocks, thereby shrinking the storage pool.
   *
   * When acquiring memory for a task, the execution pool may need to make multiple
   * attempts. Each attempt must be able to evict storage in case another task jumps in
   * and caches a large block between the attempts. This is called once per attempt.
   */
  // Grow execution memory at the expense of storage memory:
  // spill cached blocks to disk to make room for execution memory
  def maybeGrowExecutionPool(extraMemoryNeeded: Long): Unit = {
    if (extraMemoryNeeded > 0) {
      // There is not enough free memory in the execution pool, so try to reclaim memory from
      // storage. We can reclaim any free memory from the storage pool. If the storage pool
      // has grown to become larger than `storageRegionSize`, we can evict blocks and reclaim
      // the memory that storage has borrowed from execution.
      // We can borrow all of the storage pool's free memory for execution;
      // in addition, if storage previously borrowed memory from execution (so the storage pool
      // is currently larger than its configured size), we can take all of that borrowed memory back.
      val memoryReclaimableFromStorage = math.max(
        storagePool.memoryFree,
        storagePool.poolSize - storageRegionSize)
      if (memoryReclaimableFromStorage > 0) {
        // Only reclaim as much space as is necessary and available:
        // only free as much as is needed; this call may push blocks out of memory to disk
        val spaceToReclaim = storagePool.freeSpaceToShrinkPool(
          math.min(extraMemoryNeeded, memoryReclaimableFromStorage))
        // Update the bookkeeping: storage shrinks and execution grows by the same amount
        storagePool.decrementPoolSize(spaceToReclaim)
        executionPool.incrementPoolSize(spaceToReclaim)
      }
    }
  }

  /**
   * The size the execution pool would have after evicting storage memory.
   *
   * The execution memory pool divides this quantity among the active tasks evenly to cap
   * the execution memory allocation for each task. It is important to keep this greater
   * than the execution pool size, which doesn't take into account potential memory that
   * could be freed by evicting storage. Otherwise we may hit SPARK-12155.
   *
   * Additionally, this quantity should be kept below `maxMemory` to arbitrate fairness
   * in execution memory allocation across tasks, otherwise, a task may occupy more than
   * its fair share of execution memory, mistakenly thinking that other tasks can acquire
   * the portion of storage memory that cannot be evicted.
   */
  def computeMaxExecutionPoolSize(): Long = {
    maxMemory - math.min(storagePool.memoryUsed, storageRegionSize)
  }

  executionPool.acquireMemory(
    numBytes, taskAttemptId, maybeGrowExecutionPool, () => computeMaxExecutionPoolSize)
}
```
ExecutionMemoryPool.acquireMemory
I won't paste this method's code. It mostly consists of fairly involved rules for computing how much memory a task may be granted, plus maintenance of internal bookkeeping; in addition, if the currently available memory is too small, the task will wait (on the object lock) until other tasks release some memory.
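For intuition, the fair-share rule works roughly like this: with N active tasks, each task may hold between 1/(2N) and 1/N of the pool, and a task that cannot yet reach its minimum share waits on the lock until memory is freed. The standalone class below is a much-simplified sketch of that loop (illustrative only; FairSharePoolSketch is a made-up name, and many details of the real ExecutionMemoryPool are omitted):

```scala
import scala.collection.mutable

// Much-simplified sketch of ExecutionMemoryPool's per-task fair-share logic.
class FairSharePoolSketch(val poolSize: Long) {
  private val lock = new Object
  private val memoryForTask = mutable.HashMap[Long, Long]()

  private def memoryUsed: Long = memoryForTask.values.sum
  private def memoryFree: Long = poolSize - memoryUsed

  // Returns the number of bytes actually granted (possibly fewer than requested).
  def acquireMemory(numBytes: Long, taskAttemptId: Long): Long = lock.synchronized {
    if (!memoryForTask.contains(taskAttemptId)) {
      memoryForTask(taskAttemptId) = 0L
      lock.notifyAll() // a new task changes every task's fair share
    }
    while (true) {
      val numActiveTasks = memoryForTask.size
      val curMem = memoryForTask(taskAttemptId)
      val maxPerTask = poolSize / numActiveTasks        // cap: 1/N of the pool
      val minPerTask = poolSize / (2 * numActiveTasks)  // floor: 1/(2N) of the pool
      // Grant no more than the cap allows and no more than is currently free
      val toGrant = math.min(numBytes, math.max(0L, maxPerTask - curMem)).min(memoryFree)
      if (toGrant < numBytes && curMem + toGrant < minPerTask) {
        // Cannot yet reach the minimum share: wait until another task releases memory
        lock.wait()
      } else {
        memoryForTask(taskAttemptId) = curMem + toGrant
        return toGrant
      }
    }
    0L // never reached
  }

  def releaseMemory(numBytes: Long, taskAttemptId: Long): Unit = lock.synchronized {
    val remaining = math.max(0L, memoryForTask.getOrElse(taskAttemptId, 0L) - numBytes)
    if (remaining == 0L) memoryForTask.remove(taskAttemptId)
    else memoryForTask(taskAttemptId) = remaining
    lock.notifyAll() // wake up tasks waiting in acquireMemory
  }
}
```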
Apart from that, the most important part is the call to the maybeGrowExecutionPool method mentioned above, so that is what we will look at next.
maybeGrowExecutionPool
Since this method was already shown above with detailed comments, I'll skip the walk-through. The key call inside it is storagePool.freeSpaceToShrinkPool, which implements the logic of evicting blocks from memory.
StorageMemoryPool.freeSpaceToShrinkPool
We find that it calls memoryStore.evictBlocksToFreeSpace:
```scala
def freeSpaceToShrinkPool(spaceToFree: Long): Long = lock.synchronized {
  val spaceFreedByReleasingUnusedMemory = math.min(spaceToFree, memoryFree)
  val remainingSpaceToFree = spaceToFree - spaceFreedByReleasingUnusedMemory
  if (remainingSpaceToFree > 0) {
    // If reclaiming free memory did not adequately shrink the pool, begin evicting blocks:
    val spaceFreedByEviction =
      memoryStore.evictBlocksToFreeSpace(None, remainingSpaceToFree, memoryMode)
    // When a block is released, BlockManager.dropFromMemory() calls releaseMemory(), so we do
    // not need to decrement _memoryUsed here. However, we do need to decrement the pool size.
    spaceFreedByReleasingUnusedMemory + spaceFreedByEviction
  } else {
    spaceFreedByReleasingUnusedMemory
  }
}
```
MemoryStore.evictBlocksToFreeSpace
This method looks long, but it roughly boils down to one point.
Because the MemoryStore holds the actual data of all blocks in memory, it knows each block's actual size and can therefore work out which blocks need to be evicted. There are some details along the way, of course, such as acquiring and releasing the write locks of the candidate blocks.
The code that actually releases a block from memory (essentially dropping the reference to the block's MemoryEntry so that the GC can reclaim it) is implemented in BlockEvictionHandler.dropFromMemory, that is, in BlockManager.dropFromMemory.
```scala
private[spark] def evictBlocksToFreeSpace(
    blockId: Option[BlockId],
    space: Long,
    memoryMode: MemoryMode): Long = {
  assert(space > 0)
  memoryManager.synchronized {
    var freedMemory = 0L
    val rddToAdd = blockId.flatMap(getRddId)
    val selectedBlocks = new ArrayBuffer[BlockId]
    def blockIsEvictable(blockId: BlockId, entry: MemoryEntry[_]): Boolean = {
      entry.memoryMode == memoryMode && (rddToAdd.isEmpty || rddToAdd != getRddId(blockId))
    }
    // This is synchronized to ensure that the set of entries is not changed
    // (because of getValue or getBytes) while traversing the iterator, as that
    // can lead to exceptions.
    entries.synchronized {
      val iterator = entries.entrySet().iterator()
      while (freedMemory < space && iterator.hasNext) {
        val pair = iterator.next()
        val blockId = pair.getKey
        val entry = pair.getValue
        if (blockIsEvictable(blockId, entry)) {
          // We don't want to evict blocks which are currently being read, so we need to obtain
          // an exclusive write lock on blocks which are candidates for eviction. We perform a
          // non-blocking "tryLock" here in order to ignore blocks which are locked for reading:
          // The write lock is taken here to prevent a block from being evicted
          // while it is being read or written.
          if (blockInfoManager.lockForWriting(blockId, blocking = false).isDefined) {
            selectedBlocks += blockId
            freedMemory += pair.getValue.size
          }
        }
      }
    }

    def dropBlock[T](blockId: BlockId, entry: MemoryEntry[T]): Unit = {
      val data = entry match {
        case DeserializedMemoryEntry(values, _, _) => Left(values)
        case SerializedMemoryEntry(buffer, _, _) => Right(buffer)
      }
      // This call evicts the block from memory, spilling it to disk if the storage level allows.
      // Note that the implementation of BlockEvictionHandler is BlockManager.
      val newEffectiveStorageLevel =
        blockEvictionHandler.dropFromMemory(blockId, () => data)(entry.classTag)
      if (newEffectiveStorageLevel.isValid) {
        // The block is still present in at least one store, so release the lock
        // but don't delete the block info.
        // The write locks acquired earlier have not been released yet,
        // so release this block's write lock here.
        blockInfoManager.unlock(blockId)
      } else {
        // The block isn't present in any store, so delete the block info so that the
        // block can be stored again.
        // The block was removed from memory without being written to disk,
        // so remove its info from the internal bookkeeping directly.
        blockInfoManager.removeBlock(blockId)
      }
    }

    // Only if enough memory was freed up, at least as much as requested, do we actually drop the blocks
    if (freedMemory >= space) {
      var lastSuccessfulBlock = -1
      try {
        logInfo(s"${selectedBlocks.size} blocks selected for dropping " +
          s"(${Utils.bytesToString(freedMemory)} bytes)")
        (0 until selectedBlocks.size).foreach { idx =>
          val blockId = selectedBlocks(idx)
          val entry = entries.synchronized {
            entries.get(blockId)
          }
          // This should never be null as only one task should be dropping
          // blocks and removing entries. However the check is still here for
          // future safety.
          if (entry != null) {
            dropBlock(blockId, entry)
            // A hook method left in place for tests
            afterDropAction(blockId)
          }
          lastSuccessfulBlock = idx
        }
        logInfo(s"After dropping ${selectedBlocks.size} blocks, " +
          s"free memory is ${Utils.bytesToString(maxMemory - blocksMemoryUsed)}")
        freedMemory
      } finally {
        // Like BlockManager.doPut, we use a finally rather than a catch to avoid having to deal
        // with InterruptedException
        // If not every block was dropped successfully, some blocks may still hold their write locks,
        // so release the write locks of the blocks that were not processed.
        if (lastSuccessfulBlock != selectedBlocks.size - 1) {
          // The blocks we didn't process successfully are still locked, so we have to unlock them
          (lastSuccessfulBlock + 1 until selectedBlocks.size).foreach { idx =>
            val blockId = selectedBlocks(idx)
            blockInfoManager.unlock(blockId)
          }
        }
      }
    } else {
      // Not enough memory can be freed, so abort the attempt and release the write locks already held
      blockId.foreach { id =>
        logInfo(s"Will not store $id")
      }
      selectedBlocks.foreach { id =>
        blockInfoManager.unlock(id)
      }
      0L
    }
  }
}
```
BlockManager.dropFromMemory
To summarize the main logic of this method:
- If the storage level allows disk, first spill the block to disk
- Remove the block from the MemoryStore's internal map
- Report the block update to the BlockManagerMaster on the driver
- Report the block-update statistics to the task metrics system
So, after all this winding around, so-called memory eviction mostly just means setting a reference to null ^_^ Of course it is not quite that simple. As we have seen throughout this analysis, most of the work of memory management is maintaining the bookkeeping of what each task is using, and some of that logic is genuinely complex, for example the rules that decide how much memory each task may be given.
```scala
private[storage] override def dropFromMemory[T: ClassTag](
    blockId: BlockId,
    data: () => Either[Array[T], ChunkedByteBuffer]): StorageLevel = {
  logInfo(s"Dropping block $blockId from memory")
  val info = blockInfoManager.assertBlockIsLockedForWriting(blockId)
  var blockIsUpdated = false
  val level = info.level

  // Drop to disk, if storage level requires
  // If the storage level allows disk, spill the block to disk first
  if (level.useDisk && !diskStore.contains(blockId)) {
    logInfo(s"Writing block $blockId to disk")
    data() match {
      case Left(elements) =>
        diskStore.put(blockId) { channel =>
          val out = Channels.newOutputStream(channel)
          serializerManager.dataSerializeStream(
            blockId,
            out,
            elements.toIterator)(info.classTag.asInstanceOf[ClassTag[T]])
        }
      case Right(bytes) =>
        diskStore.putBytes(blockId, bytes)
    }
    blockIsUpdated = true
  }

  // Actually drop from memory store
  val droppedMemorySize =
    if (memoryStore.contains(blockId)) memoryStore.getSize(blockId) else 0L
  val blockIsRemoved = memoryStore.remove(blockId)
  if (blockIsRemoved) {
    blockIsUpdated = true
  } else {
    logWarning(s"Block $blockId could not be dropped from memory as it does not exist")
  }

  val status = getCurrentBlockStatus(blockId, info)
  if (info.tellMaster) {
    reportBlockStatus(blockId, status, droppedMemorySize)
  }
  // Report the block-update statistics to the task metrics system
  if (blockIsUpdated) {
    addUpdatedBlockStatusToTaskMetrics(blockId, status)
  }
  status.storageLevel
}
```
UnifiedMemoryManager.acquireStorageMemory
Now let's look at acquiring storage memory.
The logic for storage memory borrowing from execution memory is relatively simple: just change the sizes of the two pools, shrinking the execution pool by a certain amount and growing the storage pool by the same amount.
```scala
override def acquireStorageMemory(
    blockId: BlockId,
    numBytes: Long,
    memoryMode: MemoryMode): Boolean = synchronized {
  assertInvariants()
  assert(numBytes >= 0)
  val (executionPool, storagePool, maxMemory) = memoryMode match {
    case MemoryMode.ON_HEAP => (
      onHeapExecutionMemoryPool,
      onHeapStorageMemoryPool,
      maxOnHeapStorageMemory)
    case MemoryMode.OFF_HEAP => (
      offHeapExecutionMemoryPool,
      offHeapStorageMemoryPool,
      maxOffHeapStorageMemory)
  }
  // Execution memory cannot be evicted, so if the request exceeds the maximum
  // possible storage memory it simply cannot be satisfied
  if (numBytes > maxMemory) {
    // Fail fast if the block simply won't fit
    logInfo(s"Will not store $blockId as the required space ($numBytes bytes) exceeds our " +
      s"memory limit ($maxMemory bytes)")
    return false
  }
  // If the request is larger than the storage pool's free memory, borrow some from the execution pool
  if (numBytes > storagePool.memoryFree) {
    // There is not enough free memory in the storage pool, so try to borrow free memory from
    // the execution pool.
    val memoryBorrowedFromExecution = math.min(executionPool.memoryFree,
      numBytes - storagePool.memoryFree)
    // Borrowing from execution is simple:
    // just adjust the two pool sizes,
    // shrinking the execution pool and growing the storage pool by the same amount
    executionPool.decrementPoolSize(memoryBorrowedFromExecution)
    storagePool.incrementPoolSize(memoryBorrowedFromExecution)
  }
  // Acquire the requested amount through the storage pool
  storagePool.acquireMemory(blockId, numBytes)
}
```
StorageMemoryPool.acquireMemory
```scala
def acquireMemory(
    blockId: BlockId,
    numBytesToAcquire: Long,
    numBytesToFree: Long): Boolean = lock.synchronized {
  assert(numBytesToAcquire >= 0)
  assert(numBytesToFree >= 0)
  assert(memoryUsed <= poolSize)
  // First ask the MemoryStore to evict some blocks to free up memory
  if (numBytesToFree > 0) {
    memoryStore.evictBlocksToFreeSpace(Some(blockId), numBytesToFree, memoryMode)
  }
  // NOTE: If the memory store evicts blocks, then those evictions will synchronously call
  // back into this StorageMemoryPool in order to free memory. Therefore, these variables
  // should have been updated.
  // When blocks are evicted above, the BlockManager frees memory through the MemoryManager,
  // which updates the internal bookkeeping, so memoryFree here will have grown accordingly.
  val enoughMemory = numBytesToAcquire <= memoryFree
  if (enoughMemory) {
    _memoryUsed += numBytesToAcquire
  }
  enoughMemory
}
```
As you can see, memoryStore.evictBlocksToFreeSpace is called here as well, to evict some blocks from memory and make room for the new block.
UnifiedMemoryManager.acquireUnrollMemory
There is also the method for acquiring unroll memory, which in practice just acquires storage memory.
```scala
override def acquireUnrollMemory(
    blockId: BlockId,
    numBytes: Long,
    memoryMode: MemoryMode): Boolean = synchronized {
  acquireStorageMemory(blockId, numBytes, memoryMode)
}
```
Summary
Memory management is, at its core, bookkeeping of the memory used for sorting during shuffle and the memory used for RDD caching. Through detailed and precise recording and management of memory usage, it tries to avoid OOM as much as possible while keeping memory utilization high.